Only two countries on Earth possess thousands of nuclear warheads: the United States and Russia. Together, they account for 95 percent of the existing 20,500 weapons; no other nation has more than a few hundred. Despite the new U.S.-Russia strategic arms limitation treaty, there is plenty of room for deeper reductions in these two arsenals, including tactical nuclear weapons, which have never been covered by a treaty, and strategic nuclear weapons held in reserve.
This December will mark the 20th anniversary of the Soviet collapse and the end of the Cold War, a largely peaceful finale to an enormous, costly competition between two blocs and two colossal military machines. Today’s threats are different: terrorism, cyber attacks, pandemics, proliferation and conventional wars. As Leon Panetta told the Senate Armed Services Committee at his confirmation hearing to be Secretary of Defense: “We are no longer in the Cold War. This is more like the blizzard war, a blizzard of challenges that draw speed and intensity from terrorism, from rapidly developing technologies and the rising number of powers on the world stage.”
Yet the United States and Russia, no longer adversaries, seem to be sleepwalking toward the future. Perhaps the drift is the result of the approaching election season in both countries. Unfortunately, politics makes it harder to embrace new thinking. But honestly, haven’t we learned anything in two decades?
In 1989, Mikhail Gorbachev permitted elections for the first popularly elected legislature in Soviet history. The Communist Party still dominated, but about a third of the seats in the 2,250-member chamber were open, and in many of them, establishment party members were booted out. When the first session of the new Congress of People's Deputies opened on May 25, the nation was mesmerized by the televised proceedings. Work stopped on factory floors as millions of people witnessed an astonishing new phase in Gorbachev's revolution from above -- open criticism of the powers that be.
Read the rest of the article here.
It’s no secret: the international treaty that outlaws germ warfare is not much of a pact. The Biological and Toxin Weapons Convention, which entered into force in 1975, had good intentions but no teeth. There was no effective enforcement mechanism to keep countries from cheating, and there still isn’t.
From December 5-22 in Geneva, the signatories will meet for the seventh review conference, held every five years. The treaty is pretty tattered, and the review conference won’t change that. The diplomats may attempt some procedural tweaks, but there is very little in this treaty that would stop a determined effort by a country — let alone a terrorist — to build an illicit biological weapons program. The Soviet Union, Saddam Hussein’s Iraq and apartheid-era South Africa all defied the treaty in years past.
Although countries are supposed to file annual declarations, last year only 73 of the 163 nations that are parties to the agreement actually sent in their forms.
Two U.S. presidents, George W. Bush and Barack Obama, have concluded that any kind of binding, legal provision for verification would be unworkable. Their argument has been that rapid advances in biology have simply outpaced traditional measures to check against cheating. Almost all biological research is dual use; that which can be directed at improving human health can also be used to create harmful agents. Unlike nuclear weapons, biological research can be easily hidden. That’s one of the reasons to worry about illicit germ warfare, but also a factor in why the treaty has not been strengthened; verification tools like satellites and inspections can miss a well-concealed biological weapons laboratory.
Laura Kennedy, the U.S. Ambassador to the Conference on Disarmament in Geneva, and special representative on the Biological Weapons Convention, is interviewed in the current issue of Arms Control Today on the upcoming review conference.
She said one idea percolating for the conference is to devote more attention to “health security,” such as improved surveillance, detection of disease outbreaks, and organizing rapid response. I’ve heard from other sources as well that this may be part of a U.S. initiative at the review conference. A similar discussion has already been underway in meetings held between the five-year review conferences. Kirk C. Bansak provides an overview of these talks in the same issue of the magazine.
No doubt, health security is important; disease knows no political boundaries and is just as threatening whether at the hands of man or Mother Nature. Rapid response and good surveillance are laudable goals, but there are already large agencies, like the World Health Organization and the Centers for Disease Control and Prevention, which worry about them.
But back to the basic problem: the treaty’s goal was to outlaw germ warfare, and it is weak. In the interview, Kennedy says “the threat of bioterrorism—we think it’s real. We think it’s important to deal with this problem in order to achieve the aim of the BWC: a world free from the threat of biological weapons.”
But how? Kennedy points to “enhanced transparency and compliance diplomacy”: getting more countries to submit their annual declarations, known as confidence-building measures, and revising the questions on the form to be more relevant and precise. At a recent workshop in Switzerland, one participant pointed out that there is no penalty for countries that fail to submit their declarations, and that many of those that are submitted are never made public.
Some “confidence building” measures.
By contrast, the Chemical Weapons Convention, which came into force in 1997, has often been cited as a model disarmament treaty with effective monitoring, verification and a structure to carry it out. The treaty calls for routine inspections, but it also has a provision for short-notice “challenge” inspections of a facility suspected of a violation. (The biological weapons treaty has no such mechanism.)
As I reported in The Dead Hand, when these challenge inspections were first negotiated in Geneva, they were very worrisome to officials at the top of the Soviet Union’s bioweapons program. In closed Kremlin meetings, they expressed fear that a challenge inspector looking for chemical weapons might point at the door of a hidden germ warfare laboratory and insist: Open! Then what could they do? The Soviet foreign minister at the time, Eduard Shevardnadze, had already agreed to the challenge inspections in the chemical weapons treaty, as a gesture of glasnost. So what happened? The bioweapons chiefs busily went about trying to conceal their work still more.
Jonathan B. Tucker points out in a new paper for the Harvard-Sussex Program that since the chemical weapons treaty came into force, no state has actually requested a challenge inspection. The reason, he says, is that it would be too confrontational and entail political risks for the accusing state. Instead, when problems arise under the treaty, they are handled by consultations; one country asks another. Tucker, who is now managing the Biosecurity Education Project at the Federation of American Scientists, said either bilateral or multilateral consultations — a process in which countries ask for clarification instead of forcing their way through a door — might be useful for the biological weapons treaty, and he offers some suggestions for how it could be done.
His paper will soon be posted at the project’s site, here. Update, June 27, 2011: the Tucker paper is posted here.
No doubt, a regime that’s trying to hide something nasty like a biological weapons research program may not be influenced by a nice diplomatic inquiry. That’s always the problem with voluntary verification. It is not likely to catch the worst offenders, but may be better than nothing.
The concerns that were originally behind the Biological and Toxin Weapons Convention—the horrors of germ warfare— have not disappeared. It would be nice to see more than just a talking shop at the review conference in Geneva.
When the Soviet Union collapsed two decades ago, it left behind hundreds of tons of highly-enriched uranium and plutonium spread across 11 time zones. Some was protected by no more than a wax seal and string, and the system for keeping track of it was a pile of paper receipts. Today, much of this material has been locked down. But things were not so certain back then.
In late 1991, Senator Sam Nunn (D-Ga.), chairman of the Senate Armed Services Committee, gave a speech saying that after spending trillions of dollars on the Cold War, the United States should spend a little more to help make Soviet weapons and materials secure. Congress was indifferent, worried more about the recession at home. One Pentagon official said he wanted the Soviets to go into “free fall.” With the critical help of Senator Richard Lugar (R-Ind.), Nunn managed to win approval of the legislation, and President George H. W. Bush signed it into law, without much enthusiasm, just weeks before the Soviet flag came down from the Kremlin.
For a few years after that, progress was achingly slow. The powerful Russian atomic energy ministry denied there was a problem. Cold War mistrust was still evident on both sides.
In The Dead Hand, I described the ground-breaking work of Kenneth J. Fairfax, who was an officer in the environment, science and technology section of the U.S. Embassy in Moscow. Fairfax was an intrepid observer, visiting many nuclear facilities, and reporting first-hand what he saw. In 1994, he sent cables to Washington which documented some of the very serious gaps in nuclear security in Russia.
These messages alarmed officials in the Clinton White House. They confirmed what some other experts feared, that Russia’s nuclear materials were widely spread and poorly secured. I had been told of these cables, but never seen them. Last month, in response to my Freedom of Information Act request, the State Department declassified and released two of the cables.
A worrisome new arms race is accelerating—in cyberspace.
This week, The Wall Street Journal broke an important story: the Pentagon has concluded computer sabotage from another nation could be considered an act of war, opening the door for the military to respond with conventional force. The decision is contained in a Defense Department strategy document, portions of which will be declassified soon. The Journal said military action against cyber attacks would come if the hackers disrupted industry or caused civilian casualties. “If you shut down our power grid, maybe we will put a missile down one of your smokestacks,” a military official told the Journal.
Additional stories followed today in The New York Times and The Washington Post.
There is a hidden risk here. When U.S. military officials threaten retaliation for cyber attacks, it sounds logical and reassuring. After all, no one wants to be vulnerable to attack. But conflict in cyberspace offers some complexity that did not exist in the nuclear arms race.
First, what is called “attribution,” or figuring out who attacked. As David Clark and Susan Landau noted in a study published by the National Academy of Sciences, “Attribution is central to deterrence, the idea that one can dissuade attackers from acting through fear of some sort of retaliation. Retaliation requires knowing with full certainty who the attackers are.”
In cyber conflict, it is often not possible to know who the attackers are. By some accounts, attacks have come from hackers operating under the umbrella of a government, but not directly controlled by it. If a group of hackers in China or Russia carried out an attack on a power grid in the United States, would we really launch a missile, risking escalation into a wider conflict? What if we were wrong?
In the nuclear arms race, we knew a lot about our adversaries, if not everything. We set up early warning systems that could track a missile trajectory. We knew where the enemy silos were located. We established “counterforce” targets that could hit those silos with great precision. The Times quoted a participant in the debates in the administration as saying, “Almost everything we learned about deterrence during the nuclear standoffs with the Soviets in the ‘60s, ‘70s and ‘80s doesn’t apply.” Exactly.
When we think of nuclear warheads, we imagine those cone-shaped, threatening weapons perched atop missiles, ready to be launched, or bombs loaded aboard airplanes. These are known as operationally-deployed strategic weapons. But there are other strategic nuclear warheads that are not deployed, sitting in storage in both the United States and Russia. In fact, each country has several thousand of them. They are not covered by any treaty, and not checked by verification. There is no public accounting of the exact numbers.
Here’s a chance for President Barack Obama to take a lasting step toward his vision of a world without nuclear weapons. It’s time for both countries to get rid of these excess warheads.
The U.S. warheads were put in a reserve, or “hedge,” in 1994. This was only about three years after the collapse of the Soviet Union, and not long after Boris Yeltsin had prevailed in a violent confrontation with hardliners in parliament. William Perry, then the defense secretary, said on Sept. 20, 1994, that the hedge was necessary because of a “small but real danger that reform in Russia might fail.”
Well, we are 17 years beyond that. While reform in Russia has been very rough and incomplete, it certainly did not turn into the worst-case scenario that Perry worried about.
The nuclear hedge is still around. Why?
The information superhighway is getting crowded, and there are bandits around.
For a revealing look at the immense river of digital data that the world has generated in recent years, see the April 1 issue of Science magazine. Two researchers have attempted to estimate the global capacity to store, communicate and compute information. They found that, between 1986 and 2007, general-purpose computing capacity grew at an annual rate of 58 percent, telecommunications at 28 percent, and stored information at 23 percent. There’s also a pie chart showing that 80 percent of communications in 1986 were fixed analog — those wonderful old land-line phones!—while in 2007 global communications were 97 percent digital. The research article is complex, but chock-full of other measurements about the data onslaught.
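To put those annual rates in perspective, compounding them over the 1986-2007 span the researchers studied implies staggering cumulative growth — roughly a 15,000-fold increase in computing capacity alone. A quick back-of-the-envelope calculation (the rates come from the Science article; the compounding arithmetic here is my own illustration):

```python
# Cumulative growth implied by the annual rates reported for 1986-2007.
# Compounding: a rate r sustained for n years multiplies capacity by (1 + r)**n.
rates = {
    "computing": 0.58,
    "telecommunications": 0.28,
    "stored information": 0.23,
}
years = 2007 - 1986  # 21 years of compounding

for name, r in rates.items():
    factor = (1 + r) ** years
    print(f"{name}: roughly {factor:,.0f}x growth over {years} years")
```

Even the "slowest" category, stored information at 23 percent a year, works out to nearly an 80-fold increase over the period.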
Great benefits and some new hazards have come from this digital revolution. The upside is the immense upswing in communication, creativity, discovery and productivity. We take more photographs, read more news, search for more info, listen to more music and watch more videos with less effort than ever before in human history. Scientists can probe genomes and distant planets with tools never before available to mankind.
The hazard is that, on some days, the information superhighway looks like the road from Benghazi to Tripoli. The Stuxnet worm showed just how nasty things can become. In its annual Internet security threat report, Symantec says that Stuxnet and another attack mechanism, Hydraq, were last year’s standout malware. Hydraq was attempting to steal intellectual property from major corporations; Stuxnet was apparently designed to disrupt Iran’s nuclear enrichment process. According to Symantec, both will, unfortunately, be useful in teaching programmers how to do it again. Overall, Symantec says it recorded over 3 billion malware attacks last year.
Nations are starting to wake up to this new battlefield, too. There’s an interesting series of essays in the Spring edition of Strategic Studies Quarterly, which is published out of the Air University at Maxwell Air Force Base, on the implications of cyber conflict. In one piece [pdf], Christopher Bronk imagines the use of cyberwar by China in the year 2020. This is a clever and fascinating exercise in futurology. In another article [pdf], Chris C. Demchak and Peter Dombrowski argue that the global cyber battlefields are already being fortified. While we like to think of the internet as a borderless space, they report otherwise:
Today we are seeing the beginnings of the border-making process across the world’s nations. From the Chinese intent to create their own controlled internal Internet, to increasingly controlled access to the Internet in less-democratic states, to the rise of Internet filters and rules in Western democracies, states are establishing the bounds of their sovereign control in the virtual world in the name of security and economic sustainability…
The consensus among states changed after Stuxnet. If such malicious software can take down whole energy systems at once, states have no choice but to respond if they are to protect their own governmental and military operations and uphold their responsibility to protect citizens and corporations. The Stuxnet method and its success thus changed the notion of vulnerability across increasingly internetted societies and critical infrastructures. The days of cyber spying through software backdoors or betrayals by trusted insiders, vandalism, or even theft had suddenly evolved into the demonstrated ability to deliver a potentially killing blow without being anywhere near the target. Forcing nuclear centrifuges to oscillate out of control from an unknown and remote location suggests that future innovations might be able to destroy or disrupt other critical infrastructures upon which modern societies depend.
In earlier decades, nuclear, chemical and biological weapons, as well as conventional arms, have been subject to arms control treaties that attempted to limit the creation of the weapons and their use. The treaties weren’t perfect: some were violated, some were weak and lacked enforcement. A good question that needs to be debated today is whether it is possible or desirable to create arms control agreements to limit cyber conflict. As I pointed out in a recent article in FP, cyber conflict exists in a shadowy, unaccountable world, not easily limited by treaties.
Elisabeth Fischer, writing for army-technology.com, has asked a series of experts whether the time has come for rules of cyber warfare like those that govern conventional warfare. She found a lot of conflicting views.
In January, Karl Frederick Rauscher and Andrey Korotkov led a Russian-American study by the East-West Institute on whether the Geneva and Hague Conventions could be adapted to cyberspace. The study pointed out that so-called critical infrastructure — things that are necessary for the basic welfare of civilian populations — is often quite difficult to separate from other facilities when it comes to cyberspace. An attack on a power grid or computer network could take down hospitals as well as military targets. Can these be separated in a cyber conflict? Questions like that are still unanswered.
Another plunge into the legal issues around cyberwar is offered in Strategic Studies Quarterly by Prof. Charles J. Dunlap Jr., of Duke University. He argues [pdf] that the tenets of the law of armed conflict are “sufficient” to address most of the important issues of cyber war. The problem is not so much law, he says, as the inherent uncertainty of war and targeting.
The fog of war exists in cyberspace too.
One of the most remarkable advances against disease and death was the invention of antibiotics, which led to a massive and immediate decline in deaths from infections. In a paper published last fall in a workshop report of the Institute of Medicine, Brad Spellberg, an associate professor of medicine at UCLA, noted that antibiotics led U.S. deaths to decline by about 220 per 100,000 population over 15 years, from the late 1930s to the early 1950s. This period includes the introduction of penicillin.
By contrast, he reported, subsequent medical advances over the next 45 years resulted in only minor further reductions in deaths by infections—about an additional 20 per 100,000 people.
Antibiotics caused a revolution in medicine, and gave hope to millions of people who might otherwise have died from infections. They allowed the conduct of complicated and deeply invasive surgery, and organ transplants, which would not have been possible without effective antibacterial agents to deal with infections.
Spellberg recalls the words of Dr. Lewis Thomas, one of the most prominent physicians of the 20th century, on the arrival of the first antibiotics. In a memoir of his internship, Thomas wrote:
For most of the infectious diseases on the wards of Boston City Hospital in 1937, there was nothing that could be done beyond bed rest and good nursing care. Then came the explosive news of sulfanilamide, and the start of the real evolution in medicine. I remember the astonishment when the first cases of pneumococcal and streptococcal septicemia were treated in Boston in 1937. The phenomenon was almost beyond belief. Here were moribund patients, who would surely have died without treatment, improving ... within a matter of hours ... and feeling entirely well within the next day ... we became convinced, overnight, that nothing lay beyond reach for the future. Medicine was off and running.
But the antibiotic revolution seems to be running out of steam. The number of new antibiotic drugs has dwindled. Meanwhile, microbes continue to evolve, developing resistance to existing drugs. Some of these bacteria, like Methicillin-resistant Staphylococcus aureus, or MRSA, are lethal.
There are economic reasons why the drug pipeline has been drying up. Among them: pharmaceutical companies don’t get as high a return on investment for products taken for just a few weeks, compared to those for chronic disease, so there are not strong incentives to invest the millions of dollars in developing new antibiotics.
Now comes a report of a promising new approach. In a paper just published in Nature Chemistry [abstract], a team of researchers has developed biodegradable nanoparticles, super-small particles that attack the microbe’s cell membrane “selectively and efficiently,” in effect poking holes in it. They destroy the infection without hurting healthy cells. In an article in the Wall Street Journal today describing the technology, James L. Hedrick of International Business Machines Corp., one of the researchers, said the destruction of the bacteria renders them unable to develop resistance to the nanoparticles. The paper says the technology can work against many different infections, including MRSA, which is responsible for some 19,000 hospital-stay-related deaths per year in the United States. It hasn’t been tested yet in humans, but if further trials and research show it works, nanoparticles could eventually open a new avenue to fight a real and ongoing threat.
David E. Hoffman is a Pulitzer Prize-winning author and a contributing editor to Foreign Policy.