Hackers plan space satellites to combat censorship
by David Meyer / 4 January 2012

The scheme was outlined at the Chaos Communication Congress in Berlin. The project’s organisers said the Hackerspace Global Grid will also involve developing a grid of ground stations to track and communicate with the satellites. Longer term, they hope to help put an amateur astronaut on the moon.

Hobbyists have already put a few small satellites into orbit – usually only for brief periods of time – but tracking the devices has proved difficult for low-budget projects.

The hacker activist Nick Farr first put out calls for people to contribute to the project in August. He said that the increasing threat of internet censorship had motivated the project. “The first goal is an uncensorable internet in space. Let’s take the internet out of the control of terrestrial entities,” Mr Farr said. He cited the proposed Stop Online Piracy Act (SOPA) in the United States as an example of the kind of threat facing online freedom. If passed, the act would allow for some sites to be blocked on copyright grounds.

Beyond balloons
Although space missions have largely been the preserve of national agencies and large companies, amateur enthusiasts have launched objects into the heavens. High-altitude balloons have also been used to place cameras and other equipment into what is termed “near space”. The balloons can linger for extended amounts of time – but are not suitable for satellites.

The amateur radio satellite ARISSat-1 was deployed into low earth orbit last year via a spacewalk by two Russian cosmonauts from the International Space Station, as part of an educational project. Students and academics have also launched other objects by piggybacking on official rocket launches. However, these devices have often proved tricky to pinpoint precisely from the ground. According to Armin Bauer, a 26-year-old enthusiast from Stuttgart who is working on the Hackerspace Global Grid, this is largely due to a lack of funding.

“Professionals can track satellites from ground stations, but usually they don’t have to because, if you pay a large sum [to send the satellite up on a rocket], they put it in an exact place,” Mr Bauer said.

In the long run, a wider hacker aerospace project aims to put an amateur astronaut on the moon within the next 23 years. “It is very ambitious so we said let’s try something smaller first,” Mr Bauer added.

Ground network
The Berlin conference was the latest meeting held by the Chaos Computer Club, a decades-old German hacker group that has proven influential not only for those interested in exploiting or improving computer security, but also for people who enjoy tinkering with hardware and software.

When Mr Farr called for contributions to Hackerspace, Mr Bauer and others decided to concentrate on the communications infrastructure aspect of the scheme. He and his teammates are working on their part of the project together with Constellation, an existing German aerospace research initiative that mostly consists of interlinked student projects.

In the open-source spirit of Hackerspace, Mr Bauer and some friends came up with the idea of a distributed network of low-cost ground stations that can be bought or built by individuals. Used together in a global network, these stations would be able to pinpoint satellites at any given time, while also making it easier and more reliable for fast-moving satellites to send data back to earth.

“It’s kind of a reverse GPS,” Mr Bauer said. “GPS uses satellites to calculate where we are, and this tells us where the satellites are. We would use GPS co-ordinates but also improve on them by using fixed sites in precisely-known locations.”

Mr Bauer said the team would have three prototype ground stations in place in the first half of 2012, and hoped to give away some working models at the next Chaos Communication Congress in a year’s time. They would also sell the devices on a non-profit basis. “We’re aiming for 100 euros (£84) per ground station. That is the amount people tell us they would be willing to spend,” Mr Bauer added.
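Bauer’s “reverse GPS” can be made concrete with a little geometry. The sketch below is purely illustrative and is not the HGG design: it works in two dimensions from range estimates, whereas a real system would solve in three dimensions from measured signal timings. Given three ground stations at precisely known positions and the distance each measures to a transmitter, subtracting the range equations from one another leaves a small linear system that pins down the transmitter’s position.

```python
import math

def trilaterate(stations, ranges):
    """Locate a transmitter in 2D from three stations at known positions.

    Subtracting the first range equation |p - s1|^2 = r1^2 from the other
    two cancels the quadratic terms, leaving a 2x2 linear system that is
    solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = stations
    r1, r2, r3 = ranges
    # 2*(s1 - si) . p = ri^2 - r1^2 + |s1|^2 - |si|^2
    a1, b1 = 2 * (x1 - x2), 2 * (y1 - y2)
    c1 = r2**2 - r1**2 + (x1**2 + y1**2) - (x2**2 + y2**2)
    a2, b2 = 2 * (x1 - x3), 2 * (y1 - y3)
    c2 = r3**2 - r1**2 + (x1**2 + y1**2) - (x3**2 + y3**2)
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical ground stations at surveyed positions (km) and the
# range each one measures to a satellite beacon.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
satellite = (40.0, 70.0)                      # the position to recover
ranges = [math.dist(s, satellite) for s in stations]
print(trilaterate(stations, ranges))          # approximately (40.0, 70.0)
```

In practice the stations would measure signal arrival times rather than ranges directly, which is why the project’s first milestone is accurate clock synchronisation across the network.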

Experts say the satellite project is feasible, but could be restricted by technical limitations.

“Low earth orbit satellites such as have been launched by amateurs so far, do not stay in a single place but rather orbit, typically every 90 minutes,” said Prof Alan Woodward from the computing department at the University of Surrey. “That’s not to say they can’t be used for communications but obviously only for the relatively brief periods that they are in your view. It’s difficult to see how such satellites could be used as a viable communications grid other than in bursts, even if there were a significant number in your constellation.”

This problem could be avoided if the hackers managed to put their satellites into geostationary orbits above the equator. This would allow them to match the earth’s movement and appear to be motionless when viewed from the ground. However, this would pose a different problem.

“It means that they are so far from earth that there is an appreciable delay on any signal, which can interfere with certain Internet applications,” Prof Woodward said. “There is also an interesting legal dimension in that outer space is not governed by the countries over which it floats. So, theoretically it could be a place for illegal communication to thrive. However, the corollary is that any country could take the law into their own hands and disable the satellites.”
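Prof Woodward’s point about signal delay is easy to check on the back of an envelope. The figures below are lower bounds: they assume a straight up-and-down path at the speed of light and ignore processing and ground-segment latency, and a request-and-response exchange would double them again.

```python
C = 299_792.458      # speed of light, km/s
GEO_ALT = 35_786.0   # geostationary altitude above the equator, km
LEO_ALT = 400.0      # a typical low-earth-orbit altitude, km (roughly ISS height)

def bounce_delay_ms(altitude_km):
    """Minimum ground -> satellite -> ground propagation delay, in milliseconds."""
    return 2 * altitude_km / C * 1000

print(f"GEO one-hop delay: {bounce_delay_ms(GEO_ALT):.0f} ms")   # ~239 ms
print(f"LEO one-hop delay: {bounce_delay_ms(LEO_ALT):.1f} ms")   # ~2.7 ms
```

A quarter of a second each way is what makes interactive internet applications awkward over geostationary links, whereas the low-orbit satellites amateurs can actually reach add almost no propagation delay at all; their problem is the 90-minute orbit instead.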

Need for knowledge
Apart from the ground station scheme, other aspects of the Hackerspace project that are being worked on include the development of new electronics that can survive in space, and the launch vehicles that can get them there in the first place. According to Mr Farr, the “only motive” of the Hackerspace Global Grid is knowledge. He said many participants are frustrated that no person has been sent past low Earth orbit since the Apollo 17 mission in 1972. “This [hacker] community can put humanity back in space in a meaningful way,” Farr said. “The goal is to get back to where we were in the 1970s. Hackers find it offensive that we’ve had the technology since before many of us were born and we haven’t gone back.” Asked whether some might see negative security implications in the idea of establishing a hacker presence in space, Farr said the only downside would be that “people might not be able to censor your internet. Hackers are about open information,” Farr added. “We believe communication is a human right.”


by David Meyer  /  January 3, 2012

Hackers have announced work on a ground station scheme that would make amateur satellites more viable, as part of an aerospace scheme that ultimately aims for the moon. The Hackerspace Global Grid (HGG) project hopes to make it possible for amateurs to more accurately track the home-brewed satellites. As these devices tend to be launched by balloon, they are not placed at a precise point in orbit as professional satellites deployed by rocket usually are. Armin Bauer, one of the three German hobbyists involved in the HGG, said at the Chaos Communication Congress in Berlin that the system involved a reversal of the standard GPS technique. The scheme was announced at the event, which is Europe’s largest hacker conference. “GPS uses satellites to calculate where we are, and this tells us where the satellites are,” Bauer said on Friday, according to the BBC. “We would use GPS co-ordinates but also improve on them by using fixed sites in precisely-known locations.”

According to the HGG website, enthusiasts would site the ground stations using coordinates not only from the US’s GPS system, but also from the EU’s Galileo, Russia’s GLONASS and ground surveys. A major aim of the wider ‘Hacker Space Program’ is to create a satellite system for internet communication that is uncensorable by any country. The hackers also want to put someone on the moon by 2034 — something that has not been done since the Apollo 17 mission 39 years ago. Bauer described the moon mission as “very ambitious”.

As for the anti-censorship aspects of the scheme, the HGG team said on their site that they are “not yet in a technical position to discuss details”. They also noted that the modular ground stations, which are intended to work out at a non-profit sales price of €100 (£84) each, would be able to work without the internet. “Then you will have to deploy four receiver stations and connect them to your laptop(s) or collect all storage media added to them, where all received data is stored on,” the team wrote. “Then you have to manage the data handling and processing by your own.”

However, internet connectivity is the plan for most of the HGG’s usage. The team is working on the project alongside Constellation, a German aerospace research platform for academics that would use the distributed network to derive crucial data.

According to Bauer and his colleagues, the internet connectivity would be of “bare minimum” bandwidth that would be enough to keep basic communications going if needed. “The first step is establishing a means of accurate synchronisation for the distributed network,” the team explained. “Next up are building various receiver modules (ADS-B, amateur satellites, etc) and data processing of received signals. A communication/control channel (read: sending data) is a future possibility but there are no fixed plans on how this could be implemented yet.”

The HGG team hopes to have working prototypes in the first half of the year, with production units ready for distribution by the end of 2012. These would be sold, but people would be able to build their own as well.

If the Hacker Space Program really does take off, the satellites would be out of any country’s legal jurisdiction, but this would also leave any country that is capable of doing so free to disable them in some way. The HGG team admitted on their site that there would be nothing they could do to stop this happening. “Since we don’t have actual satellites yet, this falls in the category of problems we’re going to solve once they occur,” they wrote. “We’re doing this because we want to and because it’s fun. We’re trying to concentrate on reasons why this will work, not why it won’t.”

Building a Distributed Satellite Ground Station Network – A Call To Arms
Hackers need satellites. Hackers need internet over satellites. Satellites require ground stations. Let’s build them!

As proposed by Nick Farr et al at CCCamp11, we – the hacker community – are in desperate need of our own communication infrastructure. So here we are, answering the call for the Hacker Space Program with our proposal of a distributed satellite communications ground station network: an affordable way to bring satellite communications to a hackerspace near you. We’re proposing a multi-step approach to work towards this goal by setting up a distributed network of ground stations which will ensure a 24/7 communication window – first tracking, then communicating with satellites. The current state of a proof-of-concept implementation will be presented. This is a project closely related to the academic femto-satellite movement, ham radio, and Constellation@Home.

The area of small satellites (femto-satellites <0.1 kg up to mini-satellites 100-500 kg) is currently being pushed forward by universities and enables scientific research on a small budget. Gathered data, both scientific and operational, requires communication between satellites and ground stations as well as with the final recipients of the data. One either has to establish one’s own transmission stations or rent already existing stations.

The “distributed ground station” project is an extension which will offer, at its final expansion stage, the ability to receive data from satellites and relay it to the final recipients. It is therefore proposed that a world-wide distributed network of antennas be set up, connected via the internet, allowing the forwarding of received signals to a central server which will in turn forward them to further recipients. Individual antennas will be set up by volunteers (citizen scientists) and partner institutions (universities, institutes, companies).

The core objective of the project is to develop an affordable hardware platform (antenna and receiver) that can be connected to home computers, as well as the required software. This platform should enable everyone to receive signals from femto-satellites on a budget and, in doing so, eliminate the blind spots where there is currently no ground station to receive signals from satellites passing overhead. Emphasis is placed on contributions by volunteers and ham radio operators, who can contribute passively by setting up a receiver station or actively by shaping the project, making it a community-driven effort powered by open-source hardware and applications.

Purposes
The distributed ground stations will enable many different uses. Using distributed ground stations, one could receive beacon signals from satellites and triangulate their position and trajectory. It would therefore be possible to determine the Kepler elements right after the launch of a new satellite without having to rely on official reports made at low frequency. Beacon tracking is not limited to satellites: it can also be used to track other objects like weather balloons and aerial drones and record their flight paths. Additionally, beacon signals (sender ID, time, transmission power) could be augmented with housekeeping data to allow troubleshooting in cases where the main data feed is interrupted. Details regarding the protocol and maximum data packet length are to be defined during the feasibility study phase.

Furthermore, distributed ground stations can be used as “data dumping” receivers. This can reduce load on the main ground station and distribute data to final recipients more quickly. The FunCube project, an outreach project to schools, is already using a similar approach. Another expansion stage would be increasing the bandwidth of the individual receivers.

As a side effect, distributed ground stations could also be used to analyse meteor scatter and study effects in the ionosphere, by having a ground-based sender with a known beacon signal reflected off meteor trails and/or the ionosphere and received in turn by the distributed ground stations. Depending on the frequency used, further applications in the field of atmospheric research, e.g. local and regional properties of the air and storm clouds, can be imagined. Depending on local laws and guidelines, antennas could also be used to transmit signals.

The concept suggests the following expansion stages:

  1. Feasibility study for the individual expansion stages
  2. Beacon-Tracking and sender triangulation
  3. Low-bandwidth satellite-data receiver (up to 10 Kbit/s)
  4. High-bandwidth satellite-data receiver (up to 10 Mbit/s)
  5. Support for data transmission

Each stage is again split into sub-projects dealing with hardware and software design and development, prototyping, testing, and batch/mass production.

Network
The networking concept demands that all distributed ground stations be connected via the internet. This can be achieved using the Constellation platform. Constellation is a distributed computing project already used for various simulations related to aerospace applications. The system is based on computation power donated by volunteers, which is combined to effectively build a world-wide distributed supercomputer. The software used to do this is BOINC (Berkeley Open Infrastructure for Network Computing), which also offers support for additional hardware, e.g. to establish a sensor network. Another BOINC project is the Quake Catcher Network, which uses acceleration sensors built into laptops, or custom USB dongles, to detect earthquakes. Constellation could be enhanced to allow use of the distributed ground station hardware. Constellation is an academic student group of the DGLR (German aerospace society) at Stuttgart University and is supported by e.V. and Selfnet e.V.

Ham radio and volunteers
Special consideration is given to the ham radio community. Femto-satellites make use of the ham radio bands in the UHF, VHF, and S-band ranges, so ham radio operators should be treated as part of the network. They hold all the required knowledge about operating radio equipment and are also well distributed world-wide. To make the system attractive to volunteers as well, the hardware should be designed in a way that allows manufacturing and distribution on a budget. All designs should also be made public, allowing the community to build its own, improved versions of the system. The hardware should be designed to be simple to use correctly and hard to use incorrectly.
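To make the beacon idea from the “Purposes” section concrete: a minimal beacon could carry exactly the three fields named there (sender ID, time, transmission power). The byte layout below is entirely hypothetical, since the proposal explicitly defers the real protocol and packet length to the feasibility-study phase; it only illustrates how small such a frame could be.

```python
import struct

# Purely hypothetical beacon layout: 4-byte sender ID, 8-byte unix time in
# milliseconds, and transmit power in tenths of a dBm (signed, 2 bytes),
# all big-endian. The real HGG format is yet to be defined.
BEACON_FMT = ">IQh"

def pack_beacon(sender_id, time_ms, dbm_tenths):
    return struct.pack(BEACON_FMT, sender_id, time_ms, dbm_tenths)

def unpack_beacon(payload):
    return struct.unpack(BEACON_FMT, payload)

frame = pack_beacon(0xC0FFEE, 1325376000000, 300)   # 30.0 dBm
assert len(frame) == struct.calcsize(BEACON_FMT) == 14
print(unpack_beacon(frame))   # (12648430, 1325376000000, 300)
```

Fourteen bytes per beacon leaves ample headroom for the housekeeping data the proposal mentions, even on a very low-bandwidth downlink.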



The Darknet Project: netroots activists dream of global mesh network
A low-power, open source, open hardware mesh access point

by Ryan Paul

A group of Internet activists gathered last week in an Internet Relay Chat (IRC) channel to begin planning an ambitious project—they hope to overcome electronic surveillance and censorship by creating a whole new Internet. The group, which coordinates its efforts through the Reddit social networking site, calls its endeavor The Darknet Project (TDP). The goal behind the project is to create a global darknet, a decentralized web of interconnected wireless mesh networks that operate independently of each other and the conventional internet. In a wireless mesh network, individual nodes can relay data for other nodes, ensuring that the routing of data remains robust as nodes on the network are added and removed. The idea behind TDP is that such a network would be resistant to censorship and shutdown because there would be no central point of control over the infrastructure. “Basically, the goal of the darknet plan project is to create an alternative, more free internet through a global mesh network,” explained a TDP organizer who goes by the Internet handle ‘Wolfeater.’ “To accomplish this, we will establish local meshes and connect them via current infrastructure until our infrastructure begins to reach other meshes.”
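The relay property described above can be illustrated with a toy model. Real mesh protocols such as B.A.T.M.A.N. or OLSR discover routes in a fully distributed way; the centralised breadth-first search below is only a sketch of why losing a node need not partition the network.

```python
from collections import deque

def route(mesh, src, dst):
    """Breadth-first search for a shortest relay path between two mesh nodes."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for hop in mesh.get(path[-1], ()):
            if hop not in seen:
                seen.add(hop)
                queue.append(path + [hop])
    return None  # no relay path: the mesh is partitioned

# A toy five-node mesh in which every node relays for its neighbours.
mesh = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
print(route(mesh, "A", "E"))      # ['A', 'B', 'D', 'E']

# Knock out node B: traffic reroutes through C, with no central coordinator.
degraded = {n: [m for m in nbrs if m != "B"]
            for n, nbrs in mesh.items() if n != "B"}
print(route(degraded, "A", "E"))  # ['A', 'C', 'D', 'E']
```

Because any node can relay for any other, there is no single point whose removal silences the whole network, which is exactly the censorship-resistance property TDP is after.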

TDP seems to have been influenced in part by an earlier unofficial effort launched by the Internet group Anonymous called Operation Mesh. The short-lived operation, which was conceived as a response to the Anti-Counterfeiting Trade Agreement (ACTA) and its potential impact on Internet infrastructure, called for supporters to create a parallel Internet of wireless mesh networks. The idea is intriguing, but it poses major technical and logistical challenges, and it’s hard to imagine that TDP will ever move beyond the conceptual stage. The group behind the effort is big on ideas but short on technical solutions for rolling out a practical implementation. During the IRC meeting, they struggled to coordinate a simple discussion about how to proceed with their agenda. Still, despite TDP’s dysfunctional organizational structure and lack of concrete strategy, their message seems to resonate with an audience on the Internet. And enthusiasm for mesh networks and decentralized Internet isn’t isolated to the tinfoil hat crowd; serious government programs aim at producing similar technology. Earlier this year, the New York Times reported on a US government-funded program to create wireless mesh networks that could help dissidents circumvent political censorship in authoritarian countries. As repressive governments continue to get better at thwarting circumvention of their censorship tools, dissidents will need more robust tools of their own to continue propagating information. The US State Department seems to view decentralized darknets as an important area of research for empowering free expression abroad.

A growing number of independent open source software projects have also emerged to fill the need for darknet technology. Many of these projects are backed by credible non-profit organizations and segments of the security research community. Such projects could find a useful ally in the TDP if they were to engage with the growing community and help mobilize its members in a constructive direction.

Unlike TDP, the original Operation Mesh coordinators had specific technologies in mind: they highlighted the I2P anonymous network layer software and the BATMAN ad-hoc wireless routing protocol as the best prospective candidates. Both projects are actively maintained and have modest communities, though the I2P website is currently down. Promising projects like Freenet develop software for building darknets on top of existing Internet infrastructure. Another group that might benefit from broader community support is Serval, a project to create ad-hoc wireless mesh networks using regular smartphones. The group has recently developed a software prototype that runs on Android handsets. They are actively looking for volunteers to help test the software and participate in a number of other ways.

TDP members who are serious about fostering decentralized Internet infrastructure could meaningfully advance their goals by assisting any of the previously mentioned projects. The growing amount of popular grassroots support for Internet decentralization suggests that the momentum behind darknets is increasing.

Anonymous “dimnet” tries to create hedge against DNS censorship
by Sean Gallagher

With concern mounting over the potential impact of the Stop Online Piracy Act and claims that it could make the Domain Name System more vulnerable, one group is looking to circumvent the threat of domain name blocking and censorship by essentially creating a new Internet top-level domain outside of ICANN control. Called Dot-BIT, the effort currently uses proxies, cryptography, and a small collection of DNS servers to create a section of the Internet’s domain address space where domains can be provisioned, moved, and traded anonymously.

So far, over 4,000 domains have been registered within Dot-BIT’s .bit virtual top-level domain (TLD). Those domains are visible only to people who use a proxy service that draws address information from the project’s distributed database, or to those using one of the project’s two public DNS servers. While it’s not exactly a “darknet” like the Tor anonymizing network’s .onion domain, .bit isn’t exactly part of the open Internet, either—call it a “dimnet.” Just how effective a virtual top-level domain will be in preventing censorship by ISPs and governments—or even handling a rapidly growing set of registered domains—is unclear at best.

How it works
Dot-BIT is derived from a peer-to-peer network technology called Namecoin, which is in turn derived from the Bitcoin digital currency technology. Just as with Bitcoin, the system is driven by cryptographic tokens, called namecoins. To buy an address in that space, you either have to “mine” namecoins by providing compute time (running client software that uses the computer’s CPU or graphics processing unit) to handle the processing of transactions within the network, or buy them through an exchange with cash or Bitcoins. All of those approaches essentially provide support to the Namecoin distributed name system’s infrastructure. You can also get an initial payout of free namecoins from a “faucet” site designed to help bootstrap the network.

The cost of entry is pretty low: currently, registering a new domain costs about 1.6 namecoins, which can be had for about five cents. Your registration isn’t associated with your name, address, and phone number—instead, it’s linked to your cryptographic identity, preserving anonymity.

Once you’ve registered a domain, you can assign it by sending out a JSON-formatted update request, mapping the domain to a DNS server or providing IP addresses and host names to be distributed through Dot-BIT’s proxies and public DNS servers. That information is then spread across all of the network’s peer systems.
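The update request mentioned above is just a small JSON value attached to a Namecoin name (domains live under the “d/” namespace, so example.bit corresponds to the name “d/example”). The snippet below builds one such value in Python; the field names follow the convention used for Dot-BIT domains, but treat the exact schema as an assumption rather than a specification, and the address is from the documentation range, not a real host.

```python
import json

# Illustrative value for the Namecoin name "d/example" (i.e. example.bit):
# "ip" points the bare domain at an address, "map" adds a www subdomain.
value = {
    "ip": "192.0.2.10",
    "map": {"www": {"ip": "192.0.2.10"}},
}
update_payload = json.dumps(value, separators=(",", ":"))
print(update_payload)  # {"ip":"192.0.2.10","map":{"www":{"ip":"192.0.2.10"}}}
```

Because this value is stored in the Namecoin block chain itself, every peer ends up with a copy, which is how Dot-BIT’s proxies and public DNS servers can all answer for .bit names without a central registry.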

Simple, right?
Namecoin’s approach heavily favors early adopters, since once you’ve registered a domain, you can transfer it to someone else—or squat on it until someone pays you for it. That seems to be what a lot of early .bit adopters are counting on. For example, using Firefox and the FoxyProxy add-on to surf .bit-land to audi.bit lands you on a “this domain for sale” page. But while Dot-BIT may allow for an anonymous and relatively secure exchange of DNS information, it won’t necessarily prevent censorship by ISPs. If the .bit top-level domain becomes the target of laws like SOPA, it can be shut down pretty quickly by cutting off the head—its own internal DNS—either through port blocking or other filtering. And since it lacks the anonymizing routing abilities of “hidden” networks like Tor’s .onion domain, it won’t protect the identities of publishers and users who visit sites that use a .bit name. At the moment, then, it’s not certain what purpose .bit will actually serve, other than as an experiment in novel ways to create a DNS—or someplace for hackers to spend their illicitly earned Bitcoins.


It’s time to update/widen the term to accommodate a wider range of modern activity.  A darknet:

is a closed, private communications network that is used for purposes not sanctioned by the state (aka illegal).

Darknets can be built in the following ways:

  • Software.  A virtual, encrypted network that runs over public network infrastructure (most of the US government/economy uses this method).
  • Hardware.  A parallel physical infrastructure.  This hardware can be fiber optic cables or wireless.  Parallel wireless infrastructures (whether for cell phones or Internet access) are fairly inexpensive to build and conceal.
  • In most cases, we see a mix of the two.

Examples of Darknets:

  • The Zetas have built a huge wireless darknet (a private, parallel communications network) that connects the majority of Mexico’s states.  Most of the other cartels also have wireless darknets and there are also lots of local darknets.
  • Hezbollah (in Lebanon) runs its own fiber optic network.
  • TOR.  A voluntary, decentralized ad hoc network that anonymizes network connections.
  • Botnets (up to 4 million computers strong) that can be used for global private communications.
  • Etc.  The list goes on  and on….

The future?  Darknets that power alternative economies.  A network layer for accelerating the dark globalization of the $10 Trillion System D.

The Shadow Superpower
Forget China: the $10 trillion global black market is the world’s fastest growing economy — and its future.
by Robert Neuwirth / 10.28.2011

With only a mobile phone and a promise of money from his uncle, David Obi did something the Nigerian government has been trying to do for decades: He figured out how to bring electricity to the masses in Africa’s most populous country. It wasn’t a matter of technology. David is not an inventor or an engineer, and his insights into his country’s electrical problems had nothing to do with fancy photovoltaics or turbines to harness the harmattan or any other alternative sources of energy. Instead, 7,000 miles from home, using a language he could hardly speak, he did what traders have always done: made a deal. He contracted with a Chinese firm near Guangzhou to produce small diesel-powered generators under his uncle’s brand name, Aakoo, and shipped them home to Nigeria, where power is often scarce. David’s deal, struck four years ago, was not massive — but it made a solid profit and put him on a strong footing for success as a transnational merchant. Like almost all the transactions between Nigerian traders and Chinese manufacturers, it was also sub rosa: under the radar, outside of the view or control of government, part of the unheralded alternative economic universe of System D.

You probably have never heard of System D. Neither had I until I started visiting street markets and unlicensed bazaars around the globe. System D is a slang phrase pirated from French-speaking Africa and the Caribbean. The French have a word that they often use to describe particularly effective and motivated people. They call them débrouillards. To say a man is a débrouillard is to tell people how resourceful and ingenious he is. The former French colonies have sculpted this word to their own social and economic reality. They say that inventive, self-starting, entrepreneurial merchants who are doing business on their own, without registering or being regulated by the bureaucracy and, for the most part, without paying taxes, are part of “l’economie de la débrouillardise.” Or, sweetened for street use, “Systeme D.” This essentially translates as the ingenuity economy, the economy of improvisation and self-reliance, the do-it-yourself, or DIY, economy. A number of well-known chefs have also appropriated the term to describe the skill and sheer joy necessary to improvise a gourmet meal using only the mismatched ingredients that happen to be at hand in a kitchen. I like the phrase. It has a carefree lilt and some friendly resonances. At the same time, it asserts an important truth: What happens in all the unregistered markets and roadside kiosks of the world is not simply haphazard. It is a product of intelligence, resilience, self-organization, and group solidarity, and it follows a number of well-worn though unwritten rules. It is, in that sense, a system.

It used to be that System D was small — a handful of market women selling a handful of shriveled carrots to earn a handful of pennies. It was the economy of desperation. But as trade has expanded and globalized, System D has scaled up too. Today, System D is the economy of aspiration. It is where the jobs are. In 2009, the Organisation for Economic Co-operation and Development (OECD), a think tank sponsored by the governments of 30 of the most powerful capitalist countries and dedicated to promoting free-market institutions, concluded that half the workers of the world — close to 1.8 billion people — were working in System D: off the books, in jobs that were neither registered nor regulated, getting paid in cash, and, most often, avoiding income taxes.

Kids selling lemonade from the sidewalk in front of their houses are part of System D. So are many of the vendors at stoop sales, flea markets, and swap meets. So are the workers who look for employment in the parking lots of Home Depot and Lowe’s throughout the United States. And it’s not only cash-in-hand labor. As with David Obi’s deal to bring generators from China to Nigeria, System D is multinational, moving all sorts of products — machinery, mobile phones, computers, and more — around the globe and creating international industries that help billions of people find jobs and services. In many countries — particularly in the developing world — System D is growing faster than any other part of the economy, and it is an increasing force in world trade. But even in developed countries, after the financial crisis of 2008-09, System D was revealed to be an important financial coping mechanism. A 2009 study by Deutsche Bank, the huge German commercial lender, suggested that people in the European countries with the largest portions of their economies that were unlicensed and unregulated — in other words, citizens of the countries with the most robust System D — fared better in the economic meltdown of 2008 than folks living in centrally planned and tightly regulated nations. Studies of countries throughout Latin America have shown that desperate people turned to System D to survive during the most recent financial crisis. This spontaneous system, ruled by the spirit of organized improvisation, will be crucial for the development of cities in the 21st century. The 20th-century norm — the factory worker who nests at the same firm for his or her entire productive life — has become an endangered species. In China, the world’s current industrial behemoth, workers in the massive factories have low salaries and little job security. 
Even in Japan, where major corporations have long guaranteed lifetime employment to full-time workers, a consensus is emerging that this system is no longer sustainable in an increasingly mobile and entrepreneurial world.

So what kind of jobs will predominate? Part-time work, a variety of self-employment schemes, consulting, moonlighting, income patching. By 2020, the OECD projects, two-thirds of the workers of the world will be employed in System D. There’s no multinational, no Daddy Warbucks or Bill Gates, no government that can rival that level of job creation. Given its size, it makes no sense to talk of development, growth, sustainability, or globalization without reckoning with System D. The growth of System D presents a series of challenges to the norms of economics, business, and governance — for it has traditionally existed outside the framework of trade agreements, labor laws, copyright protections, product safety regulations, antipollution legislation, and a host of other political, social, and environmental policies. Yet there’s plenty that’s positive, too. In Africa, many cities — Lagos, Nigeria, is a good example — have been propelled into the modern era through System D, because legal businesses don’t find enough profit in bringing cutting-edge products to the third world. China has, in part, become the world’s manufacturing and trading center because it has been willing to engage System D trade. Paraguay, small, landlocked, and long dominated by larger and more prosperous neighbors, has engineered a decent balance of trade through judicious smuggling. The digital divide may be a concern, but System D is spreading technology around the world at prices even poor people can afford. Squatter communities may be growing, but the informal economy is bringing commerce and opportunity to these neighborhoods that are off the governmental grid. It distributes products more equitably and cheaply than any big company can. And, even as governments around the world are looking to privatize agencies and get out of the business of providing for people, System D is running public services — trash pickup, recycling, transportation, and even utilities.

Just how big is System D? Friedrich Schneider, chair of the economics department at Johannes Kepler University in Linz, Austria, has spent decades calculating the dollar value of what he calls the shadow economies of the world. He admits his projections are imprecise, in part because, like privately held businesses everywhere, businesspeople who engage in trade off the books don’t want to open their books (most successful System D merchants are obsessive about profit and loss and keep detailed accounts of their revenues and expenses in old-fashioned ledger books) to anyone who will write anything in a book. And there’s a definitional problem as well, because the border between the shadow and the legal economies is blurry. Does buying some of your supplies from an unlicensed dealer put you in the shadows, even if you report your profit and pay your taxes? How about hiding just $1 in income from the government, though the rest of your business is on the up-and-up? And how about selling through System D even if your business is in every other way in compliance with the law? Finding a firm dividing line is not easy, as Keith Hart, who was among the first academics to acknowledge the importance of street markets to the economies of the developing world, warned me in a recent conversation: “It’s very difficult to separate the nice African ladies selling oranges on the street and jiggling their babies on their backs from the Indian gangsters who control the fruit trade and who they have to pay rent to.” Schneider suggests, however, that, in making his estimates, he has this covered. He screens out all money made through “illegal actions that fit the characteristics of classical crimes like burglary, robbery, drug dealing, etc.” This means that the big-time criminals are likely out of his statistics, though those gangsters who control the fruit market are likely in, as long as they’re not involved in anything more nefarious than running a price-fixing cartel.
Also, he says, his statistics do not count “the informal household economy.” This means that if you’re putting buckles on belts in your home for a bit of extra cash from a company owned by your cousin, you’re in, but if you’re babysitting your cousin’s kids while she’s off putting buckles on belts at her factory, you’re out.

Schneider presents his numbers as a percentage of the total market value of goods and services made in each country that same year — each nation’s gross domestic product. His data show that System D is on the rise. In the developing world, it’s been increasing every year since the 1990s, and in many countries it’s growing faster than the officially recognized gross domestic product (GDP). If you apply his percentages (Schneider’s most recent report, published in 2006, uses economic data from 2003) to the World Bank’s GDP estimates, it’s possible to make a back-of-the-envelope calculation of the approximate value of the billions of underground transactions around the world. And it comes to this: The total value of System D as a global phenomenon is close to $10 trillion. Which makes for another astonishing revelation. If System D were an independent nation, united in a single political structure — call it the United Street Sellers Republic (USSR) or, perhaps, Bazaaristan — it would be an economic superpower, the second-largest economy in the world (the United States, with a GDP of $14 trillion, is numero uno). The gap is narrowing, though, and if the United States doesn’t snap out of its current funk, the USSR/Bazaaristan could conceivably catch it sometime this century. In other words, System D looks a lot like the future of the global economy. All over the world — from San Francisco to São Paulo, from New York City to Lagos — people engaged in street selling and other forms of unlicensed trade told me that they could never have established their businesses in the legal economy. “I’m totally off the grid,” one unlicensed jewelry designer told me. “It was never an option to do it any other way. It never even crossed my mind. It was financially absolutely impossible.” The growth of System D opens the market to those who have traditionally been shut out.
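Schneider’s method lends itself to a simple sketch: multiply each country’s official GDP by its estimated shadow-economy share, then add everything up. The figures below are illustrative placeholders, not Schneider’s actual country data, but the mechanics are the same.

```python
# Back-of-the-envelope, Schneider-style estimate of System D's value.
# All numbers here are invented for illustration, not real data.
official_gdp = {          # official GDP, in trillions of US dollars
    "Country A": 14.0,
    "Country B": 4.5,
    "Country C": 1.6,
}
shadow_share = {          # estimated shadow economy as a fraction of GDP
    "Country A": 0.09,
    "Country B": 0.27,
    "Country C": 0.40,
}

# Each country's System D value is its GDP times its shadow share.
system_d_value = sum(
    official_gdp[c] * shadow_share[c] for c in official_gdp
)
print(f"Estimated System D value: ${system_d_value:.3f} trillion")
```

Running Schneider’s real 2003 percentages against the World Bank’s GDP table is the same loop, just over a few hundred rows, which is how the rough $10 trillion figure emerges.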

This alternative economic system also offers the opportunity for large numbers of people to find work. No job-cutting or outsourcing is going on here. Rather, a street market boasts dozens of entrepreneurs selling similar products and scores of laborers doing essentially the same work. An economist would likely deride all this duplicated work as inefficient. But the level of competition on the street keeps huge numbers of people employed. It liberates their entrepreneurial energy. And it offers them the opportunity to move up in the world. In São Paulo, Édison Ramos Dattora, a migrant from the rural midlands, has succeeded in the nation’s commercial capital by working as a camelô — an unlicensed street vendor. He started out selling candies and chocolates on the trains, and is now in a more lucrative branch of the street trade — retailing pirate DVDs of first-run movies to commuters around downtown. His underground trade — he has to watch out for the cops wherever he goes — has given his family a standard of living he never dreamed possible: a bank account, a credit card, an apartment in the center of town, and enough money to take a trip to Europe. Even in the most difficult and degraded situations, System D merchants are seeking to better their lives. For instance, the garbage dump would be the last place you would expect to be a locus of hope and entrepreneurship. But Lagos scavenger Andrew Saboru has pulled himself out of the trash heap and established himself as a dealer in recycled materials. On his own, with no help from the government or any NGOs or any bank (Andrew has a bank account, but his bank will never loan him money — because his enterprise is unlicensed and unregistered and depends on the unpredictable labor of culling recyclable material from the megacity’s massive garbage pile), he has climbed the career ladder. “Lagos is a city for hustling,” he told me. “If you have an idea and you are serious and willing to work, you can make money here. 
I believe the future is bright.” It took Andrew 16 years to make his move, but he succeeded, and he’s proud of the business he has created. We should be too. As Joanne Saltzberg, who heads Women Entrepreneurs of Baltimore — a business development group — told me, we need to change our attitude and to salute the achievements of those who are engaged in this alternate economy. “We only revere success,” she said. “I don’t think we honor the struggle. People who have no access to business development resources. People who have to work two and three jobs just to survive. When you are struggling in this economy and still you commit yourself to having a better life, that’s really something to honor.”

How Mexico’s Drug Cartels Stay Networked
by Spencer Ackerman / December 27, 2011

Arranging drug sales on a cellphone, cryptic email or even a pager? That’s strictly for the small-time dealer. If you’re a Mexican drug cartel, you have your own radio network. Since 2006, the cartels have maintained an encrypted DIY radio network that stretches across nearly all 31 Mexican states, even down south into Guatemala. The communications infrastructure of the narco-gangs that have turned Mexico into a gangster’s paradise consists of “professional-grade” radio antennas, signal relays and simple handheld radios that cost “millions of dollars” — and which the Mexican authorities haven’t been able to shut down. If it sounds like a military-grade communications apparatus, it should. The notorious Zetas, formerly the enforcers for the Gulf Cartel and now its chief rival, were born out of Mexican Special Forces. But the Zetas aren’t stupid enough to make big deals over a radio frequency, even an encrypted one. According to a picture of what you might call Radio Zeta that’s emerged after three raids by the Mexican authorities, the bosses only communicate through the Internet. The radio network is for lookouts and lower-level players.

Here’s how it works, according to a fascinating Associated Press piece. The cartels divide up territory into “plazas.” The plaza boss has the responsibility for establishing nodes on the network — getting the antennas in place, concealing them as necessary, making sure the signal-boosting repeaters extend the network’s reach, equipping cartel personnel with handheld radios, and replacing what the security forces destroy. The cartels have even gone green, with solar panels powering the radio towers. The network is primarily an early warning reconnaissance system. “Halcons,” or “hawks,” holler on the handhelds when the federal police or soldiers roll through cartel territory. But it’s also an occasional offensive tool to intimidate the security forces. The cartels have been known to hijack military radio networks to broadcast threats. That’s keeping in line with the Zetas’ alarming tactic of slaughtering people for allegedly talking openly about cartel activity over the Internet.

Since September, three large raids conducted by Mexico’s beleaguered security forces have attempted to disrupt the radio network by snatching up its hardware. But much of the infrastructure — the towers, the receivers — is cheap enough to be easily replaced. The network is “low-cost, highly extendable and maintainable,” a security consultant told the AP. But there’s an alternative for taking down the cartel broadcasts. Since the U.S. already provides intelligence and security assistance to Mexico’s drug war, maybe it’s time to think about providing some military-grade jammers as well. Mexico doesn’t seem to have a better idea for taking Radio Zeta off the air.

{Freenet means controversial information does not need to be stored in physical data havens such as this one, Sealand. Photograph: Kim Gilmour/Alamy}

The dark side of the internet
by Andy Beckett / 25 November 2009

Fourteen years ago, a pasty Irish teenager with a flair for inventions arrived at Edinburgh University to study artificial intelligence and computer science. For his thesis project, Ian Clarke created “a Distributed, Decentralised Information Storage and Retrieval System”, or, as a less precise person might put it, a revolutionary new way for people to use the internet without detection. By downloading Clarke’s software, which he intended to distribute for free, anyone could chat online, or read or set up a website, or share files, with almost complete anonymity. “It seemed so obvious that that was what the net was supposed to be about – freedom to communicate,” Clarke says now. “But [back then] in the late 90s that simply wasn’t the case. The internet could be monitored more quickly, more comprehensively, more cheaply than more old-fashioned communications systems like the mail.” His pioneering software was intended to change that. His tutors were not bowled over. “I would say the response was a bit lukewarm. They gave me a B. They thought the project was a bit wacky … they said, ‘You didn’t cite enough prior work.’” Undaunted, in 2000 Clarke publicly released his software, now more appealingly called Freenet. Nine years on, he has lost count of how many people are using it: “At least 2m copies have been downloaded from the website, primarily in Europe and the US. The website is blocked in [authoritarian] countries like China so there, people tend to get Freenet from friends.” Last year Clarke produced an improved version: it hides not only the identities of Freenet users but also, in any online environment, the fact that someone is using Freenet at all.
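One idea behind distributed stores like Freenet can be sketched in a few lines: file each block of data under the hash of its own contents, so a node can cache and relay material without knowing what it holds, and the key doubles as a tamper check on retrieval. This is a deliberately simplified toy, not Freenet’s actual protocol or API.

```python
import hashlib

# Toy content-hash store: data is keyed by the SHA-256 of its own
# contents. A node holding `store` never needs to know what it has;
# a reader must already possess the key to ask for anything.
store = {}

def put(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key          # the key is derived from, and verifies, the data

def get(key: str) -> bytes:
    data = store[key]
    # Integrity check: re-hash on fetch to detect tampered blocks.
    assert hashlib.sha256(data).hexdigest() == key
    return data

k = put(b"an uncensorable pamphlet")
assert get(k) == b"an uncensorable pamphlet"
```

The design choice matters for deniability: because storage is keyed by opaque hashes, an operator of a relay node cannot easily be said to know, or choose, what passes through it.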

Installing the software takes barely a couple of minutes and requires minimal computer skills. You find the Freenet website, read a few terse instructions, and answer a few questions (“How much security do you need?” … “NORMAL: I live in a relatively free country” or “MAXIMUM: I intend to access information that could get me arrested, imprisoned, or worse”). Then you enter a previously hidden online world. In utilitarian type and bald capsule descriptions, an official Freenet index lists the hundreds of “freesites” available: “Iran News”, “Horny Kate”, “The Terrorist’s Handbook: A practical guide to explosives and other things of interests to terrorists”, “How To Spot A Pedophile [sic]”, “Freenet Warez Portal: The source for pirate copies of books, games, movies, music, software, TV series and more”, “Arson Around With Auntie: A how-to guide on arson attacks for animal rights activists”. There is material written in Russian, Spanish, Dutch, Polish and Italian. There is English-language material from America and Thailand, from Argentina and Japan. There are disconcerting blogs (“Welcome to my first Freenet site. I’m not here because of kiddie porn … [but] I might post some images of naked women”) and legally dubious political revelations. There is all the teeming life of the everyday internet, but rendered a little stranger and more intense. One of the Freenet bloggers sums up the difference: “If you’re reading this now, then you’re on the darkweb.” The modern internet is often thought of as a miracle of openness – its global reach, its outflanking of censors, its seemingly all-seeing search engines. “Many many users think that when they search on Google they’re getting all the web pages,” says Anand Rajaraman, co-founder of Kosmix, one of a new generation of post-Google search engine companies. But Rajaraman knows different. “I think it’s a very small fraction of the deep web which search engines are bringing to the surface. 
I don’t know, to be honest, what fraction. No one has a really good estimate of how big the deep web is. Five hundred times as big as the surface web is the only estimate I know.”

Unfathomable and mysterious
“The darkweb”; “the deep web”; beneath “the surface web” – the metaphors alone make the internet feel suddenly more unfathomable and mysterious. Other terms circulate among those in the know: “darknet”, “invisible web”, “dark address space”, “murky address space”, “dirty address space”. Not all these phrases mean the same thing. While a “darknet” is an online network such as Freenet that is concealed from non-users, with all the potential for transgressive behaviour that implies, much of “the deep web”, spooky as it sounds, consists of unremarkable consumer and research data that is beyond the reach of search engines. “Dark address space” often refers to internet addresses that, for purely technical reasons, have simply stopped working. And yet, in a sense, they are all part of the same picture: beyond the confines of most people’s online lives, there is a vast other internet out there, used by millions but largely ignored by the media and properly understood by only a few computer scientists. How was it created? What exactly happens in it? And does it represent the future of life online or the past? Michael K Bergman, an American academic and entrepreneur, is one of the foremost authorities on this other internet. In the late 90s he undertook research to try to gauge its scale. “I remember saying to my staff, ‘It’s probably two or three times bigger than the regular web,’” he remembers. “But the vastness of the deep web . . . completely took my breath away. We kept turning over rocks and discovering things.” In 2001 he published a paper on the deep web that is still regularly cited today. “The deep web is currently 400 to 550 times larger than the commonly defined world wide web,” he wrote.
“The deep web is the fastest growing category of new information on the internet … The value of deep web content is immeasurable … internet searches are searching only 0.03% … of the [total web] pages available.” In the eight years since, use of the internet has been utterly transformed in many ways, but improvements in search technology by Google, Kosmix and others have only begun to plumb the deep web. “A hidden web [search] engine that’s going to have everything – that’s not quite practical,” says Professor Juliana Freire of the University of Utah, who is leading a deep web search project called Deep Peep. “It’s not actually feasible to index the whole deep web. There’s just too much data.”
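Bergman’s multiples translate into fractions with one line of arithmetic: if the deep web is k times the size of the surface web, then the surface web is 1/(k+1) of the total. (His far smaller 0.03% figure for pages actually searched also folds in how little of the surface web engines index; the sketch below covers only the size ratio.)

```python
# Convert Bergman's "400 to 550 times larger" claim into the
# surface web's share of the total web: surface / (surface + deep)
# = 1 / (k + 1) when deep = k * surface.
for k in (400, 550):
    surface_fraction = 1 / (k + 1)
    print(f"deep web {k}x surface -> surface is "
          f"{surface_fraction:.4%} of the total web")
```

Either way, the commonly browsed web comes out at roughly a fifth to a quarter of one percent of the whole.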

But sheer scale is not the only problem. “When we’ve crawled [searched] several sites, we’ve gotten blocked,” says Freire. “You can actually come up with ways that make it impossible for anyone [searching] to grab all your data.” Sometimes the motivation is commercial – “people have spent a lot of time and money building, say, a database of used cars for sale, and don’t want you to be able to copy their site”; and sometimes privacy is sought for other reasons. “There’s a well-known crime syndicate called the Russian Business Network (RBN),” says Craig Labovitz, chief scientist at Arbor Networks, a leading online security firm, “and they’re always jumping around the internet, grabbing bits of [disused] address space, sending out millions of spam emails from there, and then quickly disconnecting.” The RBN also rents temporary websites to other criminals for online identity theft, child pornography and releasing computer viruses. The internet has been infamous for such activities for decades; what has been less understood until recently was how the increasingly complex geography of the internet has aided them. “In 2000 dark and murky address space was a bit of a novelty,” says Labovitz. “This is now an entrenched part of the daily life of the internet.” Defunct online companies; technical errors and failures; disputes between internet service providers; abandoned addresses once used by the US military in the earliest days of the internet – all these have left the online landscape scattered with derelict or forgotten properties, perfect for illicit exploitation, sometimes for only a few seconds before they are returned to disuse. How easy is it to take over a dark address? “I don’t think my mother could do it,” says Labovitz. “But it just takes a PC and a connection. The internet has been largely built on trust.”

Open or closed?
In fact, the internet has always been driven as much by a desire for secrecy as a desire for transparency. The network was the joint creation of the US defence department and the American counterculture – the WELL, one of the first and most influential online communities, was a spinoff from hippy bible the Whole Earth Catalog – and both groups had reasons to build hidden or semi-hidden online environments as well as open ones. “Strong encryption developed in parallel with the internet,” says Danny O’Brien, an activist with the Electronic Frontier Foundation, a long-established pressure group for online privacy. There are still secretive parts of the internet where this unlikely alliance between hairy libertarians and the cloak-and-dagger military endures. The Onion Router, or Tor, is an American volunteer-run project that offers free software to those seeking anonymous online communication, like a more respectable version of Freenet. Tor’s users, according to its website, include US secret service “field agents” and “law enforcement officers . . . Tor allows officials to surf questionable websites and services without leaving tell-tale tracks,” but also “activists and whistleblowers”, for example “environmental groups [who] are increasingly falling under surveillance in the US under laws meant to protect against terrorism”. Tor, in short, is used both by the American state and by some of its fiercest opponents. On the hidden internet, political life can be as labyrinthine as in a novel by Thomas Pynchon.
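The principle behind Tor, onion routing, is simple enough to sketch: the sender wraps a message in one layer per relay, and each relay peels exactly one layer, learning only where to forward the rest. The “encryption” below is just base64 tagging, purely to show the layering; Tor’s real cryptography and circuit-building are far more involved.

```python
import base64

# Toy onion routing: each layer names one relay and hides everything
# inside it. No relay sees the full route or, except the last, the
# payload. (Conceptual sketch only -- not Tor's actual protocol.)

def wrap(message: bytes, route: list[str]) -> bytes:
    # The innermost layer is for the last relay, so wrap in reverse.
    for relay in reversed(route):
        message = base64.b64encode(relay.encode() + b"|" + message)
    return message

def peel(onion: bytes) -> tuple[str, bytes]:
    # A relay removes exactly one layer: its own name plus the rest.
    relay, _, rest = base64.b64decode(onion).partition(b"|")
    return relay.decode(), rest

onion = wrap(b"hello", ["alpha", "beta", "gamma"])
hop, onion = peel(onion)     # alpha peels its layer...
hop, onion = peel(onion)     # ...then beta...
hop, payload = peel(onion)   # ...and gamma recovers the payload
assert (hop, payload) == ("gamma", b"hello")
```

The anonymity comes from that asymmetry of knowledge: the first relay knows the sender but not the destination, the last knows the destination but not the sender.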

The hollow legs of Sealand
The often furtive, anarchic quality of life online struck some observers decades ago. In 1975, only half a dozen years after the internet was created, the science-fiction author John Brunner wrote of “so many worms and counter-worms loose in the data-net” in his influential novel The Shockwave Rider. By the 80s “data havens”, at first physical then online locations where sensitive computerised information could be concealed, were established in discreet jurisdictions such as Caribbean tax havens. In 2000 an American internet startup called HavenCo set up a much more provocative data haven, in a former second world war sea fort just outside British territorial waters off the Suffolk coast, which since the 60s had housed an eccentric independent “principality” called Sealand. HavenCo announced that it would store any data unless it concerned terrorism or child pornography, on servers built into the hollow legs of Sealand as they extended beneath the waves. A better metaphor for the hidden depths of the internet was hard to imagine. In 2007 the highly successful Swedish filesharing website The Pirate Bay – the downloading of music and films for free being another booming darknet enterprise – announced its intention to buy Sealand. The plan has come to nothing so far, and last year it was reported that HavenCo had ceased operation, but in truth the need for physical data havens is probably diminishing. Services such as Tor and Freenet perform the same function electronically; and in a sense, even the “open” internet, as online privacy-seekers sometimes slightly contemptuously refer to it, has increasingly become a place for concealment: people posting and blogging under pseudonyms, people walling off their online lives from prying eyes on social networking websites. “The more people do everything online, the more there’s going to be bits of your life that you don’t want to be part of your public online persona,” says O’Brien. 
A spokesman for the Police Central e-crime Unit [PCeU] at the Metropolitan Police points out that many internet secrets hide in plain sight: “A lot of internet criminal activity is on online forums that are not hidden, you just have to know where to find them. Like paedophile websites: people who use them might go to an innocent-looking website with a picture of flowers, click on the 18th flower, arrive on another innocent-looking website, click something there, and so on.” The paedophile ring convicted this autumn and currently awaiting sentence for offences involving Little Ted’s nursery in Plymouth met on Facebook. Such secret criminal networks are not purely a product of the digital age: codes and slang and pathways known only to initiates were granting access to illicit worlds long before the internet. To libertarians such as O’Brien and Clarke the hidden internet, however you define it, is constantly under threat from restrictive governments and corporations. Its freedoms, they say, must be defended absolutely. “Child pornography does exist on Freenet,” says Clarke. “But it exists all over the web, in the post . . . At Freenet we could establish a virus to destroy any child pornography on Freenet – we could implement that technically. But then whoever has the key [to that filtering software] becomes a target. Suddenly we’d start getting served copyright notices; anything suspect on Freenet, we’d get pressure to shut it down. To modify Freenet would be the end of Freenet.”

Always recorded
According to the police, for criminal users of services such as Freenet, the end is coming anyway. The PCeU spokesman says, “The anonymity things, there are ways to get round them, and we do get round them. When you use the internet, something’s always recorded somewhere. It’s a question of identifying who is holding that information.” Don’t the police find their investigations obstructed by the libertarian culture of so much life online? “No, people tend to be co-operative.” The internet, for all its anarchy, is becoming steadily more commercialised; as internet service providers, for example, become larger and more profit-driven, the spokesman suggests, it is increasingly in their interests to accept a degree of policing. “There has been an increasing centralisation,” Ian Clarke acknowledges regretfully. Meanwhile the search engine companies are restlessly looking for paths into the deep web and the other sections of the internet currently denied to them. “There’s a deep implication for privacy,” says Anand Rajaraman of Kosmix. “Tonnes and tonnes of stuff out there on the deep web has what I call security through obscurity. But security through obscurity is actually a false security. You [the average internet user] can’t find something, but the bad guys can find it if they try hard enough.” As Kosmix and other search engines improve, he says, they will make the internet truly transparent: “You will be on the same level playing field as the bad guys.” The internet as a sort of electronic panopticon, everything on it unforgivingly visible and retrievable – suddenly its current murky depths seem in some ways preferable. 
Ten years ago Tim Berners-Lee, the British computer scientist credited with inventing the web, wrote: “I have a dream for the web in which computers become capable of analysing all the data on the web – the content, links, and transactions between people … A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines.” Yet this “semantic web” remains the stuff of knotty computer science papers rather than a reality. “It’s really been the holy grail for 30 years,” says Bergman. One obstacle, he continues, is that the internet continues to expand in unpredictable and messy surges. “The boundaries of what the web is have become much more blurred. Is Twitter part of the web or part of something else? Now the web, in a sense, is just everything. In 1998, the NEC laboratory at Princeton published a paper on the size of the internet. Who could get something like that published now? You can’t talk about how big the internet is. Because what is the metric?”

Gold Rush
It seems likely that the internet will remain in its Gold Rush phase for some time yet. And in the crevices and corners of its slightly thrown-together structures, darknets and other private online environments will continue to flourish. They can be inspiring places to spend time in, full of dissidents and eccentrics and the internet’s original freewheeling spirit. But a darknet is not always somewhere for the squeamish. On Freenet, there is currently a “freesite” which makes allegations against supposed paedophiles, complete with names, photographs, extensive details of their lives online, and partial home addresses. In much smaller type underneath runs the disclaimer: “The material contained in this freesite is hearsay . . . It is not admissable in court proceedings and would certainly not reach the burden of proof requirement of a criminal trial.” For the time being, when I’m wandering around online, I may stick to Google.


Smartphone Invader Tracks Your Every Move
Carrier IQ software, installed on more than 141 million mobile phones, tracks GPS location, websites visited, search queries, and all keys pressed.
by Mathew J. Schwartz  /  November 16, 2011

Software on many smartphones is tracking every move and website visited, without the knowledge of the phone’s user. And that information is being collected by a little-known company, which could be sharing it with law enforcement agencies without requiring a subpoena and without keeping a record of the query. That’s among the conclusions that can be drawn from the discovery of a rootkit that’s running on a number of Verizon and Sprint phones, which tracks not just phone numbers dialed, but also the user’s GPS coordinates, websites visited, keys pressed, and many website searches, according to security researcher Trevor Eckhart. He discovered the rootkit after tracing suspicious network activity in a data center that he manages, and which he suspected related to a virus infection. But he traced the activity back to software made by Carrier IQ, which describes its “mobile service delivery” software as being a tool for measuring smartphone service quality and usage using software embedded in handsets. “The Carrier IQ solution gives you the unique ability to analyze in detail usage scenarios and fault conditions by type, location, application, and network performance while providing you with a detailed insight into the mobile experience as delivered at the handset rather than simply the state of the network components carrying it,” according to the website.

Carrier IQ software runs on 141 million handsets. In the United States, it ships installed by default on many handsets sold via Sprint and Verizon, and runs on a number of platforms, including Android, BlackBerry, and Nokia smartphones and tablets. Rather than carriers using Carrier IQ software to collect data and then store it themselves, it appears that Carrier IQ handles both the data collection and related analytics. According to the company’s privacy and security policy, “information transmitted from enabled mobile devices is stored in a secure data center facility that meets or exceeds industry best practice guidelines for security policies and procedures.” The policy doesn’t detail those policies and procedures.

Eckhart said in an interview that the software is often configured by carriers to hide its presence from users. That means it functions per the Wikipedia definition of a rootkit: “Software that enables continued privileged access to a computer while actively hiding its presence from administrators by subverting standard operating system functionality or other applications.” The software, however, doesn’t have to be stealthy. Eckhart said that the default version of Carrier IQ “makes its presence known by putting a checkmark in the status bar,” and can generate surveys if calls get dropped or browsers crash unexpectedly, to help engineers identify the underlying problem. Still, after reviewing public-facing training videos he found online, Eckhart said he was alarmed to see just how much data was being gathered by Carrier IQ, and how easily it could be searched en masse–all of which makes him suspicious about how the data is being used. “If this was just legit use, say monitoring dropped calls, why would all on/off switches be stripped and made completely invisible? Users should always have an option to ‘opt-in’ to a program. There are obviously other uses,” he said. “It is a massive invasion of privacy.”

Carrier IQ makes the information it collects available to its customers via a portal. Eckhart said in a blog post that “from leaked training documents we can see that portal operators can view and [search] metrics by equipment ID, subscriber ID, and more.” As a result, anyone with access to the portal can “know ‘Joe Anyone’s’ location at any given time, what he is running on his device, keys being pressed, applications being used,” he said. Carrier IQ spokeswoman Mira Woods said, “Our customers select which metrics they need to gather based on their business need–such as network planning, customer care, device performance–within the bounds of the agreement they form with their end users. These business rules are translated into a profile, placed on the device which provides instructions on what metrics to actually gather.” She said that all collected data gets transmitted by Carrier IQ to carriers using a “secure encrypted channel,” at which point they typically use it for customer service or analyzing network performance. “The further processing or reuse of this data is subject to the agreement formed between our customer and their end user (of the mobile device) and the applicable laws of the country in which they are operating,” she said.
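Woods’ description suggests the device-side agent gathers only the metrics enabled in a carrier-chosen “profile.” The sketch below illustrates that filtering idea only; the profile structure, field names, and metric names are invented for this example, since Carrier IQ’s actual profile format is not public.

```python
# Illustrative only: the profile structure and metric names below are
# invented; Carrier IQ's real device profiles are proprietary.

def apply_profile(profile, device_events):
    """Keep only the events whose metric name the carrier enabled."""
    enabled = set(profile["metrics"])
    return [event for event in device_events if event["metric"] in enabled]

# A carrier that only wants network-quality data would enable metrics
# like dropped calls; anything else the agent sees is discarded.
profile = {"carrier": "ExampleTel", "metrics": ["dropped_call", "battery"]}
events = [
    {"metric": "dropped_call", "cell": "A12"},
    {"metric": "keypress", "key": "q"},   # not enabled in this profile
    {"metric": "battery", "level": 14},
]
print(apply_profile(profile, events))
```

Eckhart’s complaint, put in these terms, is that nothing visible to the user constrains how broad a profile can be.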

One concern for privacy advocates, however, is that carriers apparently share information of the type collected by this software freely with law enforcement agencies. Notably, research published by privacy expert Christopher Soghoian in 2009 found that Sprint had shared customers’ GPS location information with law enforcement agencies more than 8 million times over a 13-month period. Sprint had also developed tools to automatically fulfill the large volume of law enforcement agency requests, which seem to occur in a legal gray area that results in none of the requests or shared data queries being recorded. Eckhart said the information being collected by Carrier IQ was even more expansive than what Sprint had shared in 2009. “We can see from the dashboard that GPS data can be viewed historically or in real time by date, time, whatever. That makes for a very efficient law enforcement portal, just like what’s detailed being blatantly abused in Soghoian’s article. It also relates to how Verizon is gathering info for their new ad tracking program,” he said. “Things like exact keypress data being stored as well shows this. What use would what words I’m typing ever be to ‘network performance’? Maybe words per minute would be useful, but it’s not that–it’s an exact record of what you are typing.”

Verizon has publicly acknowledged that it uses Carrier IQ statistics, both for mobile usage information (device location, app and feature usage, and website addresses, which may include search string) as well as consumer information (use of Verizon products and services, as well as demographic information, such as gender, age, and dining preferences). It also offers customers a way to opt out of this usage. Meanwhile, “Sprint is known to collect Carrier IQ data because users have the application running reporting to them, but have no privacy policy, retention policy, or public information on what they use the data for,” said Eckhart. But Sprint spokesman Jason Gertzen said via email that Sprint uses the information for diagnostic purposes. “Carrier IQ provides information that allows us to analyze our network performance and identify where we should be improving service. We also use the data to understand device performance so we can figure out when issues are occurring,” he said. “The information collected is not sold and we don’t provide a direct feed of this data to anyone outside of Sprint.” Deactivating installed Carrier IQ software can be difficult, at least as implemented by many carriers. While Samsung Android devices offer a somewhat hidden Carrier IQ on/off switch, HTC Android devices offer no such feature. Accordingly, if you buy an ex-Sprint phone off of eBay and Carrier IQ software is installed, you’re being tracked, said Eckhart. But Carrier IQ’s Woods said that her company’s software is set to disable data collection if the device’s SIM card or mobile carrier changes.

How can you determine if the software is running on a device? “Logging TestApp scanner will detect it in the kernel–use ‘Check Props’ feature–as well as files used in the regular Loggers scan,” said Eckhart. He’s the developer behind Logging TestApp, which can also be used to reveal the Carrier IQ menus often hidden by carriers when they roll out the application. If Carrier IQ is found and isn’t wanted, deleting it can also be difficult. “The only way to remove Carrier IQ is with advanced skills. If you choose to void your warranty and unlock your bootloader you can (mostly) remove Carrier IQ,” he said. “Logging TestApp can identify files used in logging and you can manually patch or use [the] Pro version to automatically remove [them].”
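A rough way to picture what a detector like Logging TestApp does is to scan a device’s file and package listings for known Carrier IQ artifacts. The sketch below checks listings you would first pull off the device yourself (for example via adb); the hint strings are assumptions drawn from public reports of the time, not an authoritative list.

```python
# Sketch of artifact-based detection. The hint strings below are
# assumptions based on public reports (the com.carrieriq.iqagent package
# and libiq_* libraries), not a complete or verified list.

CIQ_FILE_HINTS = ("libiq_client.so", "libiq_service.so")
CIQ_PACKAGE_HINTS = ("com.carrieriq.iqagent",)

def find_carrier_iq(file_paths, package_names):
    """Return every path or package that matches a Carrier IQ hint."""
    hits = [path for path in file_paths
            if any(hint in path for hint in CIQ_FILE_HINTS)]
    hits += [pkg for pkg in package_names
             if any(hint in pkg for hint in CIQ_PACKAGE_HINTS)]
    return hits

# Listings like these could be captured with `adb shell ls /system/lib`
# and `adb shell pm list packages` before being fed to the function.
files = ["/system/lib/libc.so", "/system/lib/libiq_client.so"]
packages = ["com.android.settings", "com.carrieriq.iqagent"]
print(find_carrier_iq(files, packages))
```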

Android expert Tim Schofield has also released a YouTube video showing how to remove Carrier IQ from the Samsung Epic 4G running Android Gingerbread 2.3.5, but warned that it would require flashing the ROM. “What [Carrier IQ] does is log things you do and send it to Sprint, so it’s like a spyware thing that you don’t want on your phone,” he said.

(Samsung screenshots courtesy of k0nane on XDA-Developers; see his full post on removing Carrier IQ.)

Carrier IQ defends against Android rootkits accusation
Handset makers and carriers to blame
by Lawrence Latif / Nov 17 2011

MOBILE ANALYTICS OUTFIT Carrier IQ is facing a growing firestorm over its secretive analytics software that is deeply embedded into mobile operating systems such as Google’s Android. Carrier IQ, which claims to provide ‘mobile intelligence’, has been accused of supplying rootkits that track user interactions on smartphones. Carrier IQ’s software is found on many operating systems including Google’s Android and records application runtimes, media playback, location statistics and when calls are received.

An investigation conducted by the smart chaps at XDA-Developers brought Carrier IQ’s activities to light, with the investigators labeling the software as a rootkit. It also found that stopping the service was not a trivial matter, since it’s hidden under several layers of abstraction.
Carrier IQ became aware of the growing backlash against its software and issued a release in which it claimed device manufacturers use its software to “improve the quality of the network, understand device issues and ultimately improve the user experience”. It went on to categorically deny that it was tracking keystrokes or providing tracking tools.
As for the data collected by Carrier IQ’s software, the firm went on to say, “Our customers have stringent policies and obligations on data collection and retention. Each customer is different and our technology is customized to their exacting needs and legal requirements.”

Being fair to Carrier IQ, it is not secretly splicing tracker-ware into its products as Sony did; rather, carriers and handset makers are opting to include the software without informing users. The handset makers should be questioned as to their motives for including such software and asked to provide detailed documents listing what they collect, what they do with the information and how long the information is stored. Whatever the reason for including Carrier IQ’s software, the facts are that users were unaware of it and it is engineered to be extremely difficult to remove. Those facts alone are enough to warrant serious concern.

Carrier IQ markets its software as a “mobile service intelligence solution” on its Web site. “We give wireless carriers and handset manufacturers unprecedented insight into their customers’ mobile experience.” (Credit: Carrier IQ)


by Elinor Mills / November 17, 2011

Android developer Trevor Eckhart recently noticed something odd on several EVO HTC devices: hidden software that phoned home to the carrier with details about how the phone was being used and where it was. The software, Carrier IQ, tracked the location of the phone, what keys were pressed, which Web pages were visited, when calls were placed, and other information on how the device is used and when.

Eckhart discovered that Carrier IQ can be shown as present on the phone to users or configured as hidden, which was the case on the HTC phones he analyzed. And he found what he described as “leaked training documents” that indicate that carriers can view customer usage information via a remote portal that displays devices by equipment ID and subscriber ID. “The only way to remove Carrier IQ is with advanced skills,” Eckhart wrote in a report, published on the Web on Monday. “If you choose to void your warranty and unlock your bootloader you can (mostly) remove Carrier IQ.” Sprint, meanwhile, “has no privacy policy, retention policy, or public information on what they use the data for,” Eckhart wrote.

HTC Android devices have no on-off switch for Carrier IQ, while Samsung devices do, but it is not easily accessible or pointed out to users, he said. Because customers do not give explicit permission for this data collection and don’t even know this software is on their phones, and they can’t opt out of it, Eckhart says it is a clear privacy violation. He likens Carrier IQ to malware. “Carrier IQ is rootkit software,” he wrote in his report. “It listens on the phones for commands contained in ‘tasking profiles’ sent a number of ways and returns whatever ‘metric’ was asked for.”

According to Wikipedia, a rootkit is software “that enables continued privileged access to a computer while actively hiding its presence from administrators by subverting standard operating system functionality or other applications.” Typically, hackers install a rootkit onto a target system by exploiting a software vulnerability or using a stolen password. Rootkits are characterized by stealth and malicious purpose. Definitions aside, the types of data gathered are enough to set off alarms for privacy-minded folk. “If it’s just for ‘network performance’ why wouldn’t they give users a choice?” Eckhart said in an e-mail to CNET late last night. “Any program logging this extent of personal information should always be opt-in.”

A Sprint spokesman provided a general statement about the use of Carrier IQ, but did not provide comment to follow-up questions about whether customers know about the data collection and why they can’t opt out. Here is the Sprint statement:

“Carrier IQ provides information that allows Sprint, and other carriers that use it, to analyze our network performance and identify where we should be improving service. We also use the data to understand device performance so we can figure out when issues are occurring. We collect enough information to understand the customer experience with devices on our network and how to address any connection problems, but we do not and cannot look at the contents of messages, photos, videos, etc., using this tool. The information collected is not sold and we don’t provide a direct feed of this data to anyone outside of Sprint.

Sprint maintains a serious commitment to respecting and protecting the privacy and security of each customer’s personally identifiable information and other customer data. A key element of this involves communicating with our customers about our information privacy practices. The Sprint privacy policy makes it clear we collect information that includes how a device is functioning and how it is being used. Carrier IQ is an integral part of the Sprint service.”

Carrier IQ representatives said the data carriers collect with their software has a legitimate purpose and is handled responsibly. “We are collecting information that would be regarded by most people as sensitive,” Andrew Coward, vice president of marketing for Carrier IQ, told CNET today. “So we work within the network of the operator or in the facilities [they approve] and which are up to their standards as far as data retention” and encryption.

Mountain View, Calif.-based Carrier IQ launched six years ago expressly to offer software that serves as an “embedded performance management tool,” he said. “This has caught us off guard in that the technology has been around a long time,” he added. “We’re in the business of counting things that happen on the phone to help carriers improve service.” For example, knowing exactly where a phone call was dropped can help a carrier identify network troubles in a geographic location. “We do want to know when you’ve had a dropped call, if an SMS didn’t work and if you’ve got battery life problems,” Coward said.

Information on keys that are pressed and how many times the phone is charged can provide activity information over the life of a phone, which is important for device manufacturers, he said. “We are not interested and do not gather the text or the text message and do not have the capacity to do that,” he said. Processing specific data like that from millions of devices would be impractical to do, he said. In addition, the data logged is not real-time in Carrier IQ, which diminishes its usefulness, and carriers have other ways of getting sensitive user data if they want, according to Coward. “You can’t make a phone call on the mobile network without them knowing where you are,” he said. “Our customers believe that they have obtained permission from their customers to gather this performance data.”

But Eckhart questioned the legality of carriers collecting keypresses and some of the other information. “As far as Sprint, the data they are logging is very personal,” he said in his e-mail. “How do we know who is getting this? Every customer service personnel? Law enforcement? Is my location and browsing history stored forever?”

It’s unclear what devices have Carrier IQ software installed. Coward said Carrier IQ is used by more than a dozen device manufacturers, including smartphones and tablets, but he declined to name the companies or devices. Eckhart names HTC, Samsung, Nokia, BlackBerry, Sprint, and Verizon in his report on Carrier IQ. HTC did not respond to requests for comment and a Samsung representative said she would try to get comment. But a Verizon representative said the company does not use Carrier IQ on its devices and Coward confirmed that. (Eckhart’s report linked to this Verizon Web page that talks about collecting data on phone location, Web sites visited and other information.) Eckhart did not immediately respond to e-mails and phone calls seeking a follow-up interview today. In the paranoid world of security researchers, the notion of privacy is nine-tenths perception and potential. Carriers should make it clear what data they are collecting and what benefit doing so provides to the customers. And, if possible, it should be opt in.


Responding to the US Senate request led by Senator Al Franken, AT&T, Sprint, HTC, and Samsung have sent lists of all the phones with Carrier IQ spyware installed on them.

The carriers have also admitted that Carrier IQ also captured the content of text messages “under certain conditions.”

Here’s the complete list:

AT&T claims about 900,000 users using phones with Carrier IQ. The software is active on eleven AT&T wireless consumer devices:

• Motorola Atrix 2
• Motorola Bravo
• Pantech Pursuit II
• Pantech Breeze 3
• Pantech P5000 (Link 2)
• Pantech Pocket
• Sierra Wireless Shockwave
• LG Thrill
• ZTE Avail
• ZTE Z331
• SEMC Xperia Play

It’s also installed but not active “due to the potential for the software agent to interfere with the performance” of the following phones:

• HTC Vivid
• LG Nitro
• Samsung Skyrocket

Carrier IQ is also packaged in the free AT&T Mark the Spot application, available for Android and RIM.

26 million active Sprint devices have the Carrier IQ software installed, says Sprint. That’s almost half of Sprint’s 53.4 million subscribers, so it is reasonable to assume the software is installed on Android phones from all of the manufacturers Sprint reported to the US Senate:

• Audiovox
• Franklin
• Huawei
• Kyocera
• LG
• Motorola
• Novatel
• Palmone
• Samsung
• Sanyo
• Sierra Wireless

Samsung claims 25 million phones affected. It has directly installed Carrier IQ at the factory in the following models:

• SPH-M800 (Samsung Instinct)
• SPH-M540 (Samsung Rant)
• SPH-M630 (Samsung Highnote)
• SPH-M810 (Samsung Instinct s30)
• SPH-M550 (Samsung Exclaim)
• SPH-M560 (Samsung Reclaim)
• SPH-M850 (Samsung Instinct HD)
• SPH-I350 (Samsung Intrepid)
• SPH-M900 (Samsung Moment)
• SPH-M350 (Samsung Seek)
• SPH-M570 (Samsung Restore)
• SPH-D700 (Samsung Epic 4G)
• SPH-M910 (Samsung Intercept)
• SPH-M920 (Samsung Transform)
• SPH-M260 (Samsung Factor)
• SPH-M380 (Samsung Trender)
• SPH-M820 (Samsung Galaxy Prevail)
• SPH-M580 (Samsung Replenish)
• SPH-D600 (Samsung Conquer 4G)
• SPH-M930 (Samsung Transform Ultra)
• SPH-D710 (Samsung Epic 4G Touch)
• SPH-M220
• SPH-M240
• SPH-M320
• SPH-M330
• SPH-M360
• SPH-P100
• SPH-Z400

• T989 (Samsung Hercules)
• T679 (Samsung Galaxy W)

• SCH-R500 (Samsung Hue)
• SCH-R631 (Samsung Messager Touch)
• SCH-R261 (Samsung Chrono)
• SCH-R380 (Samsung Freeform III)

• SGH-i727 (Samsung Galaxy S II Skyrocket)

HTC preinstalled Carrier IQ spyware on about 6.3 million Android phones:

• Snap
• Touch Pro 2
• Hero
• EVO 4G
• EVO Shift 4G
• EVO Design

• Amaze 4G

• Vivid

What is Carrier IQ?
Carrier IQ logs information about your whereabouts as well as other personal data such as browsing history, application usage and phone numbers.

The Carrier IQ application also captures the content of your text messages, according to AT&T. This happens when you are talking on the phone and you send or receive a text message: “the CIQ software also captured the content of SMS text messages—when and only when—such messages were sent or received while a voice call was in progress.” [US Senator Al Franken’s response; AT&T Response (PDF); Sprint Response (PDF); Samsung Response (PDF); HTC Response (PDF); Carrier IQ response (PDF); via The Verge and Business Week]


“The NYPD began taking pictures of suspects’ irises on Monday. The new program, which started in Manhattan and will expand to other boroughs by next month, is designed to prevent suspects from disguising their identities. The technology allows police to match a prisoner to his or her iris in as little as 5 seconds. Police said the move was prompted by a recent case in which a felon passed himself as a lesser offender and walked out of the courthouse. Police said the eye shots will not be kept on file if the charges are dismissed or if the case is sealed. “They’re being treated as other cases would be,” said Deputy Commissioner Paul Browne, the NYPD’s top spokesman.”

“Along with fingerprints and mug shots, the New York City Police Department is now taking photographs of the irises of crime suspects. The NYPD says the images will be used to help avoid cases of mistaken identity. The process takes about five minutes. Every suspect will be scanned again using a handheld device shortly before they are arraigned to make sure the irises match. Police say the software, handheld device and cameras cost about $23,800 each, and 21 systems will be used around the city. Central booking in Manhattan started taking photos Monday. The devices will be in use in Brooklyn and the Bronx in the upcoming weeks, and later in Staten Island and Queens.”

We’ve all seen and obsessively referenced Minority Report, Steven Spielberg’s adaptation of Philip K. Dick’s dystopian future, where the public is tracked everywhere they go, from shopping malls to work to mass transit to the privacy of their own homes. The technology is here. I’ve seen it myself. It’s seen me, too, and scanned my irises.

Biometrics R&D firm Global Rainmakers Inc. (GRI) announced today that it is rolling out its iris scanning technology to create what it calls “the most secure city in the world.” In a partnership with Leon — one of the largest cities in Mexico, with a population of more than a million — GRI will fill the city with eye-scanners. That will help law enforcement revolutionize the way we live — not to mention marketers.

“In the future, whether it’s entering your home, opening your car, entering your workspace, getting a pharmacy prescription refilled, or having your medical records pulled up, everything will come off that unique key that is your iris,” says Jeff Carter, CDO of Global Rainmakers. Before coming to GRI, Carter headed a think tank partnership between Bank of America, Harvard, and MIT. “Every person, place, and thing on this planet will be connected [to the iris system] within the next 10 years,” he says.

Leon is the first step. To implement the system, the city is creating a database of irises. Criminals will automatically be enrolled, their irises scanned once convicted. Law-abiding citizens will have the option to opt-in.

When these residents catch a train or bus, or take out money from an ATM, they will scan their irises, rather than swiping a metro or bank card. Police officers will monitor these scans and track the movements of watch-listed individuals. “Fraud, which is a $50 billion problem, will be completely eradicated,” says Carter. Not even the “dead eyeballs” seen in Minority Report could trick the system, he says. “If you’ve been convicted of a crime, in essence, this will act as a digital scarlet letter. If you’re a known shoplifter, for example, you won’t be able to go into a store without being flagged. For others, boarding a plane will be impossible.”

GRI’s scanning devices are currently shipping to the city, where integration will begin with law enforcement facilities, security check-points, police stations, and detention areas. This first phase will cost less than $5 million. Phase II, which will roll out in the next three years, will focus more on commercial enterprises. Scanners will be placed in mass transit, medical centers and banks, among other public and private locations.

The devices range from large-scale scanners like the Hbox (shown in the airport-security prototype above), which can snap up to 50 people per minute in motion, to smaller scanners like the EyeSwipe and EyeSwipe Mini, which can capture the irises of 15 to 30 people per minute.

I tested these devices at GRI’s R&D facilities in New York City last week. It took less than a second for my irises to be scanned and registered in the company’s database. Every time I went through the scanners after that–even when running through (because everybody runs, right, Tom Cruise?)–my eyes were scanned and identified correctly. (You can see me getting scanned on the Hbox in the video below. “Welcome Austin,” the robotic voice chimes.)
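GRI does not disclose its matching algorithm, but iris recognition in the academic literature typically reduces each eye to a binary “iris code” and compares codes by fractional Hamming distance; a small distance means the same eye. Here is a minimal sketch of that comparison, with toy 16-bit codes standing in for real codes of a couple of thousand bits:

```python
# Minimal sketch of iris-code matching as described in the literature,
# not GRI's proprietary pipeline. Real iris codes are ~2,048 bits and
# include masking for eyelids and reflections, omitted here.

def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of differing bits between two equal-length bit strings."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def is_match(code_a: str, code_b: str, threshold: float = 0.32) -> bool:
    # ~0.32 is a commonly cited decision threshold: codes from the same
    # eye differ only by sensor noise, different eyes by roughly half.
    return hamming_distance(code_a, code_b) < threshold

enrolled = "1011001110100101"
fresh_scan = "1011001110100111"  # same eye, one bit flipped by noise
print(is_match(enrolled, fresh_scan))  # a match despite the noisy bit
```

The sub-second identification I saw is plausible under this scheme because comparing bit strings is cheap enough to run against an entire database per scan.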

For such a Big Brother-esque system, why would any law-abiding resident ever volunteer to scan their irises into a public database, and sacrifice their privacy? GRI hopes that the immediate value the system creates will alleviate any concern. “There’s a lot of convenience to this–you’ll have nothing to carry except your eyes,” says Carter, claiming that consumers will no longer be carded at bars and liquor stores. And he has a warning for those thinking of opting out: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.”

This vision of the future eerily matches Minority Report, and GRI knows it. “Minority Report is one possible outcome,” admits Carter. “I don’t think that’s our company’s aim, but I think what we’re going to see is an environment well beyond what you see in that movie–minus the precogs, of course.”

When I asked Carter whether he felt the film was intended as a dystopian view of the future of privacy, he pointed out that much of our private life is already tracked by telecoms and banks, not to mention Facebook. “The banks already know more about what we do in our daily life–they know what we eat, where we go, what we purchase–our deepest secrets,” he says. “We’re not talking about anything different here–just a system that’s good for all of us.”

One potential benefit? Carter believes the system could be used to intermittently scan truck drivers on highways to make sure they haven’t been on the road for too long.

GRI also predicts that iris scanners will help marketers. “Digital signage,” for example, could enable advertisers to track behavior and emotion. “In ten years, you may just have one sensor that is literally able to identify hundreds of people in motion at a distance and determine their geo-location and their intent–you’ll be able to see how many eyeballs looked at a billboard,” Carter says. “You can start to track from the point a person is browsing on Google and finds something they want to purchase, to the point they cross the threshold in a Target or Walmart and actually make the purchase. You start to see the entire life cycle of marketing.”

So will we live the future under iris scanners and constant Big Brother monitoring? According to Carter, eye scanners will soon be so cost-effective–between $50 and $100 each–that in the not-too-distant future we’ll have “billions and billions of sensors” across the globe.

Goodbye 2010. Hello 1984.

Rutgers team proposes Net alternative
by Rick Merritt / 4/28/2011

San Jose, Calif. – A team of researchers at Rutgers University has launched the latest of a group of wireless network initiatives aiming to create a more open alternative to the Internet. MondoNet aims to enable a mesh network that lets a hybrid collection of new and existing Wi-Fi, WiMax and other wireless devices connect to each other without going through a central carrier. A draft proposal for MondoNet describes its premise as well as how it will gather the best of existing technologies for mobile ad-hoc wireless mesh networks (MANETs). The project’s goal is to create a system that provides both greater freedom and privacy for individual users than today’s Web. Aram Sinnreich, organizer of MondoNet and an associate professor at Rutgers, outlined the proposal in a recent video. Today’s Web is subject to censorship and manipulation due to close links between a handful of carriers and their governments, he said, citing examples in China, Egypt and the U.S. “All the information [on the Internet] has to go through the eye of the needle of a few companies beholden to their governments,” Sinnreich said. “What we need is a new network,” he said. MondoNet aims to be as fast and feature-rich as the Net while being more immune to censorship and spying. Researchers hope to get funding to create a prototype of their concept in the Rutgers area. Many legal and technical challenges remain open, the researchers said. For example, they propose use of tcpcrypt for security, although they admit it is not immune to malicious attacks. It’s also not clear how MANETs will get permission to let users act as broadcasters or what form of licensing MondoNet will use for its software.
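The core mesh idea, each device relaying traffic for its neighbors so that no central carrier sits in the path, can be sketched with a toy breadth-first route search. This is a from-scratch illustration, not MondoNet code; the node names and radio-range links are invented.

```python
# Toy mesh routing sketch: phones relay for each other, so a message
# reaches its destination over multi-hop radio links with no carrier
# in the middle. Invented topology, not MondoNet's actual protocol.
from collections import deque

def find_route(links, source, dest):
    """Breadth-first search over radio-range links; returns one path."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        for neighbor in links.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # mesh is partitioned: no relay chain exists

# Five phones; only physically nearby pairs are in radio range.
mesh = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "dave"],
    "dave": ["carol", "erin"],
    "erin": ["dave"],
}
print(find_route(mesh, "alice", "erin"))
```

Real MANET protocols such as Babel and B.A.T.M.A.N. must also handle nodes that move and links whose quality changes continuously; this sketch freezes the topology for clarity.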

The effort aims to adopt techniques from a number of other pioneering efforts in MANETs including:

  • BATMAN: A Better Approach to Mobile ad-hoc Networks launched earlier this year.
  • Babel: A distance vector routing protocol
  • Daihinia: A tool to turn Wi-Fi devices into a mesh network.
  • Freedom Box: A simplified Linux server for distributed networks
  • GNUnet: A software framework for secure peer-to-peer networking

Aram Sinnreich
email: sinn [at] rutgers [dot] edu

Weaving a New ‘Net: A Mesh-Based Solution
for Democratizing Networked Communications
by Aram Sinnreich, Nathan Graham, & Aaron Trammell / Rutgers University

Recent developments, from the mass release of sensitive diplomatic cables by Wikileaks to the social media–fueled revolutions and protests currently gripping the Middle East and North Africa, have underscored the increasingly vital role of information and communication technologies (ICTs) in geopolitical affairs. Further, a wealth of recent research demonstrates the growing importance of digital networks in fostering cultural innovation and a vibrant public sphere, and the increasing centrality of these technologies to the daily lives of billions of individuals across the globe. Given the centrality of ICTs to these emerging changes in our social, cultural, and political landscapes, and the oft-invoked observation that “code is law,” it is essential that we develop and maintain a communications infrastructure that will enable individuals and communities (especially those in danger of political repression) to participate and contribute fully and actively to the public sphere, and to communicate confidently in private. Unfortunately, today’s infrastructure is not fully adequate to achieve this end. As U.S. Secretary of State Hillary Rodham Clinton recently observed, “the internet continues to be constricted in myriad ways worldwide.” While this is certainly the case in repressive political regimes from China to Iran, we face significant obstacles to “internet freedom” in America, as well. Although the internet is highly decentralized in its communication and social patterns, its technical and regulatory foundations are extremely hierarchical, due to centralized control by organizations like ICANN and oligopolistic ownership of network access. As a result of this centralization, digital communications are vulnerable to a degree of surveillance and censorship that would be unthinkable in traditional social arenas, threatening free speech and cyberliberties. Many laws and regulations exploit, rather than ameliorate this threat. 
Seemingly disparate factors like tiered access, intellectual property laws and national security measures, taken in combination, threaten to produce a communications environment in which cultural innovation is stifled, normative behaviors are criminalized, and political dissidence is dangerous or impossible. We believe that a new architecture is required in order to protect the continuance of civil liberties in networked society. In this article, we propose 10 “social specifications” describing the requirements of such an architecture, and outline a project called MondoNet designed to meet these specifications using ad hoc, wireless mesh networking technologies. We also address the legal and technical challenges facing the MondoNet project, and anticipate future developments in this field.

Weaving a New ‘Net: A Mesh-Based Solution
for Democratizing Networked Communications

On February 15, 2011, U.S. Secretary of State Hillary Rodham Clinton gave a speech entitled “Internet Rights and Wrongs: Choices & Challenges in a Networked World,” in which she reaffirmed America’s commitment to “internet freedom” as an increasingly vital element of our foreign policy (Clinton, 2011). In her words, internet freedom is “about ensuring that the internet remains a space where activities of all kinds can take place, from grand, ground-breaking, historic campaigns to the small, ordinary acts that people engage in every day.” Or, to put it simply, the internet is essential to the exercise of free speech and civil liberties in networked society. Recent political developments around the world appear to support this argument. Although the internet has been a platform for political speech and social action virtually since its inception (Rheingold, 1993), digital communications platforms have become an increasingly central component of resistance movements and other organized social action over the past five years, and consequently an increasingly popular target for repression, censorship, and surveillance. As Secretary Clinton herself observed, social and mobile media were important tools for both organizing and publicizing the massive antiregime protests in Iran in 2009 and Egypt in 2011, leading to government-imposed internet shutdowns in both cases, and contributing to the eventual ouster of Egyptian President Hosni Mubarak. The complete list of relevant examples is far longer; in countries ranging from China to Tunisia to Myanmar, political resistance and repression have moved from streets and cafes to mobile phones and laptops, and governments have devoted an ever greater number of resources to controlling and policing the flow of digital communications within and without their borders. 
In addition to its role in political struggle and change, the internet has also become central to the social, economic, and creative lives of billions of people around the world. A wealth of recent research (e.g., Deuze, 2006; Benkler, 2006; Coté & Pybus, 2007; Sinnreich, 2010; Baym, 2010) illustrates the growing importance of information and communication technologies (ICTs) in fostering cultural innovation, emerging markets, and a vibrant public sphere. Unfortunately, the challenges posed to online political speech and cultural innovation don’t end at America’s borders. Despite Secretary Clinton’s assertion that “on the spectrum of internet freedom, we place ourselves on the side of openness,” American citizens face numerous threats to free speech and civil liberties online, from both governmental and commercial institutions.

Infrastructure, Access, and Speech
We cannot understand the operation of the internet without first understanding the commercial interests of the private companies that provide its infrastructure, and control access to that infrastructure (deNardis, 2010). There is almost a complete lack of competition between these companies; at present, 97 percent of American consumers are forced to choose between at most two broadband providers (Turner, 2009). As Lawrence E. Strickling (2010), administrator of the National Telecommunications and Information Administration (NTIA), recently argued, “Broadband service providers have an incentive to use their control . . . to advantage their value-added services or to disadvantage competitive alternatives. In the absence of robust broadband competition, those providers may be able profitably to act on those incentives to the detriment of consumers and competition.” Consumers face a similar lack of choice in the wireless data market, an arena in which federal regulators possess even less power to exercise oversight.1 This lack of competition and effective regulation gives broadband and wireless providers a great deal of unchecked market power, which they have used, and have an incentive to continue using, in ways that undermine the ability of their customers to freely exchange information. In practice, we have already seen several instances of service providers exploiting this power to block communications for ideological, rather than purely profit-driven, motives. AT&T, for instance, has been criticized for censoring speech critical of President Bush during a live webcast (Marra, 2007). Similarly, Verizon Wireless has blocked text messages from NARAL, a pro-choice political group (Liptak, 2007). The consolidation of the internet access business raises political concerns beyond these anticompetitive implications. It also contributes to an environment in which free speech is constrained by the federal government itself.
One notable example is the NSA electronic surveillance program, a massive federal initiative to eavesdrop on the private communications of American citizens in the wake of the September 11, 2001 terrorist attacks. This program, which violated federal laws (ACLU, 2008), was only possible because the NSA was able to monitor the majority of communications by compelling a relatively small number of oligopolists to participate, presumably using federal regulatory power as leverage.

1 At the time of writing, AT&T has just announced its plans to acquire T-Mobile, potentially bringing the number of major American wireless data service providers from 4 to 3.

Of course, most governmental threats to free speech online come from laws, treaties, and policies that have been introduced and/or ratified by Congress. Although this is not the place for an exhaustive survey, a short list of troubling examples includes the revised Foreign Intelligence Surveillance Act (FISA), the Stored Communications Act (SCA), the Anti-Counterfeiting Trade Agreement (ACTA), the Combating Online Infringement and Counterfeits Act (COICA), and the as-yet-unnamed “backdoor bill,” a law requested by the White House that would give the Department of Justice unilateral power to compel ISPs to censor entire domains from the American public. Understood collectively, these examples indicate that the emerging legislative consensus accords “e-speech” less protection than traditional channels and forums (Sinnreich & Zager, 2008). In addition to these concerns, Zittrain (2009) and Moglen (2010) have pointed to the ways in which the emerging “cloud” architecture also undermines democratic and participatory communications. The consolidation of capital and information within a set of centralized corporate servers disempowers the user, as ownership of networked data skews away from local computers toward corporate-owned machines. In Moglen’s words, the “dis-empowered client [is] at the edge and the server in the middle. [Information was stored] far from the human beings who controlled, or thought they controlled, the operation of the computers that increasingly dominated their lives. This was a recipe for disaster.”

Resistance and Reinvention
The constraints on free speech and civil liberties we have mentioned have met with various forms of resistance over the years. From the beginning, as Turner (2006) relates, the internet’s military and hegemonic origins have been recast as an opportunity for democratic, or even utopian, sociopolitical action. From John Perry Barlow’s seminal 1996 manifesto, “A Declaration of the Independence of Cyberspace,” to today’s position papers and legal interventions by groups like Free Press, Electronic Privacy Information Center, and Electronic Frontier Foundation (a group Barlow cofounded), there has been a consistent effort to define and preserve online free speech and civil liberties, and to develop an ethical and legal framework surrounding these issues. Similarly, we may understand the emergence of networked participatory culture (Banks & Humphreys, 2008), convergence culture (Jenkins, 2006), and configurable culture (Sinnreich, 2010), and the mass adoption of alternative communication protocols like peer-to-peer file sharing, as a largely nonideological form of resistance against the monopolization and privatization of communication. Although an individual mash-up or remix may not be positioned as a challenge to copyright law (or even produced with effective understanding of such laws), for instance, the collective interest in producing and sharing these emerging cultural forms by the billions indicates an emerging set of norms at odds with the increasingly draconian conditions under which cultural expression may legally occur. However, despite the prevalence and effectiveness of these forms of resistance, which are positioned in opposition to cultural regulation through commercial and legal means, the threats to civil liberties and free speech we have identified can ultimately be attributed to a network architecture that lends itself to exploitative and hegemonic ends. 
As Lessig (1999) has written so concisely, “code is law.” And, despite our ambitions of “internet freedom,” and the wealth of democratized cultural forms and breadth of political opinions currently flowering online, we believe these freedoms will continue to be undermined by a network architecture that fundamentally privileges centralized control over collective deliberation. If power corrupts, as Lord Acton’s oft-quoted phrase suggests, then absolute power over global communications will inevitably corrupt the public sphere and undermine the democratic process. Thus, we propose that the best way to safeguard civil liberties in the networked age is through an architectural intervention. The internet’s infrastructure must be fundamentally reimagined if it is to serve as an effective platform for democracy. Though the benefits of hierarchical DNS regimes and long-distance terrestrial backbone infrastructure are clear from an engineering standpoint, they may also be at odds with the same political values they were ostensibly built to serve (Mueller, 2002). Not only does the current architecture place the United States in an exceptional and politically unsustainable role as global regulator, it also allows for the interests of consolidated capital to be furthered above all else. In the interest of promoting civil liberties in a democratic society, our network architecture must encourage free, unregulated speech (Balkin, 2004, p. 49). To guide ourselves and others in understanding what a reimagined network architecture would require if free speech and civil liberties are to be prioritized above all other considerations, we have developed a set of 10 “social specifications.” Our hope is that these may be understood as fundamental principles informing the development and deployment of next-generation networking technologies. Below, we will describe our own solution to these challenges, in the form of an ad hoc, wireless mesh network called MondoNet.

10 Social Specifications for a Democratized Network

1. Decentralized
The network should not be operated, maintained, or in any way reliant upon a single or minimally differentiated set of entities or technologies. No individual, entity, or group should be central to the network to the extent that its absence would measurably impact the network’s functionality or scope. Network participation should not require access to fixed, physical infrastructure of any sort.

2. Universally Accessible
The technology and expertise required to participate in the network should be available at minimal cost and effort to every human being on the planet. Furthermore, all users should be able to extend the network’s content and functionality to suit their own needs, or those of others. No aspect of the network’s functioning should be reliant upon proprietary technologies, information, or capital.

3. Censor-Proof
The network should be resistant to both regulatory and technical attempts to limit the nature of the information shared, restrict usage by given individuals or communities, or render the network, or any portion of it, inoperable or inaccessible.

4. Surveillance-Proof
The network should enable users to choose exactly what information they share with whom, and to participate anonymously if they so desire. Users should only have access to information if they are the designated recipients, or if it has been published openly.

5. Secure
The network should be organized in a way that minimizes the risk of malicious attacks or engineering failure. Information exchanged on the network should meet or exceed the delivery rate and reliability of information exchanged via the internet.

6. Scalable
The network should be organized with the expectation that its scale could reach or even exceed that of today’s internet. Special care should be taken to address the challenge of maintaining efficiency without the presence of a centralized backbone.

7. Permanent
The network’s density and redundancy should be great enough that it will operate persistently on a broad scale, and be available in full to any user within range of another user.

8. Fast (Enough)
The network should always achieve whatever speed is required for a “bottom-line” level of social and cultural participation. At present, we assert that the network’s data transfer rate should, at a minimum, be enough for voice-over-IP (VoIP) communications, and low-bitrate streaming video.

9. Independent
While the network will have the capacity to exchange information with internet users and nodes, it should also be able to operate independently. A large-scale failure or closure of internet infrastructure and content should have minimal effect on the network’s operations.

10. Evolvable
The network should be built with future development in mind. The platform should be flexible enough to support technologies, protocols, and modes of usage that have not yet been developed.

Our Solution: MondoNet
There are undoubtedly several potential technological routes to address the social specifications outlined above, and as network technology continues to evolve, we are certain that additional solutions will arise. Given today’s technological and social landscape, we believe the most promising approach is the development of a mobile, ad hoc wireless mesh network (sometimes abbreviated MANET; Rheingold, 2002). In a MANET, users connect directly to one another via WiFi or a similar wireless networking protocol, and each device becomes client, server, and router at once, sharing bandwidth and information with other devices, and enabling users to relay third-party information on behalf of their indirectly connected peers. Such a network requires no centralized infrastructure or access service provider; to join the mesh, one simply logs on within range of another peer, and to exit the network, one simply logs off. Ideally, the number and density of peers should be great enough that the network persists despite the continuing entrance and exit of individual nodes.
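The relay behavior described above can be sketched in a few lines. The following toy Python model is our own illustration, not a MondoNet specification; the class and method names are hypothetical, and real MANETs replace this naive flooding with the routing metrics discussed later in the article. It shows how each peer acts as client, server, and router at once, forwarding messages on behalf of indirectly connected peers while suppressing loops:

```python
import itertools

class MeshNode:
    """Toy MANET peer: client, server, and router in one (illustrative only)."""
    _node_ids = itertools.count()
    _msg_ids = itertools.count()

    def __init__(self):
        self.node_id = next(MeshNode._node_ids)
        self.neighbors = []   # peers currently within radio range
        self.seen = set()     # message IDs already handled (loop suppression)
        self.inbox = []       # payloads addressed to this node

    def link(self, other):
        # A symmetric radio link between two peers in range of each other.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def send(self, dest_id, payload, ttl=8):
        self._receive(next(MeshNode._msg_ids), dest_id, payload, ttl)

    def _receive(self, msg_id, dest_id, payload, ttl):
        if msg_id in self.seen:
            return            # already relayed once: drop to avoid loops
        self.seen.add(msg_id)
        if dest_id == self.node_id:
            self.inbox.append(payload)
        elif ttl > 0:         # relay on behalf of indirectly connected peers
            for peer in self.neighbors:
                peer._receive(msg_id, dest_id, payload, ttl - 1)
```

Joining the mesh is simply a matter of calling link() within range of an existing peer, and leaving requires no coordination at all, which mirrors the log-on/log-off behavior described above.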

MANET technology is not a silver bullet to address the challenges and specifications we outlined above, and none of the existing initiatives yet fits the bill (a subject we will address in greater depth below). In order to meet our social specifications, the network, which we call MondoNet, would need to be conscientiously designed and adapted with these challenges in mind. Given the limitations of today’s networking technologies, no MANET can be universally accessible or completely decentralized at launch. Because of the geographical proximity required for participation, MondoNet would need to develop first within local communities. Over time, those communities would themselves become regionally interconnected, and, ultimately, global (or at least intracontinental) networks could be established. (Other network-based technologies, such as the telephone, have grown in similar fashion [Wu, 2010].) The first phase of MondoNet’s rollout would require access to internet points-of-presence to bring users in contact with one another, and to provide them with access to content and services that are currently located exclusively on internet servers. This reliance on the internet as a prosthetic network undermines MondoNet’s independence and security by bringing network data back into the range of ISP and wireless carrier scrutiny and control. However, as MondoNet grows in size and coverage, we envision a greater number of content and service providers hosting their data on MondoNet peers, rendering traditional internet access increasingly unnecessary. Over the long term, MondoNet should operate as a completely detached network, independent of the internet. Security, censorship, and surveillance are additional challenges. In a normal ad hoc network, malicious peers can obtain sensitive information simply by joining the network and capturing the data they route on behalf of third parties.
We envision MondoNet as a natively encrypted platform, in which security, rather than openness, is the default status of all data. By leveraging existing, open-source encryption protocols, all interpersonal communications would be accessible only by intended participants. For communications and network publications intended to be globally accessible, we can encrypt information with “everybody” as a recipient and integrate the public key for decoding this data into the platform, so that the user experience will be identical to today’s experience of viewing unencrypted data via the internet. Another censorship threat comes in the form of locatability; in repressive regimes, the operation of a MondoNet peer may be seen as a punishable offense. Once the signal is traced to a given device, it may be deactivated, and the operator may be liable. Although there is an inherent risk in participating in any prohibited network, we believe there is greater safety in numbers. We hope to ameliorate the risk of participation, and boost the stability and efficiency of MondoNet, by introducing “repeater” peers into the network, which would function independently of human operators. Consisting of little more than a small antenna, a power supply, and a tiny flash memory chip, repeater peers could be produced cheaply in large quantities and distributed throughout MondoNet’s geographical coverage areas, hidden within public and private spaces (e.g., attached to street signs and automobiles with magnets, buried in trash cans, stashed behind inventory on store shelves). This would drastically decrease the ability of censors to identify human operators, and increase the density, speed, and permanence of network coverage. An additional challenge we must address is access.
All formats, standards, and documentation associated with MondoNet must be freely licensed or in the public domain, to ensure that (1) the technology cannot be monopolized or rendered inoperable by any given party; (2) development of the platform is accessible to all users, and will remain so in perpetuity; and (3) the cost to gain access to MondoNet software and services is as close to zero as possible.
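The “everybody” recipient concept described above can be made concrete with a deliberately simplified sketch. A real deployment would use vetted public-key cryptography; the hash-based XOR keystream below is a toy stand-in (all names, the key value, and the nonce are our own hypothetical choices), meant only to show how openly published data can still be encrypted by default, with the shared key baked into every client so that reading it feels identical to viewing plaintext:

```python
import hashlib

# Hypothetical well-known key shipped inside every client, so openly
# published content decrypts transparently for all users.
EVERYBODY_KEY = b"mondonet-everybody"

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy hash-chain keystream -- for illustration, NOT a real cipher."""
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def seal(plaintext: bytes, key: bytes = EVERYBODY_KEY, nonce: bytes = b"demo") -> bytes:
    # XOR the plaintext with the keystream; the identical call reverses it.
    ks = _keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

unseal = seal  # symmetric XOR: sealing and unsealing are the same operation
```

Private traffic would simply use a recipient-specific key in place of EVERYBODY_KEY, which is the sense in which security, rather than openness, becomes the default for all data.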

While open-source hardware will also help lower cost of access, we see an even greater opportunity in repurposing consumers’ existing mobile devices for inclusion in MondoNet. Throughout the developed world, it is common for consumers to upgrade their mobile phones and entertainment devices every 18–24 months, discarding previous-generation models or relegating them to a back closet. By downloading an easy-to-install firmware upgrade to devices such as smartphones, portable media players, and tablets, users should be able to access MondoNet with hardware they already own but have no current use for. The immediate incentive will be access to voice and video communications, peer-to-peer file sharing (P2P), and other valuable network services without the cost or liability associated with accessing the same services over the internet. What this means for MondoNet as a whole is potential installation on hundreds of millions of devices already in the hands or homes of users. There are further challenges to the successful deployment of MondoNet. The internet benefits immensely from a centralized governance and organizational structure, in terms of network efficiency, security, and operations. IP number assignment, DNS, and protocol compatibility are just a few of the issues that will be difficult to implement without a central authority. However, we do not view these challenges as insurmountable, and other interested parties are already working to address them. For instance, Pirate Bay developer Peter Sunde recently announced a P2P DNS initiative, which would theoretically address some of these challenges. Below, we will explore this and other platforms, technologies and initiatives that we believe MondoNet can emulate, partner with, adopt, or otherwise learn from.

Technical Considerations
A MANET that would conform to the social specifications outlined earlier faces numerous well-documented engineering challenges, but we are optimistic about its potential for success. In MondoNet, we are proposing a peer-based architecture for data transmission that breaks from the server/client model that has come to dominate popular conceptualizations of the internet (Moglen, 2010; Schollmeier, 2002). With current advances in battery life, mobile routing protocols, WiFi (802.11), WiMAX (802.16), encryption techniques, and human-centered design, MANETs could emerge as a viable alternative to current hierarchical systems. In addition, several high-profile projects, such as Freedom Box, Open Mesh, One Laptop Per Child (OLPC), the Serval Project, and a Better Approach to Mobile Ad Hoc Networking (B.A.T.M.A.N.), have proposals that could fortify the initiatives set forth by MondoNet by providing specialized routing protocols, persistent stationary nodes, and security measures, in addition to piquing user awareness of and interest in mesh networking. For security, overlay encryption built upon existing TCP/IP architecture is one popular and viable solution, although encryption and key verification introduce a heavy traffic load and bandwidth constraints to nodes that often have limited battery and computational power (Mamatha & Sharma, 2010, p. 276). Recent testing has demonstrated the success of a distributed security scheme, which utilizes multiple strategies at the network and link layers to efficiently reduce network security vulnerabilities (Gada et al., 2004; Khokhar, Ngadi & Mandala, 2008; Dhanalakshmi & Rajaram, 2008). A powerful layer-two encryption with a distributed scheme would put all users on the same broadcast domain while reducing traffic load. However, in order for anonymous users to connect and push data across heterogeneous networks, layer-two encryption may not be the best option.
Instead, a solution such as tcpcrypt, which uses opportunistic encryption by adding TCP header options to encrypt all traffic, would provide greater security, backwards compatibility with legacy TCP stacks, and minimized negotiation strain on the server (tcpcrypt’s developers report accepting connections 36 times faster than SSL). This approach falls back to standard, unencrypted TCP when an endpoint does not support the method, so tcpcrypt is capable of staggered deployment. Another important benefit is that tcpcrypt has no requirement for preshared keys (PSKs) or certificates. Tcpcrypt makes “end-to-end encryption of TCP traffic the default, not the exception” (Bittau, 2010). We face an additional major security challenge: the current operating frequencies are known, limited, and therefore easily jammed. Although MANETs are capable of creating diverse communication paths across a network, allowing for versatile rerouting around localized interference and overcoming a common problem with radio-based technologies, broad-interference attempts could still cripple the network. MondoNet must address potential interference from the environment, and jamming attempts from totalitarian governments and other malicious actors because, as Frankel et al. (2007) establish, the current standard from the Institute of Electrical and Electronics Engineers (IEEE; 802.11) offers no defense against jamming or flooding (p. 39).
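The graceful-degradation property of opportunistic encryption can be sketched as a simple handshake rule. This is our own simplification, not tcpcrypt’s actual wire format, and the "CRYPT" option name is hypothetical: each endpoint advertises support in its handshake options, traffic is encrypted only when both ends advertise it, and otherwise the connection silently proceeds as legacy plaintext TCP rather than failing:

```python
def negotiate(client_options: set, server_options: set) -> str:
    """Opportunistic upgrade: encrypt if both endpoints advertise support
    (via a hypothetical "CRYPT" handshake option), else fall back."""
    if "CRYPT" in client_options and "CRYPT" in server_options:
        return "encrypted"
    return "plaintext"  # legacy peer: no hard failure, just no protection
```

The key design property is that deploying the upgrade endpoint by endpoint never breaks connectivity, which is precisely what permits the staggered rollout described above.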

Fortunately, wireless bandwidth is becoming more difficult to disrupt. In 2008, the Federal Communications Commission (FCC) unanimously agreed to open a large portion of the unused wireless spectrum (frequencies previously reserved for analog television) for unlicensed use by white-spaces devices, or WSDs (Wu, Wang, Liu & Clancy, 2008, p. 9). In 2010, the FCC voted to distribute unlicensed spectrum for the first time in 25 years, reserving two channels for wireless microphones and opening the remainder of the former analog television band (Kim, 2010). This newly available spectrum, with its longer wavelength and better penetration, will allow wireless broadband access as part of the FCC’s National Broadband Plan. The Office of Engineering and Technology (OET) has selected nine administrators to manage and maintain the white-spaces database (FCC, 2011). WSDs are capable of detecting local frequencies in use (such as television stations) and avoiding interference, capabilities outlined in proposals from IEEE 802.11, 802.22, and the White Spaces Coalition (IEEE, 2009; Stevenson, 2009; Bangeman, 2007). An opportunistic multihop ad hoc network would create intermediate nodes through distributed storage, which would alleviate some of the problems associated with a network where “user disconnection is a feature rather than an exception” (Conti, 2007). These intermediate nodes would store data when no nodes are prepared to receive it and then forward the data to other peers within transmission range. Conti notes that an opportunistic ad hoc multihop network is “well-suited for a world of pervasive devices equipped with various wireless networking technologies” such as WiFi, WiMAX, Bluetooth, ZigBee, and plug-in servers that are “frequently out of range from a global network but are in the range of other networked devices” (p. viii). It is now more realistic than ever to lay the groundwork for MondoNet. By employing some of the improvements outlined above, the B.A.T.M.A.N. project claims to have achieved a wireless mesh consisting of 4,000 nodes. Other MANET routing protocol efforts include Babel, a distance vector routing protocol based on AODV, which utilizes a unique variation of the ETX link cost estimate instead of the hop metric used for most multihop ad hoc deliveries; WING, which provides added support for radio interfaces, uses the Weighted Cumulative Expected Transmission Time (WCETT) routing metric, and allows automatic channel assignment; and Roofnet’s SrcRR, which also takes advantage of the ETX metric. Babel, WING, and Roofnet are useful because all are capable of eliminating transient routing loops commonly associated with MANETs, which helps reduce unnecessary route duplication.
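The ETX (expected transmission count) metric used by Babel and SrcRR is straightforward to compute: for a link with measured forward delivery ratio df and reverse ratio dr, ETX = 1 / (df × dr), the expected number of transmissions (including retries) needed for one acknowledged delivery, and a route's cost is the sum of its links' ETX values. A minimal sketch (function names are our own) shows why two clean hops can beat one lossy one:

```python
def link_etx(df: float, dr: float) -> float:
    """Expected transmission count for one link: ETX = 1 / (df * dr),
    where df/dr are the measured forward/reverse delivery ratios in (0, 1]."""
    return 1.0 / (df * dr)

def path_etx(links) -> float:
    """Route cost is the sum of link ETX values; ETX-based protocols
    prefer the path with the lowest total."""
    return sum(link_etx(df, dr) for df, dr in links)

# Two reliable hops (0.9 delivery each way per hop) cost ~2.47 expected
# transmissions, beating a single lossy hop (0.5 x 0.6) at ~3.33.
two_good_hops = path_etx([(0.9, 0.9), (0.9, 0.9)])
one_lossy_hop = path_etx([(0.5, 0.6)])
```

Unlike a plain hop count, this metric penalizes lossy radio links directly, which is what makes it useful for the unstable, interference-prone environments a MANET must tolerate.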

Several individuals and organizations have proposed hardware to enable an effective MANET. These proposals include Freedom Boxes, which are “cheap, small, low-powered plug servers” (Dwyer, 2011) that run on Linux-based software; LANdroids, pocket-sized wireless network nodes that were created to travel autonomously with troops and disaster relief teams (Menke, 2010); OLPC XO, which is a Linux-based subcomputer built around an 802.11s WiFi mesh networking protocol; Roofnet, which, in addition to providing open source routing protocols, has set up 70 nodes in Portland, OR; and Solar Mesh, a solar-powered wireless mesh network developed by McMaster University, providing hotspot coverage and WiFi endstations. In addition to hardware and routing solutions, several other relevant projects and software frameworks are under development, such as GNUnet, a software framework allowing friend-to-friend (F2F) sharing; Daihinia, which transforms a common ad hoc WiFi network into an efficient multihop solution for communities; and SMesh, a hierarchical mesh network built upon Spines in which peers rely on the infrastructure to forward packets instead of relying on other peers. Finally, Open Mesh Project and Open Source Mesh are both recently launched projects that aim to establish effective mesh networks. Unlike MondoNet, these initiatives do not require network mobility, and have not presented a clearly delineated set of social specifications informing their technological development. However, we see these projects as valuable potential partners sharing a common set of goals and interests with MondoNet.

Pending Questions
An important set of remaining questions concerns the legal and regulatory environment for MondoNet. First, to ensure that the platform remains accessible to all potential users and available to current and future developers, we must undertake efforts to make sure that it is built on firm legal foundations. This means taking precautions not to violate any existing patents, as well as developing and sharing our own intellectual property under appropriate terms. Although we are already committed to an open license, there are dozens of free software licenses listed by the Open Source Initiative (n.d.), each offering a slightly different definition of openness and a slightly different solution for achieving it (Lamothe, 2006; Waugh & Metcalfe, 2008). Currently, our online documentation is licensed under a Creative Commons Attribution-ShareAlike 3.0 license,2 but we have yet to establish which software licensing platform will be most effective in supporting our ambitions for MondoNet. Second, although we envision MondoNet as a tool made expressly for maintaining free communications in the face of institutional and governmental censorship, we will operate under the shelter of law to the greatest extent possible. This entails a number of concerns, the details of which may differ from region to region and from year to year. One example is the question of spectrum licensing; namely, what permission do individual citizens have to broadcast information at given energy levels within given frequency ranges? One of the benefits of using WiFi (specifically, IEEE 802.11) is its broad international recognition as a platform for consumer communications technologies. Of course, there are more powerful frequency ranges that may be available as well (e.g., lower frequency “white spaces” recently unlicensed in the U.S. by the FCC [FCC, 2011; IEEE, 2009; Wu, Wang, Liu, & Clancy, 2008]).
As we will discuss further below, a related question concerns the extent to which we can integrate these multiple networking standards into a single mesh framework.

2 Details about this license can be found at

Finally, there are questions of legality surrounding the kinds of information that may be shared on MondoNet, and the treaties, laws, and precedents regarding the liability of network operators and technology providers who enable such sharing. Most sovereignties have taken pains to distinguish between permitted and unpermitted speech (e.g., intellectual property infringement, politically inflammatory messages, child pornography), and have developed systems to coerce platform providers to surveil and police their user bases. In the United States, for instance, the Digital Millennium Copyright Act (DMCA) makes it a felony to provide technology that is “designed or produced for the purpose of circumventing a technological measure that effectively controls access to a [copyrighted] work.” Even more to the point, in 2010 President Obama asked Congress to draft a new law that would require all communications technology providers to create a “back door” enabling wiretapping functionality for law enforcement (Savage, 2010; other nations, such as the United Arab Emirates and Saudi Arabia, have made similar moves in recent years). Given that the previous administration illegally used its wiretapping powers to surveil American citizens without a court order (Savage & Risen, 2010), we consider such back door functionality to be anathema to the MondoNet project. We therefore anticipate that developers, distributors, and users of MondoNet will come into conflict with American and international laws, regardless of the content or legal status of their individual communications. There are also several pending questions related to technologies and platforms. Creating a system that conforms to MondoNet’s social specifications poses several hurdles. In order to establish the credibility necessary to encourage users to adopt a new conceptualization of the internet, it is essential that MondoNet adhere to human-centered design principles.
Maguire (2001) identified five benefits of an effective, usable system: increased productivity, reduced errors, reduced training and support, improved acceptance, and enhanced reputation (p. 587). For users under duress during disaster relief efforts, and for populations under oppressive government censorship, system usability, security, and interface simplicity are especially essential. To achieve this end, several key issues still must be addressed, such as determining the optimal routing protocol and encryption method. When designing any MANET, the main goal is to reliably transmit data from one node to another while still delivering a reasonable quality of service in a resource-limited environment. Although Transmission Control Protocol (TCP) is the common standard at the transport layer, it has the notable disadvantage of being slower than User Datagram Protocol (UDP). Future investigations will look at the potential benefits of traffic dispersion using multipath routing, which could improve performance and reduce the amount of energy consumed between nodes (Karygiannis, Antonakakis & Apostolopoulos, 2006; Nácher et al., 2007). Since energy consumption is always an important consideration with mobile devices, design questions such as power-aware routing and security protocols (Toh, 2001; Liang & Yuansheng, 2004) are paramount, though hardware remedies, such as the recent threefold improvement to lithium-ion batteries using solid-state technology (Voith, 2010), are also being investigated.
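The energy argument for multipath dispersion can be made concrete with a toy model (function and node names are our own illustration): alternating packets across node-disjoint paths spreads the forwarding load, so no single relay drains its battery carrying the whole flow:

```python
from collections import Counter

def disperse(num_packets: int, paths) -> Counter:
    """Round-robin traffic dispersion: rotate packets across node-disjoint
    paths (each path is a list of relay node names) and tally the number
    of forwards -- a proxy for energy spent -- per relay."""
    load = Counter()
    for i in range(num_packets):
        for node in paths[i % len(paths)]:  # pick the next path in rotation
            load[node] += 1                 # each relay spends energy per packet
    return load
```

With two disjoint single-relay paths, 100 packets leave each relay forwarding 50 rather than one node forwarding all 100, roughly doubling the time before the busiest battery is exhausted; real power-aware protocols weight this choice by each node's remaining energy.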

Security is one of the primary obstacles to adoption in a MANET because the system is peer-based. Although opportunistic encryption through tcpcrypt provides many benefits, opportunistic encryption is inherently susceptible to active attacks. However, Bittau et al. (2010) describe an interesting approach that uses session IDs to prevent active attacks with tcpcrypt in MANETs (p. 7). Detecting malicious nodes in a MANET poses security problems because, unlike wired networks, anonymous, participatory MANETs are currently incapable of monitoring traffic and therefore lack an Intrusion Detection System (IDS; Karygiannis, Antonakakis & Apostolopoulos, 2006). Before MondoNet can be safely and effectively used by populations communicating under governments hostile to open information exchange, rigorous security protocols must be in place. Ensuring anonymity is also a complicated process that could be accomplished through tcpcrypt and by performing network address translation (NAT). Another major area of complication arises when attempting to connect networks. On top of every host running a MANET routing protocol (e.g., OLSR, B.A.T.M.A.N.), there will be a device bridging the two networks together. A very simple example is the relationship between a wireless device and a conventional wireless router: the device communicates with the router via 802.11 (WiFi), and the router then converts the transmission to 802.3 (Ethernet) over a network cable. In other words, a smartphone cannot communicate directly with a network cable; it needs another device, such as a wireless router, to let the WiFi-equipped smartphone send and receive packets from the network. The same would apply under 802.11s, the new IEEE standard being developed to define how interconnected wireless mesh devices communicate. 
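The NAT idea mentioned above can be illustrated with a toy translation table. This is a deliberately simplified sketch in application code (real NAT lives in the router or kernel, and every name here is invented for illustration): outgoing packets have their private source address rewritten to a shared public address, which is what hides the internal host.

```python
# Minimal sketch of a NAT translation table (illustrative only).
class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # (private_ip, private_port) -> public_port
        self.reverse = {}      # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        """Route a reply back to the hidden internal host, if any."""
        return self.reverse.get(public_port)

nat = Nat("203.0.113.7")
src = nat.outbound("10.0.0.5", 5555)  # internal host sends a packet
print(src)                            # ('203.0.113.7', 40000)
print(nat.inbound(src[1]))            # ('10.0.0.5', 5555)
```

Observers outside the translation point see only the public address, which is why NAT contributes to the anonymity properties discussed above.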
802.11s also provides a security protocol, Simultaneous Authentication of Equals (SAE), which, although inflexible, provides protection against passive, active, and dictionary attacks (Hartkins, 2008). Finally, determining an open source platform to use on repurposed mobile devices will be important to further development. One serious possibility is a pared-down version of Fedora Linux, similar to the OS used in the OLPC XO-1. In keeping with MondoNet’s commitment to an open source environment, the repurposed devices will use IEEE-standard-compliant open firmware and a variant of the Xfce GUI. Although many technical questions remain, previous efforts by MANET and open source developers and researchers have helped illuminate the path toward the comprehensive solution proposed here.

In this article, we have outlined the theoretical rationale, social specifications, and initial technical considerations for a large-scale, ad hoc wireless mesh network, which we call MondoNet. Although we feel this is a promising start, we hope to develop the network from an abstract idea to a concrete reality in the coming years. In the near term, this means addressing the pending questions we have outlined above, and sharing ideas, information and technology with like-minded individuals and initiatives. Over the longer term, we aim to develop and test a prototype, and to distribute the resulting technology to users and communities that may have a need for it. Ideally, this initiative, like many other open source projects, should develop a life of its own, adapting itself to uses we haven’t even considered at present, and evolving with the changing needs and technical capacities of connected individuals around the globe.

ACLU (2008). Foreign Intelligence Surveillance Act (FISA). American Civil Liberties Union. Retrieved from
Banks, J., & Humphreys, S. (2008). The labour of user co-creators: emergent social network markets? Convergence: The International Journal of Research into New Media Technologies, 14(4), 401-418. doi:10.1177/1354856508094660
Bangeman, E. (2007). The White Spaces Coalition’s plans for fast wireless broadband. Retrieved
Baym, N. (2010). Personal connections in the digital age. Cambridge, UK; Malden, MA: Polity.
Benkler, Y. (2006). The wealth of networks: how social production transforms markets and freedom. New Haven: Yale University Press.
Bittau, A., Hamburg, M., Handley, M., Mazieres, D., & Boneh, D. (2010). The case for ubiquitous transport-level encryption. In USENIX Security, 10(1).
Clinton, H. R. (2011). Internet rights and wrongs: choices & challenges in a networked world. Retrieved from
Conti, M. (2007). Mobile multi-hop ad hoc networks from theory to reality. New York, NY: Hindawi Publishers.
Coté, M., & Pybus, J. (2007). Learning to immaterial labour 2.0: MySpace and social networks. Ephemera Theory and Politics in Organizations, 7(1), 88–106.
Dhanalakshmi, S., & Rajaram, M. (2008). A reliable and secure framework for detection and isolation of malicious nodes in MANET. IJCSNS International Journal of Computer Science and Network Security, 8(10).
DeNardis, L. (2010). The emerging field of internet governance. Yale Information Society Project Working Paper Series. Retrieved from
Deuze, M. (2006). Participation, remediation, bricolage: considering principal components of a digital culture. The Information Society, 22(2), 63-75. doi:10.1080/01972240600567170
Dwyer, J. (2011). Decentralizing the internet so big brother can’t find you. Retrieved from
FCC. (2010). National broadband plan. Retrieved from
FCC. (2011). TV band (white spaces) administrator’s Guide. Retrieved from
Frankel, S., Eydt, B., Owens, L., & Scarfone, K. (2007). Establishing wireless robust security networks: a guide to IEEE 802.11i. National Institute of Standards and Technology.
Gada, D., Gogri, R., Rathod, P., Dedhia, Z., Mody, N., Sanyal, S. & Abraham, A. (2004). A distributed security scheme for ad hoc networks. ACM Publications, 11(1), 5-15.
Hartkins, D. (2008). Simultaneous authentication of equals: a secure, password-based key exchange for mesh networks. Proceedings of the 2008 Second International Conference on Sensor Technologies and Applications.
IEEE. (2009). 802 LAN/MAN standards committee 802.22 WG on WRANs (wireless regional area networks). Retrieved from
Jenkins, H. (2008). Convergence culture: where old and new media collide. New York: New York University Press.
Karygiannis, A., Antonakakis, E., & Apostolopoulos, A. (2006). Detecting critical nodes for MANET intrusion detection systems. Second International Workshop on Security, Privacy and Trust in Pervasive and Ubiquitous Computing (SecPerU’06).
Khokhar, R. H., Ngadi, M. A., & Mandala, S. (2008). A review of current routing attacks in mobile ad hoc networks. International Journal of Computer Science and Security, 2(3), 18-29.
Lamothe, A. (2006). Degrees of openness. Retrieved from
Lessig, L. (2006). Code (2nd ed.). New York: Basic Books.
Liptak, A. (2007). Verizon blocks messages of abortion rights group. New York Times. Retrieved from
Maguire, M. (2001). Methods to support human-centered design. International Journal of Human-Computer Studies, 55, 587-634.
Marra, W. (2007). Pearl Jam’s anti-Bush lyrics jammed by AT&T. ABC News. Retrieved from
Menke, S. M. (2010). Retrieved from
Moglen, E. (2010). Freedom in the cloud: software freedom, privacy, and security for web 2.0 and cloud computing. Internet Society New York Branch. Retrieved from
Mueller, M. (2002). Dancing the quango: ICANN and the privatization of international governance. In Conference on New Technologies and International Governance (School of Advanced International Relations, Johns Hopkins University, Washington, DC).
Nácher, M., Calafate, C. T., Cano, J., & Manzoni, P. (2007). Comparing TCP and UDP performance in MANETs using multipath enhanced versions of DSR and DYMO. Proceedings of the 4th ACM Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks.
Rheingold, H. (2002). Smart mobs: the next social revolution. Cambridge, MA: Perseus.
Savage, C. & Risen, J. (2010, Mar. 31). Federal judge finds N.S.A. wiretaps were illegal. Retrieved from
Savage, C. (2010, Sep. 27). U.S. tries to make it easier to wiretap the internet. The New York Times. Retrieved from
Schollmeier, R. (2002). A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications. Proceedings of the First International Conference on Peer-to-Peer Computing, IEEE.
Sinnreich, A. (2010). Mashed up: music, technology, and the rise of configurable culture. Amherst: University of Massachusetts Press.
Sinnreich, A., & Zager, M. (2008). E-speech: the (uncertain) future of free expression. truthdig. Retrieved from
Stevenson, C., Zhongding Lei, G. C., Hu, W., Shellhammer, S., & Caldwell, W. (2009). IEEE 802.22: the first cognitive radio wireless regional area networks (WRANs) standard. IEEE Communications Magazine, 47(1), 130–138.
Strickling, L. E. (2010). Letter re: national broadband plan. GN Doc. No. 09-5. Retrieved from
Turner, F. (2008). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.
Turner, S. D. (2009). Dismantling digital deregulation: toward a national broadband strategy. Retrieved from
Waugh, P. & Metcalfe, R. (2008). The foundations of open: evaluating aspects of openness in software projects. Retrieved from
Wu, T. (2010). The master switch: the rise and fall of information empires. New York: Knopf.
Wu, Y., Wang, B., Liu, K. J., & Clancy, T. C. (2008). Repeated open spectrum sharing game with cheat-proof strategies. IEEE Transactions on Wireless Communications, 20(20), 1–12.
Zittrain, J. (2009). Lost in the cloud. The New York Times. Retrieved from

City of Austin’s Wireless Mesh Network System

by Klint Finley / January 28, 2011

In Cory Doctorow’s young adult novel Little Brother, the protagonist starts a wireless ad-hoc network, called X-Net, in response to a government crack-down on civil liberties. The characters use gaming systems with mesh networking equipment built in to share files, exchange messages and make plans.

The Internet blackout in Egypt, which we’ve been covering, touches on an issue we’ve raised occasionally here: the control of governments (and corporations) over the Internet (and by extension, the cloud). One possible solution, discussed by geeks for years, is the creation of wireless ad-hoc networks like the one in Little Brother to eliminate the need for centralized hardware and network connectivity. It’s the sort of technology that’s valuable not just for ensuring freedom of speech (not to mention freedom of commerce – Egypt’s Internet blackout can’t be good for business), but also in emergencies such as natural disasters.

Here are a few projects working to create such networks. Wireless ad-hoc networking has been limited in the past by a bottleneck problem. Researchers may have solved this issue for devices with enough computational power. The U.S. military is also investing in research in this area.

The OLPC’s XO has mesh networking capabilities. And some gaming systems, such as the Nintendo DS, have mesh networking built in. But we want to look at projects that are specifically aimed at replacing or augmenting the public Internet.

Openet is a part of the open_sailing project. Openet’s goal is to create a civilian Internet outside of the control of governments and corporations. It aims not only to create local mesh networks, but also to build a global mesh network of mesh networks stitched together by long-range packet radio. See our previous coverage here.

Netsukuku is a project of the Italian group FreakNet MediaLab. Netsukuku is designed to be a distributed, anonymous mesh network that relies only on normal wireless network cards. FreakNet is even building its own domain name architecture. Unfortunately, there’s no stable release of the code and the web site was last updated in September 2009.

Not to be confused with the mesh networking hardware vendor of the same name, OPENMESH is a forum created by venture capitalist Shervin Pishevar for volunteers interested in building mesh networks for people living in conditions where Internet access may be limited or controlled.

Pishevar came up with the idea during the protests in Iran in 2009. “The last bastion of the dictatorship is the router,” he told us. The events in Egypt inspired him to get started.

It’s a younger project than Openet and Netsukuku, but it may have more mainstream appeal thanks to being backed by Pishevar. It’s not clear how far along Openet is, and Netsukuku seems to be completely stalled, so a new project isn’t entirely unreasonable. Update: One commenter points out that Netsukuku’s developers have checked in code as recently as two weeks ago, so although the site hasn’t been updated, the project isn’t stalled.

by Klint Finley / January 31, 2011

Last week we told you about three projects to create a government-less Internet by taking advantage of wireless mesh networking. Wireless mesh networks are networks that can be created without a centralized authority. They can provide an alternative way to communicate and share information during a crisis such as a natural disaster or civil unrest.

Many of you followed up by telling us about several other interesting projects, from P2P DNS to Tonkia. Most importantly, there are at least four other projects that should have been on our original list.

Daihinia is a commercial project that provides software that essentially turns Windows PCs into wireless repeaters. The company’s software makes it possible to use a desktop or laptop with a normal wireless card to “hop” to a wireless access point while out of range of that access point. There’s no Macintosh version, but it’s being discussed.

Digitata is a sub-project of open_sailing‘s Openet, which we mentioned in the previous installment. Digitata is focused on bringing wireless networks to rural areas of Africa. The group is creating open source hardware and software, including its own IP layer for mesh networking called IPvPosition (IPvP).

Freifunk (German for “free radio”) is an organization dedicated to providing information and resources for mesh networking projects. Its website has a list of local mesh networks all over the world, from Afghanistan to Nepal to Seattle.

One of its main resources is the Freifunk firmware, a free router firmware optimized for wireless mesh networking. Users can replace the standard firmware on their routers with Freifunk’s firmware, enabling them to build mesh networks with cheap, off-the-shelf hardware.

Freifunk also develops a routing protocol, Better Approach To Mobile Adhoc Networking (B.A.T.M.A.N.), an alternative to the older Optimized Link State Routing protocol (OLSR).
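In greatly simplified terms, B.A.T.M.A.N. nodes periodically flood “originator messages” (OGMs), and each node prefers as its next hop toward an originator the neighbor that delivered the most of that originator’s OGMs, a rough proxy for link quality. The toy sketch below illustrates only that heuristic; the class and method names are invented for illustration, and the real protocol is far more involved:

```python
from collections import Counter, defaultdict

# Toy sketch of the B.A.T.M.A.N. next-hop heuristic (greatly simplified).
class Node:
    def __init__(self, name):
        self.name = name
        # ogm_counts[originator][neighbour] = OGMs received via that neighbour
        self.ogm_counts = defaultdict(Counter)

    def receive_ogm(self, originator, via_neighbour):
        self.ogm_counts[originator][via_neighbour] += 1

    def next_hop(self, originator):
        """Prefer the neighbour that delivered the most of this
        originator's OGMs -- a crude stand-in for link quality."""
        counts = self.ogm_counts.get(originator)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

n = Node("A")
# OGMs from originator D arrive via two neighbours; the lossy link
# through C delivers fewer of them.
for _ in range(9):
    n.receive_ogm("D", via_neighbour="B")
for _ in range(4):
    n.receive_ogm("D", via_neighbour="C")
print(n.next_hop("D"))   # B
```

Because each node only counts what it hears locally, no node needs a global map of the network, which is what lets such protocols run on cheap routers.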

wlan ljubljana and nodewatcher
wlan ljubljana is a wireless mesh network in Ljubljana, Slovenia. In addition to providing its users with Internet access, it appears to also feature a local network.

wlan ljubljana is working with volunteers in other cities in Slovenia to create more local networks, and has created its own firmware package for routers called nodewatcher. Like Freifunk, nodewatcher is based on the embeddable Linux distribution OpenWrt. nodewatcher is designed to be easy to use for a non-technical user.


by Rick Wash / at SOUPS (Symposium on Usable Privacy and Security)

Home computer systems are insecure because they are administered by untrained users. The rise of botnets has amplified this problem; attackers compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which expert security advice to follow: four conceptualizations of ‘viruses’ and other malware, and four conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring expert security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they cleverly take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

Home users are installing paid and free home security software at a rapidly increasing rate.{1} These systems include anti-virus software, anti-spyware software, personal firewall software, personal intrusion detection / prevention systems, computer login / password / fingerprint systems, and intrusion recovery software. Nonetheless, security intrusions and the costs they impose on other network users are also increasing. One possibility is that home users are starting to become well-informed about security risks, and that soon enough of them will protect their systems that the problem will resolve itself. However, given the “arms race” history in most other areas of networked security (with intruders becoming increasingly sophisticated and numerous over time), it is likely that the lack of user sophistication and non-compliance with recommended security system usage policies will continue to limit home computer security effectiveness. To design better security technologies, it helps to understand how users make security decisions, and to characterize the security problems that result from these decisions. To this end, I have conducted a qualitative study to understand users’ mental models [18, 11] of attackers and security technologies. Mental models describe how a user thinks about a problem; it is the model in the person’s mind of how things work. People use these models to make decisions about the effects of various actions [17]. In particular, I investigate the existence of folk models for home computer users. Folk models are mental models that are not necessarily accurate in the real world, thus leading to erroneous decision making, but are shared among similar members of a culture [11]. It is well-known that in technological contexts users often operate with incorrect folk models [1]. To understand the rationale for home users’ behavior, it is important to understand the decision model that people use. 
If technology is designed on the assumption that users have correct mental models of security threats and security systems, it will not induce the desired behavior when they are in fact making choices according to a different model. As an example, Kempton [19] studied folk models of thermostat technology in an attempt to understand the wasted energy that stems from poor choices in home heating. He found that his respondents possessed one of two mental models for how a thermostat works. Both models can lead to poor decisions, and both models can lead to correct decisions that the other model gets wrong. Kempton concludes that “Technical experts will evaluate folk theory from this perspective [correctness] – not by asking whether it fulfills the needs of the folk. But it is the latter criterion […] on which sound public policy must be based.” The same argument holds for technology design: whether the folk models are correct or not, technology should be designed to work well with the folk models actually employed by users.{2} For home computer security, I study two related research questions: 1) Potential threats : How do home computer users conceptualize the information security threats that they face? 2) Security responses : How do home computer users apply their mental models of security threats to make security-relevant decisions? Despite my focus on “home computer users,” many of these problems extend beyond the home; most of my analysis and understanding in this paper is likely to generalize to a whole class of users who are unsophisticated in their security decisions. This includes many university computers, computers in small business that lack IT support, and personal computers used for business purposes.

{1} Despite a worldwide recession, the computer security industry grew 18.6% in 2008, totaling over $13 billion, according to a recent Gartner report [9].
{2} It may be that users can be re-educated to use more correct mental models, but generally it is more difficult to re-educate users.

1.1 Understanding Security
Managing the security of a computer system is very difficult. Ross Anderson’s [2] study of Automated Teller Machine (ATM) fraud found that the majority of the fraud committed using these machines was due not to technical flaws but to errors in deployment and management failures. These problems illustrate the difficulty that even professionals face in producing effective security. The vast majority of home computers are administered by people who have little security knowledge or training. Existing research has investigated how non-expert users deal with security and network administration in a home environment. Dourish et al. [12] conducted a related study, inquiring not into mental models but into how corporate knowledge workers handled security issues. Gross and Rossum [15] also studied what security knowledge end users possess in the context of large organizations. And Grinter et al. [14] interviewed home network users about their network administration practices. Combining the results from these papers, it appears that many users exert much effort to avoid security decisions. All three papers report that users often find ways to delegate the responsibility for security to some external entity; this entity could be technological (like a firewall), social (another person or IT staff), or institutional (like a bank). Users do this because they feel they don’t have the skills to maintain proper security. However, despite this delegation of responsibility, many users still make numerous security-related decisions on a regular basis. These papers do not explain how those decisions get made; rather, they focus mostly on the anxiety these decisions create. I add structure to these observations by describing how folk models enable home computer users to make security decisions they cannot delegate. I also focus on differences between people, and characterize different methods of dealing with security issues rather than trying to find general patterns. 
The folk models I describe may explain differences observed between users in these studies. Camp [6] proposed using mental models as a framework for communicating complex security risks to the general populace. She did not study how people currently think about security, but proposed five possible models that may be useful. These models take the form of analogies or metaphors with other similar situations: physical security, medical risks, crime, warfare, and markets. Asghapour et al. [3] built on this by conducting a card-sorting experiment that matches these analogies with the mental models of users. They found that experts and non-experts show sharp differences in which analogy their mental model is closest to. Camp et al. began by assuming a small set of analogies that they believe function as mental models. Rather than pre-defining the range of possible models, I treat these mental models as a legitimate area for inductive investigation, and endeavor to uncover users’ mental models in whatever form they take. This prior work confirms that the concept of mental models may be useful for home computer security, but made assumptions which may or may not be appropriate. I fill in the gap by inductively developing an understanding of just what mental models people actually possess. Also, given the vulnerability of home computers and the finding that experts and non-experts differ sharply [3], I focus solely on non-expert home computer users. Herley [16] argues that non-expert users reject security advice because it is rational to do so. He believes that security experts provide advice that ignores the costs of the users’ time and effort, and therefore overestimates the net value of security. I agree, though I dig deeper into understanding how users actually make these security / effort tradeoffs.

1.2 Botnets and Home Computer Security
In the past, computers were targeted by hackers approximately in proportion to the amount of value stored on them or accessible from them. Computers that stored valuable information, such as bank computers, were a common target, while home computers were fairly innocuous. Recently, attackers have used a technique known as a ‘botnet,’ where they hack into a number of computers and install special ‘control’ software on those computers. The hacker can give a master control computer a single command, and it will be carried out by all of the compromised computers (called zombies) it is connected to [4, 5]. This technology enables crimes that require large numbers of computers, such as spam, click fraud, and distributed denial of service [26]. Observed botnets range in size from a couple hundred zombies to 50,000 or more zombies. As John Markoff of the New York Times observes, botnets are not technologically novel; rather, “what is new is the vastly escalating scale of the problem” [21]. Since any computer with an Internet connection can serve as an effective zombie, hackers have logically turned to attacking the most vulnerable population: home computers. Home computer users are usually untrained and have few technical skills. While some software has improved the average level of security of this class of computers, home computers still represent the largest population of vulnerable computers. When compromised, these computers are often used to commit crimes against third parties. The vulnerability of home computers is a security problem for many companies and individuals who are the victims of these crimes, even if their own computers are secure [7].

1.3 Methods
I conducted a qualitative inquiry into how home computer users understand and think about potential threats. To develop depth in my exploration of the folk models of security, I used an iterative methodology as is common in qualitative research [24]. I conducted multiple rounds of interviews punctuated with periods of analysis and tentative conclusions. The first round of 23 semi-structured interviews was conducted in Summer 2007. Preliminary analysis proceeded throughout the academic year, and a second round of 10 interviews was conducted in Summer 2008, for a total of 33 respondents. This second round was more focused, and specifically searched for negative cases of earlier results [24]. Interviews averaged 45 minutes each; they were audio recorded and transcribed for analysis. Respondents were chosen from a snowball sample [20] of home computer users evenly divided between three mid-western U.S. cities. I began with a few home computer users that I knew in these cities. I asked them to refer me to others in the area who might be information-rich informants. I screened these potential respondents to exclude people who had expertise or training in computers or computer security. From those not excluded, I purposefully selected respondents for maximum variation [20]: I chose respondents from a wide variety of backgrounds, ages, and socio-economic classes. Ages ranged from undergraduate (19 years old) up through retired (over 70). Socio-economic status was not explicitly measured, but ranged from recently graduated artist living in a small efficiency up to a successful executive who owns a large house overlooking the main river through town. Selecting for maximal variation allows me to document diverse variations in folk models and identify important common patterns [20]. After interviewing the chosen respondents, I grew my potential interview pool by asking them to refer me to more people with home computers who might provide useful information. 
This snowballing through recommendations ensured that the contacted respondents would be information-rich [20] and cooperative. These new potential respondents were also screened, selected, and interviewed. The method does not generate a sample that is representative of the population of home computer users. However, I don’t believe that the sample is a particularly special or unusual group; it is likely that there are other people like them in the larger population.

I developed an (IRB-approved) face-to-face semi-structured interview protocol that pushes subjects to describe and use their mental models, based on formal methods presented by D’Andrade [11]. I specifically probed for past instances where the respondents would have had to use their mental model to make decisions, such as past instances of security problems, or efforts undertaken to protect their computers. By asking about instances where the model was applied to make decisions, I enabled the respondents to uncover beliefs that they might not have been consciously aware of. This also ensures that the respondents believe their model enough to base choices on it. The majority of each interview was spent on follow-up questions, probing deeper into the responses of the subject. This method allows me to describe specific, detailed mental models that my participants use to make security decisions, and to be confident that these are models that the participants actually believe. My focus in the first round was broad and exploratory. I asked about any security-related problems the respondent had faced or was worried about; I also specifically asked about viruses, hackers, data loss, and data exposure (identity theft). I probed to discover what countermeasures the respondents used to mitigate these risks. Since this was a semi-structured interview, I followed up on many responses by probing for more information. After preliminary analysis of this data, I drew some tentative conclusions and listed points that needed clarification. To better elucidate these models and to look for negative cases, I conducted 10 second-round interviews using a new (IRB-approved) interview protocol. In this round, I focused more on three specific threats that subjects face: viruses, hackers, and identity theft. For this second round, I also used an additional interviewing technique: hypothetical scenarios. 
This technique was developed to help focus the respondents and elicit additional information not present in the first round of interviews. I presented the respondents with three hypothetical scenarios and asked the subjects for their reactions. The three scenarios correspond to each of the three main themes for the second round: finding out you have a virus, finding out a hacker has compromised your computer, and being informed that you are a victim of identity theft. For each scenario, after the initial description and respondent reaction, I added an additional piece of information that contradicted the mental models I discovered after the first round. For example, one preliminary finding from the first round was that people rarely talked about the creation of computer viruses; it was unclear how they would react to a computer virus that was created by people for a purpose. In the virus scenario, I informed the respondents that the virus in question was written by the Russian mafia. This fact was taken from recent news linking the Russian mafia to widespread viruses such as Netsky, Bagle, and Storm.{3} Once I had all of the data collected and transcribed, I conducted both inductive and deductive coding of the data to look for both predetermined and emergent themes [23]. I began with a short list of major themes I expected to see from my pilot interviews, such as information about viruses, hackers, identity theft, countermeasures, and sources of information. I identified and labeled (coded) instances when the respondents discussed these themes. I then expanded the list of codes as I noticed interesting themes and patterns emerging. Once all of the data was coded, I summarized the data on each topic by building a data matrix [23].{4} This data matrix helped me to identify basic patterns in the data across subjects, to check for representativeness, and to look for negative cases [24].

After building the initial summary matrices, I identified patterns in the way respondents talked about each topic, paying specific attention to word choices, metaphors employed, and the explicit content of statements. Specifically, I looked for themes on which users differ in their opinions (negative case analysis). These themes became the building blocks for the mental models. I built a second matrix that matched subjects with these features of mental models.{5} This second matrix allowed me to identify and characterize the various mental models that I encountered. Table 7 in the Appendix shows which participants from Round 2 had each of the 8 models. A similar table was developed for the Round 1 participants. I then took the description of the model back to the data, verified whether the model description accurately represented the respondents’ descriptions, and looked for contradictory evidence and negative cases [24]. This allowed me to update the models with new information or insights garnered by following up on surprises and incorporating outliers. This was an iterative process; I continued updating model descriptions, looking for negative cases, and checking for representativeness until I felt that the model descriptions accurately represented the data. In this process, I developed further matrices as data visualizations, some of which appear in my descriptions below.

{4} A fragment of this matrix can be seen in Table 5 in the Appendix.
{5} A fragment of this matrix is Table 6 in the Appendix.

I identified a number of different folk models in the data. Every folk model was shared by multiple respondents in this study. The purpose of qualitative research is not to generalize to a population; rather, it is to explore phenomena in depth. To avoid misleading readers, I do not report how many users possessed each folk model. Instead, I describe the full range of folk models I observed. I divide the folk models into two broad categories based on a distinction that most subjects possessed: 1) models about viruses, spyware, adware, and other forms of malware, which everyone referred to under the umbrella term ‘virus’; and 2) models about the attackers, referred to as ‘hackers,’ and the threat of ‘breaking in to’ a computer. Each respondent had at least one model from each of the two categories. For example, Nicole{6} believed that viruses were mischievous and that hackers were criminals who target big fish. These models are not necessarily mutually exclusive. For example, a few respondents talked about different types of hackers and would describe more than one folk model of hackers. Note that by listing and describing these folk models, I in no way intend to imply that these models are incorrect or bad. They are all certainly incomplete, and do not exactly correspond to the way malicious software or malicious computer users behave. But, as Kempton [19] learned in his study of home thermostats, what is important is not how accurate the model is but how well it serves the needs of the home computer user in making security decisions. Additionally, there is no “correct” model that can serve as a comparison. Even security experts will disagree as to the correct way to think about viruses or hackers. To show an extreme example, Medin et al. [22] conducted a study of expert fishermen in the Northwoods of Wisconsin. They looked at the mental models of both Native American fishermen and of majority-culture fishermen.
Despite both groups being experts, the two groups showed dramatic differences in the way fish were categorized and classified. Majority-culture fishermen grouped fish into standard taxonomic and goal-oriented groupings, while Native American fishermen grouped fish mostly by ecological niche. This illustrates how even experts can have dramatically different mental models of the same phenomenon, and any single expert’s model is not necessarily correct. However, experts and novices do tend to have very different models; Asgharpour et al. [3] found strong differences between expert and novice computer users in their mental models of security.

Common Elements of Folk Models
Most respondents made a distinction between ‘viruses’ and ‘hackers.’ To them, these are two separate threats that can both cause problems. Some people believed that viruses are created by hackers, but they still usually saw them as distinct threats. A few respondents realized this and tried to describe the difference; for example, at one point in the interview Irving tries to explain the distinction by saying “The hacker is an individual hacking, while the virus is a program infecting.” After some thought, he clarifies his idea of the difference a bit: “So it’s a difference between something automatic and more personal.” This description is characteristic of how many respondents think about the difference: viruses are usually more programmatic and automatic, whereas hacking is more like manual labor, requiring the hacker to be sitting in front of a computer entering commands. This distinction between hackers and viruses is not something that most of the respondents had thought about; it existed in their mental model but not at a conscious level. Upon prompting, Dana decides that “I guess if they hack into your system and get a virus on there, it’s gonna be the same thing.” She had never realized that they were distinct in her mind, but it makes sense to her that they might be related. She then goes on to ask the interviewer whether, if she gets hacked, she can forward it on to other people. This also illustrates another common feature of these interviews. When exposed to new information, most of the respondents would extrapolate and try to apply that information to slightly different settings. When Dana was prompted to think about the relationship between viruses and hackers, she decided that they were more similar than she had previously realized. Then she began to apply ideas from one model (viruses spreading) to the other model (can hackers spread also?) by extrapolating from her current models. This is a common technique in human learning and sensemaking [25].
I suspect that many details of the mental models were formed in this way. Extrapolation is also useful for analysis; how respondents extrapolate from new information reveals details about mental models that are not consciously salient during interviews [8, 11]. During the interviews I used a number of prompts that were intended to challenge mental models and force users to extrapolate in order to help surface more elements of their mental models.

2.1 Models of Viruses and other Malware
All of the respondents had heard of computer viruses and possessed some mental model of their effects and transmission. The respondents focused their discussion primarily on the effects of viruses and the possible methods of transmission. In the second round of interviews, I prompted respondents to discuss how and why viruses are created by asking them to react to a number of hypothetical scenarios. These scenarios help me understand how the respondents apply these models to make security-relevant decisions. All of the respondents used the term ‘virus’ as a catch-all term for malicious software. Everyone seemed to recognize that viruses are computer programs. Almost all of the respondents classify many different types of malicious software under this term: computer viruses, worms, trojans, adware, spyware, and keyloggers were all mentioned as ‘viruses.’ The respondents don’t make the distinctions that most experts do; they just call any malicious computer program a ‘virus.’ Thanks to the term ‘virus,’ all of the respondents used some sort of medical terminology to describe the actions of malware. Getting malware on your computer means you have ‘caught’ the virus, and your computer is ‘infected.’ Everyone who had a Mac seemed to believe that Macs are ‘immune’ to virus and hacking problems (but were worried anyway).

Overall, I found four distinct folk models of ‘viruses.’ These models differed in a number of ways. One of the major differences is how well-specified and detailed the model was, and therefore how useful the model was for making security-related decisions. One model was very under-specified, labeling viruses as simply ‘bad.’ Respondents with this model had trouble using it to make any kind of security-related decisions because the model didn’t contain enough information to provide guidance. Two other models (the Mischief and Crime models) were fairly well-described, including how viruses are created and why, and what the major effects of viruses are. Respondents with these models could extrapolate them to many different situations and use them to make many security-related decisions on their computer. Table 1 summarizes the major differences between the four models.

{6} All respondents have been given pseudonyms for anonymity.

2.1.1 Viruses are Generically ‘Bad’
A few subjects had a very under-developed model of viruses. These subjects knew that viruses cause problems, but they couldn’t really describe the problems that viruses cause. They just knew that viruses were generically ‘bad’ to get and should be avoided. Respondents with this model knew of a number of different ways that viruses are transmitted. These transmission methods seemed to be things that the subjects had heard about somewhere, but the respondents did not attempt to understand them or organize them into a more coherent mental model. Zoe believed that viruses can come from strange emails, or from “searching random things” on the Internet. She says she had heard that blocking popups helps with viruses too, and seemed to believe that without questioning. Peggy had heard that viruses can come from “blinky ads like you’ve won a million bucks.” Respondents with this model are uniformly unconcerned with getting viruses: “I guess just my lack of really doing much on the Internet makes me feel like I’m safer” (Zoe). A couple of people with this model use Macintosh computers, which they believe to be “immune” to computer viruses. Since they are immune, it seems that they have not bothered to form a more complete model of viruses. Since these users are not concerned with viruses, they do not take any precautions against being infected. These users believe that their current behavior doesn’t really make them vulnerable, so they don’t need to go to any extra effort. Only one respondent with this model uses an anti-virus program, and that is because it came installed on the computer. These respondents seem to recognize that anti-virus software might help, but are not concerned enough to purchase or install it.

2.1.2 Viruses are Buggy Software
One group of respondents saw computer viruses as an exceptionally bug-ridden form of regular computer software. In many ways, these respondents believe that viruses behave much like most of the other software that home users experience. But to be a virus, it has to be ‘bad’ in some additional way. Primarily, viruses are ‘bad’ in that they are poorly written software. They lead to a multitude of bugs and other errors in the computer. They bring out bugs in other pieces of software. They tend to have more bugs, and worse bugs, than most other pieces of software. But all of the effects they cause are the same types of effects you get from buggy software: viruses can cause computers to crash, or to “boot me out” (Erica) of applications that are running; viruses can accidentally delete or “wipe out” information (Christine and Erica); they can erase important system files. In general, the computer just “doesn’t function properly” (Erica) when it has a virus. Just like normal software, viruses must be intentionally placed on the computer and executed. Viruses do not just appear on a computer. Rather than ‘catching’ a virus, computers are actively infected, though often this infection is accidental. Some viruses come in the form of email attachments. But they are not a threat unless you actually “click” on the attachment to run it. If you are careful about what you click on, then you won’t get the virus. Another example is that viruses can be downloaded from websites, much like many other applications. Erica believes that sometimes downloading games can end up causing you to download a virus. But still, intentional downloading and execution is necessary to be infected with a virus, much the same way that intentional downloading and execution is necessary to run programs from the Internet. Respondents with this model did not feel that they needed to exert a lot of effort to protect themselves from viruses. 
Mostly, these users tried not to download and execute programs that they didn’t trust. Sarah intentionally “limits herself” by not downloading any programs from the Internet so she doesn’t get a virus. Since viruses must be actively executed, anti-virus programs are not important. As long as no one downloads and runs programs from the Internet, no virus can get onto the computer. Therefore, anti-virus programs that detect and fix viruses aren’t needed. However, two respondents with this model run anti-virus software just in case a virus is accidentally put on the computer. Overall, this is a somewhat underdeveloped mental model of viruses. Respondents who possessed this model had never really thought about how viruses are created, or why. When asked, they talk about how they haven’t thought about it, and then make guesses about how ‘bad people’ might be the ones who create them. These respondents haven’t put much thought into their mental model of viruses; all of the effects they discuss are either effects they have seen or more extreme versions of bugs they have seen in other software. Christine says “I guess I would know [if I had a virus], wouldn’t I?”, presuming that any effects the virus has would be evident in the behavior of the computer. No connection is made between hackers and viruses; they are distinct and separate entities in the respondent’s mind.

2.1.3 Viruses Cause Mischief
A good number of respondents believed that viruses are pieces of software that are intentionally annoying. Someone created the virus for the purpose of annoying computer users and causing mischief. Viruses often have effects much like extreme versions of annoying bugs: crashing your computer, deleting important files so your computer won’t boot, etc. Often the effects of viruses are intentionally annoying, such as displaying a skull and crossbones upon boot (Bob), displaying advertising popups (Floyd), or downloading lots of pornography (Dana). While these respondents believe that viruses are created to be annoying, they rarely have a well-developed idea of who created them. They don’t naturally mention a creator for the viruses, just a reason why they are created. When pushed, these respondents will talk about how they are probably created by “hackers” who fit the Graffiti hacker model below. But the identity of the creator doesn’t play much of a role in making security decisions with this model. Respondents with this model always believe that viruses can be “caught” by actively clicking on them and executing them. However, most respondents with this model also believe that viruses can be “caught” by simply visiting the wrong webpages. Infection here is very passive, and can come just from visiting the webpage. These webpages are often considered to be part of the ‘bad’ part of the Internet. Much like graffiti appears in the ‘bad’ parts of cities, mischievous viruses are most prevalent on the bad parts of the Internet. While almost everyone believes that care in clicking on attachments or downloads is important, these respondents also try to be careful about where they go on the Internet. One respondent (Floyd) tries to explain why: cookies are automatically put on your computer by websites, so viruses being automatically put on your computer could be related to this.
These ‘bad’ parts of the Internet where you can easily contract viruses are frequently described as morally ambiguous webpages. Pornography is always considered shady, but some respondents also included entertainment websites where you can play games, and websites that have been on the news like “MySpaceBook” (Gina). Some respondents believed that a “secured” website would not lead to a virus, but Gail acknowledged that at some sites “maybe the protection wasn’t working at those sites and they went bad.” (Note the passive voice; again, she has not thought about how sites go bad or who causes them to go bad. She is just concerned with the outcome.)

2.1.4 Viruses Support Crime
Finally, some respondents believe that viruses are created to support criminal activities. Almost uniformly, these respondents believe that identity theft is the end goal of the criminals who create these viruses, and the viruses assist them by stealing personal and financial information from individual computers. For example, respondents with this model worry that viruses are looking for credit card numbers, bank account information, or other financial information stored on their computer. Since the main purpose of these viruses is to collect information, the respondents who have this model believe that viruses often remain undetected on computers. These viruses do not explicitly cause harm to the computer, and they do not cause bugs, crashes, or other problems. All they do is send information to criminals. Therefore, it is important to run an anti-virus program on a regular basis because it is possible to have a virus on your computer without knowing it. Since viruses don’t harm your computer, backups are not necessary. People with this model believed that there are many different ways for these viruses to spread. Some viruses spread through downloads and attachments. Other viruses can spread “automatically,” without requiring any actions by the user of the computer. Also, some people believe that hackers will install this type of virus onto the computer when they break in. Given this wide variety of transmission methods and the serious nature of identity theft, respondents with this model took many steps to try to stop these viruses. These users would work to keep their anti-virus up to date, purchasing new versions on a regular basis. Often, they would notice when the anti-virus would conduct a scan of their computer and check the results. Valerie would even turn her computer off when it is not in use to avoid potential problems with viruses.

2.1.5 Multiple Types of Viruses
A couple of respondents discussed multiple types of viruses on the Internet. These respondents believed that some viruses are mischievous and cause annoying problems, while other viruses support crime and are difficult to detect. All users who talked about more than one type of virus talked about both of the previous two virus folk models: the mischievous viruses and the criminal viruses. One respondent, Jack, also talked about a third type of virus that was created by anti-virus companies, but he seemed to feel this was a conspiracy theory, and consequently didn’t take the suggestion very seriously. Respondents with multiple models generally would take all of the precautions that either model would predict. For example, they would make regular backups in case they caught a mischievous virus that damaged their computer, but they also would regularly run their anti-virus program to detect the criminal viruses that don’t have noticeable effects. This suggests that information sharing between users may be beneficial; when users believe in multiple types of viruses, they take appropriate steps to protect against all types.

2.2 Models of Hackers and Break-ins
The second major category of folk models describes the attackers, or the people who cause Internet security problems. These attackers are always given the name “hackers,” and all of the respondents seemed to have some concept of who these people were and what they did. The term “hacker” was applied to anyone who does bad things on the Internet, no matter who they are or how they work. All of the respondents describe the main threat that hackers pose as “breaking in” to their computer. They would disagree as to why a hacker would want to “break in” to a computer, and which computers they would target for their break-ins, but everyone agreed on the terminology for this basic action. To the respondents, breaking in to a computer meant that the hacker could then use the computer as if they were sitting in front of it, and could cause a number of different things to happen to the computer. Many respondents stated that they did not understand how this worked, but they still believed it was possible. My respondents described four distinct folk models of hackers. These models differed mainly in who they believed these hackers were, what they believed motivated these people, and how they chose which computers to break in to. Table 2 summarizes the four folk models of hackers.

2.2.1 Hackers are Digital Graffiti Artists
One group of respondents believes that hackers are technically skilled people causing mischief. There is a collection of individuals, usually called “hackers,” who use computers to cause a technological version of mischief. Often these users are envisioned as “college-age computer types” (Kenneth). They see hacking computers as a sort of digital graffiti; hackers break in to computers and intentionally cause problems so they can show off to their friends. Victim computers are a canvas for their art. When respondents with this model talked about hackers, they usually focused on two features: strong technical skills and the lack of proper moral restraint. Strong technical skills provide the motivation; hackers do it “for sheer sport” (Lorna) or to demonstrate technical prowess (Hayley). Some respondents envision a competition between hackers, where more sophisticated viruses or hacks “prove you’re a better hacker” (Kenneth); others see creating viruses and hacking as part of “learning about the Internet” (Jack). Lack of moral restraint is what makes them different from others with technical skills; hackers are sometimes described as maladjusted individuals who “want to hurt others for no reason” (Dana). Respondents will describe hackers as “miserable” people. They feel that hackers do what they do for no good reason, or at least no reason they can understand. Hackers are believed to be lone individuals; while they may have hacker friends, they are not part of any organization. Users with this model often focus on the identity of the hacker. This identity – a young computer geek with poor morals – is much more developed in their mind than the resulting behavior of the hacker. As such, people with this model can usually talk clearly and give examples of who hackers are, but seem less confident in information about the resulting break-ins that happen. These hackers like to break stuff on the computer to create havoc.
They will intentionally upload viruses to computers to cause mayhem. Many subjects believe that hackers intentionally cause computers harm; for example, Dana believes that hackers will “fry your hard drive.” Hackers might install software to let them control your computer; Jack talked about how a hacker would use his instant messenger to send strange messages to his friends. These mischievous hackers were seen as not targeting specific individuals, but rather choosing random strangers to target. This is much like graffiti; the hackers need a canvas and choose whatever computer they happen to come upon. Because of this, the respondents felt like they might become a victim of this type of hacking at any time. Often, these respondents felt there wasn’t much they could do to protect themselves from this type of hacking. This was because they didn’t understand how hackers were able to break into computers, so they didn’t know what could be done to stop it. This would lead to a feeling of futility: “if they are going to get in, they’re going to get in” (Hayley). This feeling of futility echoes similar statements discussed by Dourish et al. [12].

2.2.2 Hackers are Burglars Who Break Into Computers for Criminal Purposes
Another set of respondents believes that hackers are criminals who happen to use computers to commit their crimes. Other than the use of the computer, they share a lot in common with other professional criminals: they are motivated by financial gain, and they can do what they do because they lack common morals. They “break into” computers to look for information much like a burglar will break into houses to look for valuables. The most salient part of this folk model is the behavior of the hacker; the respondents could talk in detail about what the hackers were looking for, but spoke very little about the identity of the hacker. Almost exclusively, this criminal activity is some form of identity theft. For example, respondents believe that if a hacker obtains their credit card number, then that hacker can make fraudulent charges with it. But the respondents weren’t always sure what kind of information the hacker was specifically looking for; they just described it as information the hacker could use to make money. Ivan talked about how hackers would look around the computer much like a thief might rummage around in an attic, looking for something useful. Erica used a different metaphor, saying that hackers would “take a digital photo of everything on my computer” and look in it for useful identity information. Usually, the respondents envision the hacker himself using this financial information (as opposed to selling the information to others). Since hackers target information, the respondents believe that computers are not harmed by the break-ins. Hackers look for information, but do not harm the computer. They simply rummage around, “take a digital photo,” possibly install monitoring software, and leave. The computer continues to work as it did before. The main concern of the respondents is how the hacker might use the information that they steal.
These hackers choose victims opportunistically; much like a mugger chooses his victims, these hackers will break into any computers they run across to look for valuable information. Or, more accurately, the respondents don’t have a good model of how hackers choose, and believe that there is a decent chance that they will be a victim someday. Gail talks about how hackers are opportunistic, saying “next time I go to their site they’ll nab me.” Hayley believes that they just choose computers to attack without knowing much about who owns them. Respondents with this belief are willing to take steps to protect themselves from hackers to avoid becoming a victim. Gail tries to avoid going to websites she’s not familiar with to prevent hackers from discovering her. Jack is careful to always sign out of accounts and websites when he is finished. Hayley shuts off her computer when she isn’t using it so hackers cannot break into it.

2.2.3 Hackers are Criminals who Target Big Fish
Another group of respondents had a conceptually similar model. This group also believes that hackers are Internet criminals who are looking for information to conduct identity theft. However, this group has thought more about how these hackers can best accomplish this goal, and has come to some different conclusions. These respondents believe in “massive hacker groups” (Hayley) and other forms of organization and coordination among criminal hackers. Most tellingly, this group believes that hackers only target the “big fish.” Hackers primarily break into computers of important and rich people in order to maximize their gains. Every respondent who holds this model believes that he or she is not likely to be a victim because he or she is not a big enough fish. They believe that hackers are unlikely to ever target them, and therefore feel safe from hacking. Irving believes that “I’m small potatoes and no one is going to bother me.” They often talk about how other people are more likely targets: “Maybe if I had a lot of money” (Floyd) or “like if I were a bank executive” (Erica). For these respondents, protecting against hackers isn’t a high priority. Mostly they find reasons to trust existing security precautions rather than taking extra steps to protect themselves. For example, Irving talked about how he trusts his pre-installed firewall program to protect him. Both Irving and Floyd trust their passwords to protect them. Basically, their actions indicate that they believe in the speed bump theory: by making it slightly hard for hackers using standard security technologies, hackers will decide it isn’t worthwhile to target them.

2.2.4 Hackers are Contractors Who Support Criminals
Finally, there is a sort of hybrid model of hackers. In this view, the hackers themselves are very similar to the mischievous graffiti-hackers from above: they are college-age, technically skilled individuals. However, their motivations are more intentional and criminal. These hackers are out to steal personal and financial information from people. Users with this model show evidence of more effort in thinking through their mental model and integrating the various sources of information they have. This model can be seen as a hybrid of the mischievous graffiti-hacker model and the criminal hacker model, integrated into a coherent form by combining the most salient part of the mischievous model (the identity of the hacker) and the most salient part of the criminal model (the criminal activities). Also, everyone who had this model expressed confusion about how hacking works. Kenneth stated that he doesn’t understand how someone can break into a computer without sitting in front of it. Lorna wondered how you can start a program running; she feels you have to be in front of the computer to do that. This indicates that these respondents are actively trying to integrate the information they have about hackers into a coherent model of hacker behavior. Since these hackers are first and foremost young technical people, the respondents believe that these hackers are not likely to be identity thieves. They believe that the hackers are more likely to sell this identity information for others to use. Since the hackers just want to sell information, the respondents reason, they are more likely to target large databases of identity information such as banks or large retailers. Respondents with this model believed that hackers weren’t really their problem. Since these hackers tended to target larger institutions like banks or e-commerce websites, their own personal computers weren’t in danger. Therefore, no effort was needed to secure their personal computers.
However, all respondents with this model expressed a strong concern for who they do business with online. These respondents would only make purchases or provide personal information to institutions they trusted to get the security right and figure out how to be protected against hackers. These users were highly sensitive to third parties possessing their data.

2.2.5 Multiple Types of Hackers
Some respondents believed that there were multiple types of hackers. Most of the time, these respondents would believe that some hackers are the mischievous graffiti-hackers and that other hackers are criminal hackers (using either the burglar or big fish model, but not both). These respondents would then try to make the effort to protect themselves from both types of hacker threats as necessary. It seems that some amount of cognitive dissonance occurs when respondents hear about both mischievous hackers and criminal hackers. There are two ways that respondents resolve this: the simplest is to believe that some hackers are mischievous and other hackers are criminals, and consequently keep the models separate; a more complicated way is to try to integrate the two models into one coherent belief about hackers. This latter option involves a lot of effort in making sense of a new folk model that is not as clear or as commonly shared as the mischievous and criminal models. The ‘contractor’ model of hackers is the result of this integration of the two types of hackers.

Computer security experts have been providing security advice to home computer users for many years now. There are many websites devoted to doling out security advice, and numerous technical support forums where home computer users can ask security-related questions. Much effort has gone into simplifying security advice so regular computer users can easily understand and follow it. However, many home computer users still do not follow this advice, as is evident from the large number of security problems that plague home computers. Security experts disagree about why this advice isn’t followed. Some seem to believe that home users do not understand the advice, and therefore more education is needed. Others seem to believe that home users are simply incapable of consistently making good security decisions [10]. However, neither explanation accounts for which advice does get followed and which advice does not. The folk models described above begin to provide an explanation of which expert advice home computer users choose to follow, and which advice they ignore. By better understanding why people choose to ignore certain pieces of advice, we can craft that advice, and the accompanying technologies, to have a greater effect.

In Table 3, I list 12 common pieces of security advice for home computer users. This advice was collected from the Microsoft Security at Home website {7}, the CERT Home Computer Security website {8}, and the US-CERT Cyber-Security Tips website {9}; much of it is duplicated across these websites. It represents the distilled wisdom of many computer security experts. The table then summarizes, for each folk model, whether that advice is important to follow, helpful but not essential, or not necessary to follow. To me, the most interesting entries indicate when users believe that a piece of security advice is not necessary to follow (labeled ‘xx’ in the table).
These entries show how home computer users apply their folk models to determine for themselves whether a given piece of advice is important. Also interesting are the entries labeled ‘??’; these indicate places where users believe that the advice will help with security, but do not see it as so important that it must always be followed. Users often conclude that following advice labeled ‘??’ is too costly in terms of effort or money, and ignore it. Advice labeled ‘!!’ is extremely important, and the respondents feel that it should never be ignored, even if following it is inconvenient, costly, or difficult.

{7}, retrieved July 5, 2009
{8}, retrieved July 5, 2009
{9}, retrieved July 5, 2009

3.1 Anti-Virus Use
Advice 1–3 have to do with anti-virus technology: Advice #1 states that anti-virus software should be used; #2 states that the virus signatures need to be constantly updated to be able to detect current viruses; and #3 states that the anti-virus software should regularly scan the computer to detect viruses. All of these are best practices for using anti-virus software. Respondents mostly use their folk models of viruses to make decisions about anti-virus use, for obvious reasons. Respondents who believe that viruses are just buggy software also believe it is not necessary to run anti-virus software. They think they can keep viruses off their computer by controlling what gets installed; they believe viruses need to be executed manually to infect a computer, and if they never execute one then they don’t need anti-virus protection. Respondents with the under-developed folk model of viruses, who refer to viruses as generically ‘bad,’ also do not use anti-virus software. These people understand that viruses are harmful and that anti-virus software can stop them. However, they have never really thought about the specific harms a virus might cause them. Lacking an understanding of the threats and potential harm, they generally find it unnecessary to exert the effort to follow the best practices around anti-virus software. Finally, one group of respondents believes that anti-virus software can help stop hackers. Users with the burglar model of hackers believe that regular anti-virus scans are important because burglar-hackers will sometimes install viruses to collect personal information; regular anti-virus use can help detect these hackers.

3.2 Other Security Software
Advice #4 concerns other types of security software; home computer users should run a firewall or a more comprehensive Internet security suite. I think that most of the respondents didn’t understand what this security software did, other than a general notion of providing “security.” As such, no one included security software as an important component of their mental model. Respondents who held the graffiti-hacker or burglar-hacker models believed that this software must help with hackers somehow, even though they didn’t know how, and would suggest installing it. But since they do not understand how it works, they do not consider it of vital importance. This highlights an opportunity for home user education: if these respondents better understood how security software helps protect against hackers, they might be more interested in using and maintaining it. One interesting belief about this software comes from the respondents who believe hackers only go after big fish. For these respondents, security software can serve as a speed bump that discourages hackers from casually breaking into their computer. They don’t care exactly how it works as long as it does something.

3.3 Email Security
Advice #5 is the only piece of advice about email on my list. It states that you shouldn’t open attachments from people you don’t recognize. Everyone in my sample was familiar with this advice and had taken it to heart. Everyone believed that viruses can be transmitted through email attachments, and therefore not clicking on unknown attachments can help prevent viruses.

3.4 Web Browsing
Advice 6-9 all deal with security behaviors while browsing the web. Advice #6 states that users need to ensure that they only download and run programs from trustworthy sources; many types of malware are spread through downloads. #7 states that users should only browse web pages from trustworthy sources. There are many types of malicious websites, such as phishing websites, and some websites can spread malware to anyone who simply visits the site and executes the JavaScript on it. #8 states that users should disable scripting like Java and JavaScript in their web browsers. There are often vulnerabilities in these scripting technologies, and some malware uses these vulnerabilities to spread. And #9 suggests using good passwords so attackers cannot guess their way into accounts. Overall, many respondents would agree with most of this advice. However, no one seemed to understand the advice about web scripts; indeed, no one seemed to even understand what a web script was. Advice #8 was largely ignored because it wasn’t understood. Everyone understood the need for care in choosing what to download; downloads were strongly associated with viruses in most respondents’ minds. However, only users with well-developed models of viruses (the Mischief and Support Crime models) believed that viruses can be “caught” simply by browsing web pages. People who believed that viruses were buggy software didn’t see browsing as dangerous because they weren’t actively clicking on anything to run it. While all of the respondents expressed some knowledge of the importance of passwords, few exerted extra effort to make good ones. Everyone understood that, in general, passwords are important, but they couldn’t explain why. Respondents with the graffiti hacker model would sometimes put extra effort into their passwords so that mischievous hackers couldn’t mess up their accounts.
And respondents who believed that hackers only target big fish thought that passwords could be an effective speed bump to prevent hackers from casually targeting them. Respondents who believed in hackers as contractors to criminals uniformly believed that they were not targets of hackers and were therefore safe. However, they were careful in choosing which websites to do business with: since these hackers target web businesses with lots of personal or financial information, these respondents felt it important to do business only with websites they trusted to be secure.

3.5 Computer Maintenance
Finally, Advice 10-12 concerns computer maintenance. Advice #10 suggests that users make regular backups in case some of their data is lost or corrupted; this is good advice for both security and non-security reasons. #11 states that it is important to keep the system patched with the latest updates to protect against known vulnerabilities that hackers and viruses can exploit. And #12 echoes the old maxim that the most secure machine is one that is turned off. Different models led to dramatically different conclusions about which types of maintenance are important. For example, mischievous viruses and graffiti hackers can cause data loss, so users with those models feel that backups are very important. But users who believe in more criminal viruses and hackers don’t feel that backups are necessary; hackers and viruses steal information but don’t delete it. Patching is an important piece of advice, since hackers and viruses need vulnerabilities to exploit. Most respondents only experience patches through the automatic updates feature in their operating system or applications. Respondents mostly associated the patching advice with hackers; respondents who felt that they would be a target of hackers also felt that patching was an important tool to stop hackers. Respondents who believed that viruses are buggy software felt that viruses also bring out more bugs in other software on the computer; patching the other software makes it more difficult for viruses to cause problems.

This study was inspired by the recent rise of botnets as a strategy for malicious attackers. Understanding the folk models that home computer users employ in making security decisions sheds light on why botnets are so successful. Modern botnet software seems designed to take advantage of gaps and security weaknesses in multiple folk models. I begin by listing a number of stylized facts about botnets. These facts are not true of all botnets and botnet software, but they are true of many of the recent and large botnets.

1. Botnets attack third parties. When botnet viruses compromise a machine, that machine only serves as a worker. That machine is not the end goal of the attacker. The owner of the botnet intends to use that machine (and many others) to cause problems for third parties.
2. Botnets only want the Internet connection. The only thing the botnet wants on the victim computer is the Internet connection. Botnet software rarely takes up much space on the hard drive, rarely looks at existing data on the hard drive, rarely occupies much memory, and usually doesn’t use much CPU. Nothing that makes the computer unique is important.
3. Botnets don’t directly harm the host computer. Most botnet software, once installed, does not directly cause harm to the machine it is running on. It consumes resources, but often botnet software is configured to only use the resources at times they are otherwise unused (like running in the middle of the night). Some botnets even install patches and software updates so that other botnets cannot also use the computer.
4. Botnets spread automatically through vulnerabilities. Botnets often spread through automated compromises: they automatically scan the internet, compromise any vulnerable computers, and install copies of the botnet software on them. No human intervention is required; neither the attacker nor the zombie’s owner nor the vulnerable computer’s owner needs to be sitting at a computer at the time.
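The automated spread in stylized fact 4 can be sketched as a loop that needs no human action at any machine. This is a toy simulation written for this discussion, not code from any real botnet; the host list, the `vulnerable`/`infected` flags, and the deterministic scan-everything strategy are all illustrative simplifications.

```javascript
// Toy model of stylized fact 4: each bot scans for vulnerable hosts and
// installs a copy of itself on them, with no person at any keyboard.
function spreadOnce(hosts) {
  const newlyInfected = [];
  for (const bot of hosts.filter((h) => h.infected)) {
    // A real botnet probes random addresses; scanning every host keeps
    // this toy deterministic. Patched (non-vulnerable) hosts are immune.
    for (const target of hosts) {
      if (target.vulnerable && !target.infected) {
        newlyInfected.push(target);
      }
    }
  }
  for (const t of newlyInfected) t.infected = true; // automatic "install"
  return hosts.filter((h) => h.infected).length;
}

// Five hosts: one initial zombie, three vulnerable machines, one patched.
const hosts = [
  { vulnerable: true, infected: true },   // the attacker's first zombie
  { vulnerable: true, infected: false },
  { vulnerable: true, infected: false },
  { vulnerable: false, infected: false }, // patched: cannot be compromised
  { vulnerable: true, infected: false },
];
console.log(spreadOnce(hosts)); // 4: every vulnerable host; the patched one is spared
```

The point of the sketch is the absence of any human step: infection is just a condition check inside a loop, which is exactly what makes the behavior invisible to folk models built around a person "breaking in."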
These stylized facts are not true for all botnets, but they hold for many of the current, large, well-known, and well-studied botnets. I believe that botnet software effectively takes advantage of the limited and incomplete nature of home computer users’ folk models. Table 4 illustrates how each model does or does not incorporate the possibility of each of the stylized facts about botnets.

Botnets attack third parties.
None of the hacker models would predict that compromises would be used to attack third parties. Respondents who held either the Big Fish or the Contractor mental model believe that, since hackers don’t want anything on their computer, hackers would target other computers and leave their unwanted computer alone. Respondents with the Burglar model believe that they might be a target, but only because the hacker wants something that might be on their computer; once the hacker either finds what he is looking for, or finds nothing interesting, the hacker would leave. Respondents with the Graffiti model believe that hacking and vandalizing the computer is the end goal; it would never cross their minds that the computer might then be used to attack third parties. None of the respondents used their virus models to discuss potential third parties either. A couple of respondents with the Viruses are Bad model mentioned that once they got a virus, it might try to “spread.” However, they had no idea how this spreading might happen. Spreading is a form of harm to third parties, but it is not the coordinated and intentional harm that botnets cause. Respondents who employed the other three virus models never mentioned the possibility of spreading beyond their computers; they were mostly focused on what the virus would do to them, not on how it might affect others. And for those who did have an idea of how viruses spread, those ideas only involved spreading through webpages and email. They don’t run a webpage on their computer, and no one acknowledged that a virus could use their email to send copies out.

Botnets only want the Internet connection.
No one in this study could conceive of a hacker or virus that only wanted the Internet connection of their computer. The three crime-based hacker models (Burglar, Big Fish, and Contractor) all hold that hackers are actively looking for something stored on the computer; all the respondents with these three models believed that their computer had (or might have) some specific and unique information that hackers wanted. Respondents with the Graffiti model believed that computers are a sort of canvas for digital mischief. I would guess that they might accept that botnet owners would only want the Internet connection; they believe there is nothing unique about their computer that makes hackers want to paint digital graffiti on it. None of the virus models has anything to say about this fact. Respondents with the Viruses are Bad and Buggy Software models didn’t attribute any intentionality to viruses. Respondents with the Mischief and Support Crime models believed viruses were created for a reason, but didn’t seem to think about how a virus might use the computer to spread.

Botnets don’t harm the host computer.
This is the one stylized fact on this list that any respondents explicitly mentioned. Respondents with the Support Crime model believe that viruses might try to hide on the computer and not display any outward signs of their presence. Respondents who employ one of the other three virus models would find this strange; to them, viruses always create visible effects. To users with the Mischief model, these visible effects are the main point of the virus! Additionally, the three crime-related folk models of hackers all include the idea that a ‘break in’ by hackers might not harm the computer: since hackers are just looking for information, they don’t necessarily want to harm the computer. Respondents who use the Graffiti model would find compromises that don’t harm the computer to be strange, as the main purpose of ‘breaking into’ computers is to vandalize them.

Botnets spread automatically.
The idea that botnets spread without human intervention would be strange to most of the respondents. Almost all of the respondents believed that hackers had to be sitting in front of some computer somewhere when they were “breaking into” computers. Indeed, two of the respondents even asked the interviewer how it was possible to use a computer without being in front of it. Most respondents believed that viruses also generally required some form of human intervention in order to spread. Viruses could be ‘caught’ by visiting webpages, by downloading software, or by clicking on emails, but all of those require someone to actively use the computer. Only one subject, Jack, explicitly mentioned that viruses can “just happen.” Respondents with the Viruses are Bad model understood that viruses could spread, but didn’t know how. These respondents might not be surprised to learn that viruses can spread without human intervention, but probably haven’t thought about it enough for that fact to be salient.

Botnets are extremely cleverly designed. They take advantage of home computer users by operating in a very different manner from the one conceived of by the respondents in this study. The only stylized fact listed above that a decent number of my respondents would recognize as a property of attacks is that botnets don’t cause harm to the host computer. And not everyone in the study would believe this; some respondents held a mental model in which not harming the computer wouldn’t make sense. This analysis illustrates why eliminating botnets is so difficult. Many home computer users probably have folk models similar to those of the respondents in this study. If so, botnets look very different from the threats envisioned by many home computer users. Since home computer users do not see botnets as a potential threat, they do not take appropriate steps to protect themselves.

Home computer users conceptualize security threats in multiple ways, and consequently make different decisions based on their conceptualization. In my interviews, I found four distinct ways of thinking about malicious software as a security threat: the ‘viruses are bad,’ ‘buggy software,’ ‘viruses cause mischief,’ and ‘viruses support crime’ models. I also found four distinct ways of thinking about malicious computer users as a threat: thinking of malicious others as ‘graffiti artists,’ ‘burglars,’ ‘internet criminals who target big fish,’ and ‘contractors to organized crime.’ I did not use a generalizable sampling method. I am able to describe a number of different folk models, but I cannot estimate how prevalent each model is in the population. Such estimates would be useful in understanding nationwide vulnerability, but I leave them to future work. I also cannot say whether my list of folk models is exhaustive — there may be more models than I describe — but it does represent the opinions of a variety of home computer users. Indeed, the snowball sampling method increases the chances that I interviewed users with similar folk models despite the demographic heterogeneity of my sample. Previous literature [12, 15] described some basic security beliefs held by non-technical users; I provide structure to these findings by understanding how home computer users group these beliefs into semi-coherent mental models.

My primary contribution with this study is an understanding of why users strictly follow some security advice from computer security experts and ignore other advice. This illustrates one major problem with security education efforts: they do not adequately explain the threats that home computer users face; rather, they focus on practical, actionable advice. But without an understanding of threats, home computer users intentionally choose to ignore advice that they don’t believe will help them.
Security education efforts should focus not only on recommending what actions to take, but also emphasize why those actions are necessary. Following the advice of Kempton [19], security experts should not evaluate these folk models on the basis of correctness, but rather on how well they meet the needs of the folk that possess them. Likewise, when designing new security technologies, we should not attempt to force users into a more ‘correct’ mental model; rather, we should design technologies that encourage users with limited folk models to be more secure. Effective security technologies need to protect the user from attacks, but also expose potential threats to the user in a way the user understands so that he or she is motivated to use the technology appropriately.

I appreciate the many comments and help throughout the whole project from Jeff MacKie-Mason, Judy Olson, Mark Ackerman, and Brian Noble. Tiffany Vienot also helped me greatly in explaining my methodology clearly. This material is based upon work supported by the National Science Foundation under Grant No. CNS 0716196.

[1] A. Adams and M. A. Sasse. Users are not the enemy. Communications of the ACM, 42(12):40–46, December 1999.
[2] R. Anderson. Why cryptosystems fail. In CCS ’93: Proceedings of the 1st ACM conference on Computer and communications security, pages 215–227. ACM Press, 1993.
[3] F. Asgharpour, D. Liu, and L. J. Camp. Mental models of computer security risks. In Workshop on the Economics of Information Security (WEIS), 2007.
[4] P. Bacher, T. Holz, M. Kotter, and G. Wicherski. Know your enemy: Tracking botnets. From the Honeynet Project, March 2005.
[5] P. Barford and V. Yegneswaran. An inside look at botnets. In Special Workshop on Malware Detection, Advances in Information Security. Springer-Verlag, 2006.
[6] J. L. Camp. Mental models of privacy and security. August 2006.
[7] L. J. Camp and C. Wolfram. Pricing security. In Proceedings of the Information Survivability Workshop, 2000.
[8] A. Collins and D. Gentner. How people construct mental models. In D. Holland and N. Quinn, editors, Cultural Models in Language and Thought. Cambridge University Press, 1987.
[9] R. Contu and M. Cheung. Market share: Security market, worldwide 2008. Gartner Report, June 2009.
[10] L. F. Cranor. A framework for reasoning about the human in the loop. In Usability, Psychology, and Security Workshop. USENIX, 2008.
[11] R. D’Andrade. The Development of Cognitive Anthropology. Cambridge University Press, 2005.
[12] P. Dourish, R. Grinter, J. D. de la Flor, and M. Joseph. Security in the wild: User strategies for managing security as an everyday, practical problem. Personal and Ubiquitous Computing, 8(6):391–401, November 2004.
[13] D. M. Downs, I. Ademaj, and A. M. Schuck. Internet security: Who is leaving the ’virtual door’ open and why? First Monday, 14(1-5), January 2009.
[14] R. E. Grinter, W. K. Edwards, M. W. Newman, and N. Ducheneaut. The work to make a home network work. In Proceedings of the 9th European Conference on Computer Supported Cooperative Work (ECSCW ’05), pages 469–488, September 2005.
[15] J. Gross and M. B. Rosson. Looking for trouble: Understanding end user security management. In Symposium on Computer Human Interaction for the Management of Information Technology (CHIMIT), 2007.
[16] C. Herley. So long, and no thanks for all the externalities: The rational rejection of security advice by users. In Proceedings of the New Security Paradigms Workshop (NSPW), September 2009.
[17] P. Johnson-Laird, V. Girotto, and P. Legrenzi. Mental models: a gentle guide for outsiders. 1998.
[18] P. N. Johnson-Laird. Mental models in cognitive science. Cognitive Science: A Multidisciplinary Journal, 4(1):71–115, 1980.
[19] W. Kempton. Two theories of home heat control. Cognitive Science: A Multidisciplinary Journal, 10(1):75–90, 1986.
[20] A. J. Kuzel. Sampling in qualitative inquiry. In B. Crabtree and W. L. Miller, editors, Doing Qualitative Research, chapter 2, pages 31–44. Sage Publications, Inc., 1992.
[21] J. Markoff. Attack of the zombie computers is a growing threat, experts say. New York Times, January 7 2007.
[22] D. Medin, N. Ross, S. Atran, D. Cox, J. Coley, J. Proffitt, and S. Blok. Folkbiology of freshwater fish. Cognition, 99(3):237–273, April 2006.
[23] M. B. Miles and M. Huberman. Qualitative Data Analysis: An Expanded Sourcebook. Sage Publications, Inc., 2nd edition, 1994.
[24] A. J. Onwuegbuzie and N. L. Leech. Validity and qualitative research: An oxymoron? Quality and Quantity, 41:233–249, 2007.
[25] D. Russell, S. Card, P. Pirolli, and M. Stefik. The cost structure of sensemaking. In Proceedings of the INTERACT ’93 and CHI ’93 conference on Human factors in computing system, 1993.
[26] Trend Micro. Taxonomy of botnet threats. Whitepaper, November 2006.

This appendix contains samples of data matrix displays that were developed during the data analysis phase of this project.

Rick Wash
email: wash [at] msu [dot] edu


Q: Why can I sometimes see about:blank and/or wyciwyg: entries? What scripts are causing this?
A:   about:blank is the common URL designating empty (newly created) web documents. A script can “live” there only if it has been injected (with document.write() or DOM manipulation, for instance) by another script, which must have its own permissions to run. This usually happens when a master page creates (or statically contains) an empty sub-frame (automatically addressed as about:blank) and then populates it using scripting. Hence, if the master page is not allowed, no script can be placed inside the about:blank empty page, and its “allowed” privileges are void. Given the above, the risks in keeping about:blank allowed should be very low, if any. Moreover, some Firefox extensions need it to be allowed for scripting in order to work. Sometimes, especially on partially allowed sites, you may also see a wyciwyg: entry. It stands for “What You Cache Is What You Get”, and identifies pages whose content is generated by JavaScript code through functions like document.write(). If you can see such an entry, you already allowed the script generating it, hence the above about:blank trust discussion applies to this situation as well.
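The rule described above can be sketched as a tiny decision function. This is not NoScript’s actual code; `frameScriptAllowed`, its parameters, and the whitelist-of-hostnames representation are hypothetical simplifications of the behavior the answer describes.

```javascript
// Hypothetical sketch of the about:blank rule: an empty frame has no
// origin of its own, so whether scripts may run inside it reduces to
// whether the page that created (and populated) it is allowed.
function frameScriptAllowed(frameUrl, parentHostname, whitelist) {
  if (frameUrl === "about:blank") {
    // Scripts here can only have been injected by the parent page.
    return whitelist.includes(parentHostname);
  }
  // Ordinary frames are judged by their own hostname.
  return whitelist.includes(new URL(frameUrl).hostname);
}

const whitelist = ["trusted.example"];
// Empty sub-frame created by an allowed master page: scripts may run.
console.log(frameScriptAllowed("about:blank", "trusted.example", whitelist)); // true
// The same empty frame under a non-allowed page: nothing can run inside it.
console.log(frameScriptAllowed("about:blank", "evil.example", whitelist));    // false
```

This is why keeping about:blank “allowed” is low-risk: the permission only ever takes effect through an already-allowed parent page.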

Q: Why should I allow JavaScript, Java, Flash and plugin execution only for trusted sites?
A:   JavaScript, Java and Flash, even being very different technologies, do have one thing in common: they execute code coming from a remote site on your computer. All three implement some kind of sandbox model, limiting the activities remote code can perform: e.g., sandboxed code shouldn’t read/write your local hard disk nor interact with the underlying operating system or external applications. Even if the sandboxes were bulletproof (not the case, read below), and even if you or your operating system wrap the whole browser in another sandbox (e.g. IE7+ on Vista or Sandboxie), the mere ability to run sandboxed code inside the browser can be exploited for malicious purposes, e.g. to steal important information you store or enter on the web (credit card numbers, email credentials and so on) or to “impersonate” you, e.g. in fake financial transactions, launching “cloud” attacks like Cross Site Scripting (XSS) or CSRF, with no need for escaping your browser or gaining privileges higher than a normal web page. This alone is enough reason to allow scripting on trusted sites only. Moreover, many security exploits aim to achieve “privilege escalation”, i.e. exploiting an implementation error of the sandbox to acquire greater privileges and perform nasty tasks like installing trojans, rootkits and keyloggers.

This kind of attack can target JavaScript, Java, Flash and other plugins as well:

  1. JavaScript looks like a very precious tool for bad guys: most of the fixed browser-exploitable vulnerabilities discovered to date were ineffective if JavaScript was disabled. Maybe the reason is that scripts are easier to test and search for holes, even if you’re a newbie hacker: everybody and his brother believes he’s a JavaScript programmer :P
  2. Java has a better history, at least in its “standard” incarnation, the Sun JVM. There have, however, been viruses written for the Microsoft JVM, like the ByteVerifier.Trojan. Anyway, the Java security model allows signed applets (applets whose integrity and origin are guaranteed by a digital certificate) to run with local privileges, i.e. just as if they were regular installed applications. This, combined with the fact that there are always users who, in front of a warning like “This applet is signed with a bad/fake certificate. You DON’T want to execute it! Are you so mad as to execute it anyway? [Never!] [Nope] [No] [Maybe]”, will search, find and hit the “Yes” button, caused some bad reputation even for Firefox (notice that the article is quite lame, but as you can imagine it had much echo).
  3. Flash used to be considered relatively safe, but since its usage became so widespread, severe security flaws have been found at a higher rate. Flash applets have also been exploited to launch XSS attacks against the sites hosting them.
  4. Other plugins are harder to exploit, because most of them don’t host a virtual machine like Java and Flash do, but they can still expose holes like buffer overruns that may execute arbitrary code when fed specially crafted content. Recently we have seen several of these plugin vulnerabilities, affecting Acrobat Reader, QuickTime, RealPlayer and other multimedia helpers.

Please notice that none of the aforementioned technologies is usually (95% of the time) affected by publicly known and still unpatched exploitable problems, but the point of NoScript is just this: preventing the exploitation of security holes even before they become known, because when they are discovered it may be too late ;) The most effective way is disabling the potential threat on untrusted sites.

Q:  What is a trusted site?
A:  A “trusted site” is a site whose owner is well identifiable and reachable, so I have someone to sue if he hosts malicious code which damages or steals my data.* If a site qualifies as “trusted”, there’s no reason why I shouldn’t allow JavaScript, Java or Flash. If some content is annoying, I can disable it with AdBlock. What I’d like to stress here is that “trust” is not necessarily a technical matter. Many online banking sites require JavaScript and/or Java, even in contexts where these technologies are absolutely useless and abused: for more than 2 years I’ve been asking my bank to correct a very stupid JavaScript bug preventing login from working with Firefox. I worked around this bug writing an ad hoc bookmarklet, but I’m not sure the average Joe user could.

So, should I trust their mediocre programmers for my security? Anyway, if something nasty happens with my online bank account because it’s unsafe, I’ll sue them to death (or better, I’ll let the world know) until they refund me. So you may say “trust” equals “accountability”. If you’re more on the technical side and you want to examine the JavaScript source code before allowing, you can help yourself with JSView.

* You may ask: what if a site I really trust gets compromised? Will I get infected as well because I’ve got it in my whitelist, and end up having to sue as you said? No, most probably you won’t. When a respectable site gets compromised, 99.9% of the time the malicious scripts are still hosted on a different domain which is likely not in your whitelist, and which just gets included by the pages you trust. Since NoScript blocks 3rd party scripts which have not been explicitly whitelisted themselves, you’re still safe, with the additional benefit of an early warning :)
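The third-party behavior in this footnote follows the same principle: what matters is the origin a script is itself served from, not the page that includes it. Again, this is an illustrative model written for this FAQ discussion, not NoScript’s real implementation; the function and origin names are made up.

```javascript
// Illustrative model of the footnote's point: a script runs only if ITS
// OWN hosting origin is whitelisted, regardless of which page pulls it in.
function scriptAllowed(scriptHostname, whitelist) {
  return whitelist.includes(scriptHostname);
}

const whitelist = ["respectable-site.example"];
// The compromised-but-trusted page's own scripts still run...
console.log(scriptAllowed("respectable-site.example", whitelist)); // true
// ...but the injected script, hosted on the attacker's domain, stays
// blocked, because that domain was never whitelisted itself.
console.log(scriptAllowed("malware-host.example", whitelist));     // false
```

Blocking the second case is also what produces the “early warning”: a forbidden third-party entry suddenly appearing on a site you trust is itself a signal that something changed.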


Freedom Box is the name we give to a free software system built to keep your communications free and private, whether you are chatting with friends or protesting in the street. Freedom Box software is particularly tailored to run on “plug servers”: compact computers no larger than the power adapters for electronic appliances. Located in people’s homes or offices, such inexpensive servers can provide privacy in normal life, and safe communications for people seeking to preserve their freedom under oppressive regimes.

Why Freedom Box?
Because social networking and digital communications technologies are now critical to people fighting to make freedom in their societies, or simply trying to preserve their privacy where the Web and other parts of the Net are intensively surveilled by profit-seekers and government agencies. Yet instead of technology supporting these new modes of communication, smartphones, mobile tablets, and other common forms of consumer electronics are being built as “platforms” to control their users and monitor their activity. Freedom Box exists to provide people with privacy-respecting technology alternatives that enable normal communication in normal times, and that offer ways to collaborate safely and securely with others in building social networks of protest, demonstration, and mobilization for political change in the not-so-normal times. Imagine if your next wireless router, set-top box, or other small computing device came with extra features: it knew how to securely contact your friends and business associates, and it stored your personal data, securely backing it up and generally maintaining your presence in the various networks you have come to rely on for communicating with people in society. Such a box would not only make your participation in network communication easier in your daily life, increasing your privacy and the security of the computers in your life; it would also have many unique advantages during times of crisis.

Such a box could help in disasters by creating a mesh network with your neighbors to replace the centralized internet connections that go out with the lights or are cut by hostile governments. Such a box would make it harder for governments and invasive corporate interests to reach your data and casually profile you for their own uses. Such a box would also let you lend aid to friends in need by sharing your unfettered internet access with those trapped behind government firewalls that prevent them from learning about the world or speaking plainly to it. Such boxes exist in the form of plug computers and mesh routers, tiny, inexpensive machines that can take the place of other electronics in your life, that draw so little power (often as little as 5W) that they can be run off of batteries or solar panels. We even have free software, software meant to empower and support individuals, to do all of the things mentioned above.

What we need is the glue to hold all of that together: the architecture of which pieces stack together in which way, to turn a collection of possibilities into an appliance so easy to use that you forget you even have one, at least until that moment when you really need it. The FreedomBox Foundation was built to put this all together. It was started by community leaders with long track records and lives as a community project. But the past few months have shown us all that there are millions of people around the world who need such a device now, and we need to pick up the pace and get them made so that next time, our friends have some help. That is why we are asking for your help.

In Need of Community Angels
There are many people out there, in many different communities, who feel the same way we do about profiling, internet kill switches, and the need to give people greater independence in their network communications, but turning all that interest and the offers of help into a real software suite is going to take coordination, organization, and bringing people together in focused groups to get this system built. That is why the FreedomBox Foundation was created, but it requires real work and a real demonstration of community size and support to keep everything moving. If we can meet our funding goal now, we can start doing that work full time, build road maps for the core components, and put together a series of conferences/hack days to pull the community together. The Freedom Box is a community-based endeavor from the ground up. That includes everything from architecture and engineering through to administration and funding. It’s the reason why we’re seeking the first round of investment from our own community. We want it to be clear that this project begins and remains in the hands of the people who give it life at every stage and in every part of the project. Almost all the software we need to make this work is already out there in the free software world, but if we are going to pull it all together, first we need to get up to speed. Please join us and help keep the momentum going from 0 to 60!

When will we get the software?
The release timeline for this software depends a good deal on how much support we can gather this month, which is why we are reaching out now. If we can reach our goal, we hope to release a first version of the software six months later. This is our best working estimate, but we will know a lot more in the next 30 days and will continue to update everyone here and on the Foundation’s website. If you are pledging enough for any of the software rewards ($50 and up), know that we hope to have everything shipped out within a week of the 0.1 release, if not on the release day itself, and please keep your eye here and on the Foundation site for updates.

What will Freedom Boxes do?

A plug server or other digital appliance in your home running the Freedom Box software can provide many services to you and your friends, automatically and securely. The following is a short list of the services we think are important:

  • Safe social networking, in which, without losing touch with any of your friends, you replace Facebook, Flickr, Twitter and other centralized services with privacy-respecting federated services;
  • Secure backup: Your data automatically stored in encrypted format on the Freedom Boxes of your friends or associates, thus protecting your personal data against seizure or loss;
  • Network neutrality protection: If your ISP starts limiting or interfering with your access to services on the Net, your Freedom Box can communicate with your friends to detect the limitations and route traffic around them. Network censorship is automatically routed around, whether for you or for your friends in societies with oppressive national firewalls;
  • Safe anonymous publication: Friends or associates outside zones of network censorship can automatically forward information from people within them, enabling safe, anonymous publication;
  • Home network security, with real protection against intrusion and the security threats aimed at Microsoft Windows or other risky computers on your network;
  • Encrypted email, with seamless encryption and decryption;
  • Private voice communications: Freedom Box users can make voice-over-Internet phone calls to one another or to any phone. Calls between Freedom Box users will be encrypted securely.
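The secure-backup item above can be sketched in a few lines. This is purely illustrative, not the FreedomBox implementation: the friend hostnames and helper functions are invented, and real encryption of the chunks before they leave home (e.g. via GnuPG) is omitted to keep the sketch stdlib-only.

```python
# Illustrative sketch of spreading backup chunks across friends' boxes.
# In a real Freedom Box each chunk would be encrypted before leaving home.
import hashlib

def split_chunks(data: bytes, size: int = 8) -> list:
    """Split a backup blob into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def pick_friend(chunk: bytes, friends: list) -> str:
    """Deterministically map a chunk to one friend's box via its hash."""
    digest = hashlib.sha256(chunk).digest()
    return friends[int.from_bytes(digest[:4], "big") % len(friends)]

friends = ["alice.box", "bob.box", "carol.box"]  # hypothetical peer boxes
backup = b"calendar, contacts, photos"
plan = [(chunk, pick_friend(chunk, friends)) for chunk in split_chunks(backup)]

# Reassembly is concatenation in order; losing one friend's box only costs
# the chunks placed there (mitigated by replication in a real deployment).
assert b"".join(chunk for chunk, _ in plan) == backup
```

Because placement is a pure function of the chunk’s hash, any box can recompute where a chunk lives without a central directory, which fits the federated, no-middleman design described above.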

Freedom Boxes can do anything that computers running the Debian GNU/Linux free operating system can do, which means they have full access to thousands of application packages. Freedom Boxes are Debian server systems specially configured to provide users with privacy-protection and safe communications services. Freedom Boxes will become more capable with time, because they can upgrade themselves safely and securely using well-tested and stable automatic upgrade mechanisms already deployed in hundreds of thousands of Debian and Debian-descended installations around the world.
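The upgrade mechanism that paragraph refers to can be seen in stock Debian today. As a hedged example (the package and file names below are standard Debian; whether Freedom Box will use exactly this mechanism is an assumption), unattended security upgrades are enabled like this:

```shell
# Install Debian's standard automatic-upgrade tool (run as root).
apt-get install unattended-upgrades

# Enable periodic runs by dropping two lines into APT's configuration,
# in /etc/apt/apt.conf.d/20auto-upgrades:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";

# On Debian the same file can also be generated interactively:
dpkg-reconfigure --priority=low unattended-upgrades
```

With that in place the box fetches and applies updates from the distribution’s signed archives on its own, which is the kind of hands-off, well-tested upgrade path the text describes.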

Freeing the Internet one Server at a time
by Steven J. Vaughan-Nichols / February 16, 2011

Free software isn’t about free services or free beer; it’s about intellectual freedom. As recent episodes such as censorship in China, the Egyptian government turning off the Internet, and Facebook’s constant spying have shown, freedom and privacy on the Internet are under constant assault. Now Eben Moglen, law professor at Columbia University and renowned free software legal expert, has proposed a way to combine free software with the original peer-to-peer (P2P) design of the Internet to liberate users from the control of governments and big-brother-like companies: Freedom Box.

In a recent Freedom in the Clouds speech in NYC, Moglen explained what he sees as the Internet’s current problems and his proposed solution. First, here’s the trouble with the Internet today as Moglen sees it:

[6:13] “It begins of course with the Internet. Designed as a network of peers without any intrinsic need for hierarchical or structural control, and assuming that every switch in the net is an independent free-standing entity whose volition is equivalent to the human beings who control it … But it never really worked out that way.”

The Software Problem [7:18]: “It was a simple software problem and it has a simple three syllable name. Its name was ‘Microsoft’. Conceptually there was a network which was designed as a system of peer nodes, but the operating software … that came to occupy the network over the course of a decade-and-a-half was built around a very clear idea that had nothing to do with peers. It was called ’server/client architecture’.”

The Great Idea Behind Windows [9:22]: “It was the great idea of Windows, in an odd way, to create a political archetype in the net that reduced the human being to the client, and created a big centralized computer, which we might refer to as the server, that provided things to the human being on ‘take it or leave it’ terms. And unfortunately everyone took it because they didn’t know how to leave once they got in. Now, the net was made up of servers in the center and clients at the edge. Clients had rather little power and servers had quite a lot … As storage gets cheaper, as processing gets cheaper, as complex services that scale in ways that are hard to use small computers for … the hierarchical nature of the net came to seem like it was meant to be there.”

Logs [10:44]: “One more thing happened about that time … Servers began to keep logs. That’s a good decision … But if you have a system which centralizes servers, and the servers centralize their logs, then you are creating vast repositories of hierarchically organized data about people at the edges of the network that they do not control, and unless they are experienced in the operation of servers, will not understand the comprehensiveness of [server-collected user data].”

The Recipe for Disaster [12:01]: “So we built a network out of a communications architecture designed for peering, which we defined in client-server style, which we then defined to be the dis-empowered client at the edge and the server in the middle. We aggregated processing and storage increasingly in the middle and we kept the logs — that is, information about the flows of information in the net — in centralized places far from the human beings who controlled, or at any rate thought they controlled, the operation of the computers that increasingly dominated their lives.”

This ended up creating “an architecture that was very subject to misuse, indeed it was begging to be misused. Now we are getting the misuse we set up … There are a lot of reasons for making clients dis-empowered … There are many overlapping rights owners, as they see themselves, each of whom has a stake in dis-empowering a client at the edge of the network. To prevent particular hardware from being moved from one network to another, to prevent particular hardware from playing music not bought at the monopoly of music in the sky.”

In particular, Moglen has no love at all for Facebook. “The human race has susceptibility to harm but Mr. Zuckerberg has attained an unenviable record. He has done more harm to the human race than anybody else his age. Because he harnessed Friday night, that is, ‘Everybody needs to get laid,’ and turned it into a structure for degenerating the integrity of human personality, and he has to a remarkable extent succeeded with a very poor deal, namely ‘I will give you free web-hosting and some PHP doodads and you get spying for free all the time.’ And it works.

“How could that have happened? There was no architectural reason. Facebook is the web with, ‘I keep all the logs, how do you feel about that?’ It’s a terrarium for what it feels like to live in a Panopticon built out of web parts. And it shouldn’t be allowed. That’s a very poor way to deliver those services. They are grossly overpriced at ‘spying all the time’, they are not technically innovative. They depend on an architecture subject to misuse and the business model that supports them is misuse. There isn’t any other business model for them. This is bad. I’m not suggesting it should be illegal. It should be obsolete. We’re technologists, we should fix it.”

So, what’s the solution to this client/server architecture and all the abuses against freedom and privacy it enables? Moglen turns to inexpensive server hardware. He told the New York Times that “cheap, small, low-power plug servers” are the start. These are small devices “the size of a cellphone charger, running on a low-power chip. You plug it into the wall and forget about it.” Almost anyone could have one of these tiny servers, which are now produced for limited purposes but could be adapted to a full range of Internet applications, he said. “They will get very cheap, very quick,” he continued. “They’re $99; they will go to $69. Once everyone is getting them, they will cost $29.”

Such plug-in servers are already shipping. They include the TonidoPlug, the SheevaPlug, and the GuruPlug.

The point of these Freedom servers is to address the privacy and control issues of “social networking and digital communications technologies, [which] are now critical to people fighting to make freedom in their societies or simply trying to preserve their privacy where the Web and other parts of the Net are intensively surveilled by profit-seekers and government agencies.” This needs to be done because “smartphones, mobile tablets, and other common forms of consumer electronics are being built as ‘platforms’ to control their users and monitor their activity.”

What runs on these plug servers is where Linux and open-source software come in. The one firm software decision that’s been made so far is that the base operating system will be the latest release of Debian Linux. This version of Debian is the one that, for better or worse, contains no proprietary hardware drivers or software.

You say you want a revolution?
by Dan Goodin / 17th February 2011

Concerned about Facebook, Google, and other companies that make billions brokering sensitive information, free-software champion Eben Moglen has unveiled a plan to populate the internet with tiny, low-cost boxes that are designed to preserve individuals’ personal privacy. The Freedom Box, as the chairman of the Software Freedom Law Center has christened it, would be no bigger than power adapters for electronic appliances. The inexpensive devices would be deployed in a peer-to-peer fashion in homes and offices to process email, voice-over-IP communications, and the sharing of pictures, among other things. The decentralized structure of the devices is in stark contrast to today’s biggest internet providers, which offer the same services in exchange for users turning over some of their most trusted secrets. Public enemy No. 1 is Facebook founder Mark Zuckerberg, who in Moglen’s eyes, “has done more harm to the human race than anybody else his age.”

“He has to a remarkable extent succeeded with a very poor deal, namely ‘I will give you free web-hosting and some PHP doodads and you get spying for free all the time,’” Moglen said during a meeting last year of the Internet Society’s New York branch. “And it works.” As Moglen envisions them, Freedom Boxes would be used to perform a wealth of services that most of the world has been brainwashed into believing are better performed in the cloud. Secure backups that automatically store data in encrypted form would be performed on the Freedom Boxes of our friends, just as their encrypted data would be stored on ours. The boxes would also be used to send and receive encrypted email, to make VoIP calls, and to act as a safer alternative to social-networking sites such as Facebook and LinkedIn. The guts of the boxes would be the Debian distribution of Linux, along with countless free applications that would presumably be developed under the same model as most of today’s open source software.

The Freedom Box website gives no timeline for delivery, but Moglen told The New York Times that he could build version 1.0 in one year if he could raise “slightly north of $500,000.” The cost of plug-in devices is about $99 right now, but Moglen said they’ll eventually sell for about $29. They’ll run on a low-power chip. “You plug it into the wall and forget about it,” he told the NYT.

With Facebook and Twitter getting credit for fomenting protests and revolutions in the Middle East, Moglen says the ability to connect online carries immeasurable promise. But right now, most of the organizing is taking place on centralized, for-profit websites with ethics that can easily be compromised. “As a result of which, we are watching political movements of enormous value, capable of transforming the lives of hundreds of millions of people, resting on a fragile basis, like, for example, the courage of Mr. Zuckerberg, or the willingness of Google to resist the state, where the state is a powerful business partner and a party Google cannot afford frequently to insult.”

Eben Moglen
email : moglen [at] columbia [dot] edu

Software Freedom, Privacy, and Security for Web 2.0 and Cloud Computing
A Speech given by Eben Moglen at a meeting of the Internet Society’s New York branch on Feb 5, 2010

It’s a pleasure to be here. I would love to think that the reason that we’re all here on a Friday night is that my speeches are so good. I actually have no idea why we’re all here on a Friday night but I’m very grateful for the invitation. I am the person who had no date tonight so it was particularly convenient that I was invited for now.

So, of course, I didn’t have any date tonight. Everybody knows that. My calendar’s on the web.

The problem is that problem. Our calendar is on the web. Our location is on the web. You have a cell phone and you have a cell phone network provider and if your cell phone network provider is Sprint then we can tell you that several million times last year, somebody who has a law enforcement ID card in his pocket somewhere went to the Sprint website and asked for the realtime location of somebody with a telephone number and was given it. Several million times. Just like that. We know that because Sprint admits that they have a website where anybody with a law enforcement ID can go and find the realtime location of anybody with a Sprint cellphone. We don’t know that about AT&T and Verizon because they haven’t told us.

But that’s the only reason we don’t know, because they haven’t told us. That’s a service that you think of as a traditional service – telephony. But the deal that you get with the traditional service called telephony contains a thing you didn’t know, like spying. That’s not a service to you but it’s a service and you get it for free with your service contract for telephony. You get for free the service of advertising with your gmail which means of course there’s another service behind which is untouched by human hands, semantic analysis of your email. I still don’t understand why anybody wants that. I still don’t understand why anybody uses it but people do, including the very sophisticated and thoughtful people in this room.

And you get free email service and some storage which is worth exactly a penny and a half at the current price of storage and you get spying all the time.

And for free, too.

And your calendar is on the Web and everybody can see whether you have a date Friday night and you have a status – “looking” – and you get a service for free, of advertising “single: looking”. Spying with it for free. And it all sort of just grew up that way in a blink of an eye and here we are. What’s that got to do with open source? Well, in fact it doesn’t have anything to do with open source but it has a whole lot to do with free software. Yet, another reason why Stallman was right. It’s the freedom right?

So we need to back up a little bit and figure out where we actually are and how we actually got here and probably even more important, whether we can get out and if so, how? And it isn’t a pretty story, at all. David’s right. I can hardly begin by saying that we won given that spying comes free with everything now. But, we haven’t lost. We’ve just really bamboozled ourselves and we’re going to have to un-bamboozle ourselves really quickly or we’re going to bamboozle other innocent people who didn’t know that we were throwing away their privacy for them forever.

It begins of course with the Internet, which is why it’s really nice to be here talking to the Internet society – a society dedicated to the health, expansion, and theoretical elaboration of a peer-to-peer network called “the Internet” designed as a network of peers without any intrinsic need for hierarchical or structural control and assuming that every switch in the Net is an independent, free-standing entity whose volition is equivalent to the volition of the human beings who want to control it.

That’s the design of the NET, which, whether you’re thinking about it as glued together with IPv4 or that wonderful improvement IPv6 which we will never use apparently, still assumes peer communications.

Of course, it never really worked out that way. There was nothing in the technical design to prevent it. Not at any rate in the technical design of the interconnection of nodes and their communication. There was a software problem. It’s a simple software problem and it has a simple three-syllable name. Its name is Microsoft. Conceptually, there was a network which was designed as a system of peer nodes, but the OS occupied the network in an increasingly – I’ll use the word, they use it about us, why can’t I use it back? – viral way over the course of a decade and a half. The software that came to occupy the network was built around a very clear idea that had nothing to do with peers. It was called “server client architecture”.

The idea that the network was a network of peers was hard to perceive after awhile, particularly if you were a, let us say, ordinary human being. That is, not a computer engineer, scientist, or researcher. Not a hacker, not a geek. If you were an ordinary human, it was hard to perceive that the underlying architecture of the Net was meant to be peerage because the OS software with which you interacted very strongly instantiated the idea of the server and client architecture.

In fact, of course, if you think about it, it was even worse than that. The thing called “Windows” was a degenerate version of a thing called “X Windows”. It, too, thought about the world in a server-client architecture, but one we would now think of as backwards: the server was the thing at the human being’s end. That was the basic X Windows conception of the world. It served communications with human beings at the end points of the Net to processes located at arbitrary places near the center, in the middle, or at the edge of the Net. It was the great idea of Windows, in an odd way, to create a political archetype in the Net which reduced the human being to the client and produced a big, centralized computer, which we might have called a server, which now provided things to the human being on take-it-or-leave-it terms.

They were, of course, quite take-it-or-leave-it terms and unfortunately, everybody took it because they didn’t know how to leave once they got in. Now the Net was made of servers in the center and clients at the edge. Clients had rather little power and servers had quite a lot. Storage got cheaper, processing got cheaper, and there arose complex services that scale in ways that are hard to use small computers for – or at any rate, hard to use dis-aggregated collections of small computers for – the most important of which is search. As such services began to populate that Net, the hierarchical nature of the Net came to seem like it was meant to be there. The Net was made of servers and clients, and the clients were the guys at the edge representing humans, and the servers were the things in the middle with lots of power and lots of data.

Now, one more thing happened about that time. It didn’t happen in Microsoft Windows computers although it happened in Microsoft Windows servers and it happened more in sensible OSs like Unix and BSD and other ones. Namely, servers kept logs. That’s a good thing to do. Computers ought to keep logs. It’s a very wise decision when creating computer OS software to keep logs. It helps with debugging, makes efficiencies attainable, makes it possible to study the actual operations of computers in the real world. It’s a very good idea.

But if you have a system which centralizes servers and the servers centralize their logs, then you are creating vast repositories of hierarchically organized data about people at the edges of the network that they do not control and, unless they are experienced in the operation of servers, will not understand the comprehensiveness of, the meaningfulness of, will not understand the aggregatability of.

So we built a network out of a communications architecture designed for peering, which we defined in client-server style, which we then defined to be the dis-empowered client at the edge and the server in the middle. We aggregated processing and storage increasingly in the middle and we kept the logs – that is, info about the flows of info in the Net – in centralized places far from the human beings who controlled, or thought they controlled, the operation of the computers that increasingly dominated their lives. This was a recipe for disaster.

This was a recipe for disaster. Now, I haven’t mentioned yet the word “cloud” which I was dealt on the top of the deck when I received the news that I was talking here tonight about privacy and the cloud.

I haven’t mentioned the word “cloud” because the word “cloud” doesn’t really mean anything very much. In other words, the disaster we are having is not the catastrophe of the cloud. The disaster we are having is the catastrophe of the way we misunderstood the Net under the assistance of the un-free software that helped us to understand it. What “cloud” means is that servers have ceased to be made of iron. “Cloud” means virtualization of servers has occurred.

So, out here in the dusty edges of the galaxy where we live in dis-empowered clienthood, nothing very much has changed. As you walk inward towards the center of the galaxy, it gets more fuzzy than it used to. We resolve now a halo where we used to see actual stars – servers with switches and buttons you can push and such. Instead, what has happened is that iron no longer represents a single server. Iron is merely a place where servers could be. So “cloud” means servers have gained freedom: freedom to move, freedom to dance, freedom to combine and separate and re-aggregate and do all kinds of tricks. Servers have gained freedom. Clients have gained nothing. Welcome to the cloud.

It’s a minor modification of the recipe for disaster. It improves the operability for systems that control the clients out there who were meant to be peers in a Net made of equal things.

So that’s the architecture of the catastrophe. If you think about it, each step in that architectural revolution: from a network made of peers, to servers that serve the communication with humans, to clients which are programs running on heavy iron, to clients which are the computers that people actually use in a fairly dis-empowered state and servers with a high concentration of power in the Net, to servers as virtual processes running in clouds of iron at the center of an increasingly hot galaxy and the clients are out there in the dusty spiral arms.

All of those decisions architecturally were made without any discussion of the social consequences long-term, part of our general difficulty in talking about the social consequences of technology during the great period of invention of the Internet done by computer scientists who weren’t terribly interested in Sociology, Social Psychology, or, with a few shining exceptions – freedom. So we got an architecture which was very subject to misuse. Indeed, it was in a way begging to be misused and now we are getting the misuse that we set up. Because we have thinned the clients out further and further and further. In fact, we made them mobile. We put them in our pockets and we started strolling around with them.

There are a lot of reasons for making clients dis-empowered and there are even more reasons for dis-empowering the people who own the clients and who might quaintly be thought of as the people who ought to control them. If you think for just a moment how many people have an interest in dis-empowering the clients that are the mobile telephones, you will see what I mean. There are many overlapping rights owners, as they think of themselves, each of whom has a stake in dis-empowering a client at the edge of the network: to prevent particular hardware from being moved from one network to another; to prevent particular hardware from playing music not bought at the great monopoly of music in the sky; to disable competing video delivery services in new chips I founded myself that won’t run popular video standards, good or bad. There are a lot of business models that are based around mucking with the control over client hardware and software at the edge, to deprive the human who quaintly thought that she purchased it from actually occupying the position that capitalism says owners are always in – that is, of total control.

In fact, what we have as I said a couple of years ago in between appearances here at another NYU function. In fact, what we have are things we call platforms. The word “platform” like the word “cloud” doesn’t inherently mean anything. It’s thrown around a lot in business talk. But, basically what platform means is places you can’t leave. Stuff you’re stuck to. Things that don’t let you off. That’s platforms. And the Net, once it became a hierarchically architected zone with servers in the center and increasingly dis-empowered clients at the edge, becomes the zone of platforms and platform making becomes the order of the day.

Some years ago a very shrewd lawyer who works in the industry said to me “Microsoft was never really a software company. Microsoft was a platform management company”. And I thought Yes, shot through the heart.

So we had a lot of platform managers in a hierarchically organized network and we began to evolve services. “Services” is a complicated word. It’s not meaningless by any means but it’s very tricky to describe it. We use it for a lot of different things. We badly need an analytical taxonomy of “services” as my friend and colleague Philippe Aigrain in Paris pointed out some 2 or 3 years ago. Taxonomies of “services” involve questions of simplicity, complexity, scale, and control.

To take an example, we might define a dichotomy between complex and simple services, in which simple services are things that any computer can perform for any other computer if it wants to, and complex services are things you can’t do with just a computer: you must do them with clusters or structures of some computational or administrative complexity. Search is a complex service. Indeed, search is the archetypal complex service. Given the one-way nature of links in the Web and other elements in the data architecture we are now living with (that’s another talk, another time), search is not a thing that we can easily distribute. The power in the market of our friends at Google depends entirely on the fact that search is not easily distributed. It is a complex service that must be centrally organized and centrally delivered. It must crawl the web in a unilateral direction, link by link, figuring out where everything is in order to help you find it when you need it. In order to do that, at least so far, we have not evolved good algorithmic and delivery structures for doing it in a decentralized way. So search becomes an archetypal complex service and it draws onto itself a business model for its monetization.

Advertising in the 20th century was a random activity. You threw things out and hoped they worked. Advertising in the 21st century is an exquisitely precise activity. You wait for a guy to want something and then you send him advertisements about what he wants and bingo it works like magic. So of course on the underside of a complex service called search there is a theoretically simple service called advertising which, when unified to a complex service, increases its efficiency by orders of magnitude and the increase of the efficiency of the simple service when combined with the complex one produces an enormous surplus revenue flow which can be used to strengthen search even more.

But that’s the innocent part of the story, and we don’t remain in the innocent part of the story, for a variety of reasons. I won’t be tedious on a Friday night and say it’s because the bourgeoisie is constantly engaged in destructively reinventing and improving its own activities, and I won’t be moralistic on a Friday night and say it’s because sin is ineradicable, human beings are fallen creatures, and greed is one of the sins we cannot avoid committing. I will just say that, as a sort of ordinary social process, we don’t stop at innocent. We go on. Which surely is the thing you should say on a Friday night. And so we went on.

Now, where we went on is really towards the discovery that all of this would be even better if you had all the logs of everything because once you have the logs of everything then every simple service is suddenly a goldmine waiting to happen and we blew it because the architecture of the Net put the logs in the wrong place. They put the logs where innocence would be tempted. They put the logs where the failed state of human beings implies eventually bad trouble and we got it.

The cloud means that we can’t even point in the direction of the server anymore, and because we can’t even point in the direction of the server anymore, we don’t have extra-technical or non-technical means of reliable control over this disaster in slow motion. You can make a rule about logs or data flow or preservation or control or access or disclosure, but your laws are human laws, and they occupy particular territory, and the server is in the cloud, and that means the server is always one step ahead of any rule you make, or two or three or six or: poof! I just realized I’m subject to regulation; I think I’ll move to Oceania now.

Which means that in effect, we lost the ability to use either legal regulation or anything about the physical architecture of the network to interfere with the process of falling away from innocence that was now inevitable in the stage I’m talking about, what we might call late Google stage 1.

It is here, of course, that Mr. Zuckerberg enters.

The human race has susceptibility to harm but Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age.

Because he harnessed Friday night. That is, everybody needs to get laid and he turned it into a structure for degenerating the integrity of human personality and he has to a remarkable extent succeeded with a very poor deal. Namely, “I will give you free web hosting and some PHP doodads and you get spying for free all the time”. And it works.

That’s the sad part, it works.

How could that have happened?

There was no architectural reason, really. There was no architectural reason really. Facebook is the Web with “I keep all the logs, how do you feel about that?” It’s a terrarium for what it feels like to live in a panopticon built out of web parts.

And it shouldn’t be allowed. It comes to that. It shouldn’t be allowed. That’s a very poor way to deliver those services. They are grossly overpriced at “spying all the time”. They are not technically innovative. They depend upon an architecture subject to misuse and the business model that supports them is misuse. There isn’t any other business model for them. This is bad.

I’m not suggesting it should be illegal. It should be obsolete. We’re technologists, we should fix it.

I’m glad I’m with you so far. When I come to how we should fix it later I hope you will still be with me because then we could get it done.

But let’s say, for now, that that’s a really good example of where we went wrong and what happened to us. It’s trickier with Gmail, because of that magical untouched-by-human-hands-iness. When I say to my students, “Why do you let people read your email?”, they say, “But nobody is reading my email; no human being ever touched it. That would freak me out; I’d be creeped out if guys at Google were reading my email. But that’s not happening, so I don’t have a problem.”

Now, this they cannot say about Facebook. Indeed, they know way too much about Facebook, if they let themselves really know it. You have read the stuff and you know. Facebook workers know who’s about to have a love affair before the people do, because they can see X obsessively checking the Facebook page of Y. There’s some very nice research done a couple of years ago at an MIT I shouldn’t name, by students I’m not going to describe, because they were doing a little denting of the Facebook terms of service in the course of their research. They were just scraping, but the purpose of their scraping was to demonstrate that you could find closeted homosexuals on Facebook.

They don’t say anything about their sexual orientation. Their friends are out; their interests are the interests of their friends who are out; their photos are tagged with their friends who are out; and they’re out, except they’re not out. They’re just out in Facebook, if anybody looks, which is not what they had in mind, surely, and not what we had in mind for them, surely. In fact, the degree of potential information inequality and disruption and difficulty that arises from a misunderstanding, a heuristic error, in the minds of human beings about what is and what’s not discoverable about them is our biggest privacy problem.

My students, and I suspect many of the students of teachers in this room too, show constantly in our dialog the difficulty. They still think of privacy as “the one secret I don’t want revealed”, and that’s not the problem. Their problem is all the stuff that’s the cruft, the data dandruff of life, that they don’t think of as secret in any way, but which aggregates to stuff that they don’t want anybody to know. Which aggregates, in fact, not just to stuff they don’t want people to know, but to predictive models about them that they would be very creeped out could exist at all. The simplicity with which you can de-anonymize theoretically anonymized data; the ease with which you can assemble, from the multiple sources available to you through third- and fourth-party transactions, data maps of people’s lives; the ease with which, constraining the data available to you with the few things you know about people, you can quickly infer immense amounts more.
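The de-anonymization point admits a very short sketch: strip the names from a dataset, keep a few innocuous attributes, and a join against any public roll carrying the same attributes puts the names right back. Every name, record, and field below is invented for illustration.

```python
# Sketch: re-identifying "anonymized" records by joining on quasi-identifiers
# (ZIP code, birth year, gender). All data here is invented.

# A public dataset with names attached (think: a voter roll).
public = [
    {"name": "Alice", "zip": "10012", "birth_year": 1984, "gender": "F"},
    {"name": "Bob",   "zip": "10012", "birth_year": 1979, "gender": "M"},
    {"name": "Carol", "zip": "11201", "birth_year": 1984, "gender": "F"},
]

# An "anonymized" dataset: names stripped, a sensitive field kept.
anonymized = [
    {"zip": "10012", "birth_year": 1984, "gender": "F", "diagnosis": "X"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def reidentify(record, roll):
    """Return the names in `roll` whose quasi-identifiers match `record`."""
    key = tuple(record[q] for q in QUASI_IDENTIFIERS)
    return [p["name"] for p in roll
            if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]

for rec in anonymized:
    matches = reidentify(rec, public)
    if len(matches) == 1:  # a unique match is a re-identification
        print(f"{matches[0]} -> {rec['diagnosis']}")  # prints "Alice -> X"
```

No secret was leaked anywhere in this sketch; the damage comes entirely from the aggregation of attributes nobody thought of as secret, which is exactly the “data dandruff” problem.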

My friend and colleague Bradley Kuhn, who works at the Software Freedom Law Center, is one of those archaic human beings who believes that a Social Security number is a private thing. And he goes to great lengths to make sure that his Social Security number is not disclosed, which is his right under our law, oddly enough. Though try and get health insurance, or get a safe deposit box, or, in fact, operate the business at all. We bend over backwards sometimes in the operation of our business because Bradley’s Social Security number is a secret. I said to him one day, “You know, it’s over now, because Google knows your Social Security number”. He said, “No they don’t, I never told it to anybody”. I said, “Yeah, but they know the Social Security number of everybody else born in Baltimore that year. Yours is the other one”.

And as you know, that’s true. The data that we infer is the data in the holes between the data we already know if we know enough things.
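That inference-by-elimination works mechanically, and the mechanism fits in a few lines. The numbers and names below are invented; real Social Security numbers are not assigned this simply, so treat this purely as a sketch of the logic.

```python
# Sketch of inferring a "secret" from the holes in data you already hold:
# if a block of identifiers went to a known cohort, and you know everyone's
# number but one, the last one is no secret. All values are invented.

issued_block = {1001, 1002, 1003, 1004}          # numbers assigned to the cohort
known = {"Ann": 1001, "Ben": 1003, "Cal": 1004}  # everyone except one person

def infer_missing(block, known_numbers):
    """The 'secret' is whatever remains once the known values are removed."""
    remaining = block - set(known_numbers)
    return remaining.pop() if len(remaining) == 1 else None

print(infer_missing(issued_block, known.values()))  # -> 1002
```

Nothing about the missing person was ever disclosed; the number falls out of set subtraction over other people’s data, which is why keeping one’s own record private is no defense.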

So, where we live has become a place in which it would be very unwise to say about anything that it isn’t known. If you are pretty widely known in the Net (and all of us, for one reason or another, are pretty widely known in the Net), we want to live there. It is our neighborhood. We just don’t want to live with a video camera on every tree, and a mic on every bush, and the data miner beneath our feet everywhere we walk; and the Net is like that now. I’m not objecting to the presence of AOL newbies in Usenet news. This is not an aesthetic judgment from 1995 about how the neighborhood is now full of people who don’t share our ethnocentric techno-geekery. I’m not lamenting progress of a sort of democratizing kind. On the contrary, I’m lamenting progress of a totalizing kind. I’m lamenting progress hostile to human freedom. We all know that it’s hostile to human freedom. We all understand its despotic possibilities, because the dystopias of which it is fertile were the stuff of the science fiction that we read when we were children. The Cold War was fertile in the fantastic invention of where we live now, and it’s hard for us to accept that, but it’s true. Fortunately, of course, it’s not owned by the government. Well, it is. It’s fortunate. It’s true. It’s fortunate that it’s owned by people you can bribe to get the thing, no matter who you are. If you’re the government, you have easy ways of doing it. You fill out a subpoena blank and you mail it.

I spent two hours yesterday with a law school class explaining in detail why the 4th Amendment doesn’t exist anymore, because that’s Thursday night, and who would do that on a Friday night? But the 4th Amendment doesn’t exist anymore. I’ll put the audio on the Net, and the FBI and you can listen to it anytime you want.

We have to fess up: if we’re the people who care about freedom, it’s late in the game, and we’re behind. We did a lot of good stuff, and we have a lot of tools lying around that we built over the last 25 years. I helped people build those tools. I helped people keep those tools safe; I helped people prevent the monopoly from putting all those tools in its bag and walking off with them; and I’m glad the tools are around. But we do have to admit that we have not used them to protect freedom, because freedom is decaying, and that’s what David meant in his very kind introduction.

In fact, the people who are investing in the new enterprises of unfreedom are also the people you will hear, if you hang out in Silicon Valley these days, saying that open source has become irrelevant. What’s their logic? Their logic is that software as a service is becoming the way of the world. Since nobody ever gets any software anymore, the licenses that say “if you give people software you have to give them freedom” don’t matter, because you’re not giving anybody software. You’re only giving them services.

Well, that’s right. Open source doesn’t matter anymore. Free software matters a lot because of course, free software is open source software with freedom. Stallman was right. It’s the freedom that matters. The rest of it is just source code. Freedom still matters and what we need to do is to make free software matter to the problem that we have which is unfree services delivered in unfree ways really beginning to deteriorate the structure of human freedom.

Like a lot of unfreedom, the real underlying social process that forces this unfreedom along is nothing more than perceived convenience.

All sorts of freedom goes over perceived convenience. You know this. You’ve stopped paying for things with cash. You use a card that you can wave at an RFID reader.

Convenience is said to dictate that you need free web hosting and PHP doodads in return for spying all the time, because web servers are so terrible to run. Who could run a web server of his own and keep the logs? It would be brutal. Well, it would if it were IIS. It was self-fulfilling; it was intended to be. It was designed to say, “You’re a client, I’m a server. I invented Windows 7. It was my idea. I’ll keep the logs, thank you very much.” That was the industry. We built another industry. It’s in here. But it’s not in… well, yeah, it is kind of in here. So where isn’t it? Well, it’s not in the personal web server I don’t have, that would prevent me from falling… well, why don’t we do something about that?

What do we need? We need a really good webserver you can put in your pocket and plug in any place. In other words, it shouldn’t be any larger than the charger for your cell phone and you should be able to plug it in to any power jack in the world and any wire near it or sync it up to any wifi router that happens to be in its neighborhood. It should have a couple of USB ports that attach it to things. It should know how to bring itself up. It should know how to start its web server, how to collect all your stuff out of the social networking places where you’ve got it. It should know how to send an encrypted backup of everything to your friends’ servers. It should know how to microblog. It should know how to make some noise that’s like tweet but not going to infringe anybody’s trademark. In other words, it should know how to be you …oh excuse me I need to use a dangerous word – avatar – in a free net that works for you and keeps the logs. You can always tell what’s happening in your server and if anybody wants to know what’s happening in your server they can get a search warrant.
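The core of that spec, a personal server that serves your pages and keeps its own logs on your own disk, is genuinely small. The sketch below is a minimal illustration using Python’s standard library; the port, log path, and page content are arbitrary choices, not anything the talk prescribes.

```python
# Minimal sketch of the "web server in your pocket": a tiny personal server
# that writes its own access log to local disk, so the logs stay with you.
from http.server import HTTPServer, BaseHTTPRequestHandler
import time

LOGFILE = "access.log"  # your logs, on your hardware

class PersonalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Keep the log yourself instead of handing it to a platform.
        with open(LOGFILE, "a") as f:
            f.write(f"{time.time():.0f} {self.client_address[0]} {self.path}\n")
        body = b"<html><body>My server, my logs.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence the default stderr logging; we keep our own log file

def run(port=8000):
    """Serve until interrupted; call run() to start listening."""
    HTTPServer(("", port), PersonalHandler).serve_forever()
```

Calling `run()` starts the server; anyone who wants those logs needs a search warrant for the machine in your pocket, which is the whole point.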

And if you feel like moving your server to Oceania or Sealand or New Zealand or the North Pole, well, buy a plane ticket and put it in your pocket. Take it there. Leave it behind. Now, there’s a little more we need to do. It’s all trivial. We need some dynamic DNS and all stuff we’ve already invented. It’s all there; nobody needs anything special. Do we have the server you can put in your pocket? Indeed, we do. Off-the-shelf hardware now. Beautiful little wall warts made with ARM chips. Exactly what I specced for you. Plug them in, wire them up. How’s the software stack in there? Gee, I don’t know, it’s any software stack you want to put in there.
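The dynamic DNS piece mentioned above is also small: the box periodically checks what public address it has and pushes an update only when it changes, so your hostname follows the box wherever you plug it in. In this sketch the probe and update steps are injected callables; the echo-service URL is one real public option, but the update protocol of any actual DNS provider is an assumption this sketch does not model.

```python
# Sketch of the dynamic-DNS client for the pocket server: notice when the
# public IP changed, push an update, otherwise do nothing. The update
# mechanism of a real provider (e.g. a dyndns2-style HTTP GET) is assumed,
# not implemented.
import urllib.request

def probe_public_ip(echo_url="https://api.ipify.org"):
    """Ask an external echo service what our public address looks like."""
    with urllib.request.urlopen(echo_url) as resp:
        return resp.read().decode().strip()

def sync_dns(hostname, last_ip, get_ip, push_update):
    """Update DNS only when the address changed; return the current IP."""
    ip = get_ip()
    if ip != last_ip:
        push_update(hostname, ip)  # provider-specific call goes here
    return ip
```

In practice you would run `sync_dns` in a loop with a sleep, passing `probe_public_ip` as `get_ip` and your provider’s update call as `push_update`; injecting both keeps the logic testable without a network.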

In fact, they’ll send it to you with somebody’s top-of-the-charts current distro in it; you just have to name which one you want. Which one do you want? Well, you ought to want the Debian GNU/Linux social networking stack, delivered to you free (free as in freedom, I mean), which does all the things I named: brings itself up, runs its little Apache or lighttpd or its tiny httpd, does all the things we need it to do, syncs up, gets your social network data from the places, slurps it down, does your backups, searches, finds your friends, registers your dynamic DNS. All is trivial. All this is stuff we’ve got. We need to put this together. I’m not talking about a thing that’s hard for us. We need to make a free software distribution device. How many of those do we do?

We need to give a bunch to all our friends and we need to say, “Here, fool around with this and make it better.” We need to do the one thing we are really, really, really good at, because all the rest of it is done, in the bag, cheap, ready. Those wall wart servers are $99 now, going to $79; when there are five million of them, they’ll be $29.99.

Then we go to people and we say, “$29.99 once, for a lifetime: great social networking, updates automatically, software so strong you couldn’t knock it over if you kicked it, used in hundreds of millions of servers all over the planet doing a wonderful job. And you know what? You get ‘no spying’ for free.” They want to know what’s going on in there? Let them get a search warrant for your home, your castle, the place where the 4th Amendment still sort of exists every other Tuesday or Thursday when the Supreme Court isn’t in session. We can do that. We can do that. That requires us to do only the stuff we’re really, really good at. The rest of it we get for free. Mr. Zuckerberg? Not so much.

Because of course, when there is a competitor to “all spying all the time whether you like it or not”, the competition is going to do real well. Don’t expect Google to be the competitor. That’s our platform. What we need is to make a thing that’s so greasy there will never be a social network platform again. Can we do it? Yeah, absolutely. In fact, if you don’t have a date on Friday night, let’s just have a hackfest and get it done. It’s well within our reach.

Are we going to do it before the Facebook IPO? Or are we going to wait till after? Really? Honestly? Seriously. The problem that the law has, very often, in the world where we live and practice and work, is a problem that technology can solve. And the problem that technology can solve is the place where we go to the law. That’s the free software movement: there’s software hacking over here, and there’s legal hacking over there, and you put them both together and the whole is bigger than the sum of the parts. So, it’s not like we have to live in the catastrophe. We don’t have to live in the catastrophe. It’s not like what we have to do to begin to reverse the catastrophe is hard for us. We need to re-architect services in the Net. We need to redistribute services back towards the edge. We need to de-virtualize the servers where your life is stored, and we need to restore some autonomy to you as the owner of the server.

The measures for taking those steps are technical. As usual, the box builders are ahead of us. The hardware isn’t the constraint. As usual, nowadays, the software isn’t really that deep a constraint either because we’ve made so much wonderful software which is in fact being used by all the guys on the bad architecture. They don’t want to do without our stuff. The bad architecture is enabled, powered by us. The re-architecture is too. And we have our usual magic benefit. If we had one copy of what I’m talking about, we’d have all the copies we need. We have no manufacturing or transport or logistics constraint. If we do the job, it’s done. We scale.

This is a technical challenge for a social reason. It’s a frontier for technical people to explore. There is enormous social payoff for exploring it.

The payoff is plain, because the harm being ameliorated is current, and people you know are suffering from it. Everything we know about why we make free software says that’s when we come into our own. It’s a technical challenge incrementally attainable by extension from where we already are, and it makes the lives of the people around us, and whom we care about, immediately better. In 25 years of doing this work, I have never seen us fail to rise to a challenge that could be defined in those terms. So I don’t think we’re going to fail this one, either.

Mr. Zuckerberg richly deserves bankruptcy.

Let’s give it to him. For Free.

And I promise, and you should promise too, not to spy on the bankruptcy proceeding. It’s not any of our business. It’s private.

This is actually a potentially happy story. It is a potentially happy story, and if we do it, then we will have quelled one more rumor about our irrelevance, and everybody in the Valley will have to go find another buzzword, and all the guys who think that Sand Hill Road is going to rise into new power and glory by spying on everybody and monetizing it will have to find another line of work too, all of which is purely on the side of the angels. Purely on the side of the angels.

We will not be rid of all our problems by any means, but just moving the logs from them to you is the single biggest step that we can take in resolving a whole range of social problems: problems I feel badly about when I look at what remains of my American Constitution; problems I would feel badly about if I were watching the failure of European data protection law from inside instead of outside; and problems I would feel kind of hopeful about if I were, oh say, a friend of mine in China. Because, you know, of course we really ought to put a VPN in that wall wart.

And probably we ought to put a Tor router in there.

And of course, we’ve got BitTorrent, and by the time you get done with all of that, we have a freedom box. We have a box that not merely climbs us out of the hole we’re in; we have a box that actually puts a ladder up for people who are deeper in the hole than we are, which is another thing we love to do.

I do believe the US State Department will go slanging away at the Chinese Communist Party for a year or two about internet freedom, and I believe the Chinese Communist Party is going to go slanging back, and what they’re going to say is, “You think you’ve got real good privacy and autonomy, with the internet voyeur in your neighborhood?” And every time they do that now, as they have been doing in the last two weeks, I would say “ouch” if I were Hillary Clinton and I knew anything about it, because we don’t. Because we don’t. It’s true. We have a capitalist kind, and they have a centralist vanguard-of-the-party sort of Marxist kind, or maybe Marxist, or maybe just totalitarian kind, but we’re not going to win the freedom-of-the-net discussion carrying Facebook on our backs. We’re not.

But you strew those wall wart servers around pretty thickly in American society and start taking back the logs, and you want to know who I talked to on a Friday night? Get a search warrant, and stop reading my email. By the way, there’s my GPG key in there, and now we really are encrypting for a change, and so on and so on and so on, and it begins to look like something we might really want to go on a national crusade about. We really are making freedom here, for other people too. For people who live in places where the web don’t work.

So there’s not a challenge we don’t want to rise to. It’s one we want to rise to plenty. In fact, we’re in a happy state in which all the benefits we can get are way bigger than the technical intricacy of doing what needs to be done, which isn’t much.

That’s where we came from. We came from: our technology was more free than we understood, and we gave away a bunch of the freedom before we really knew it was gone. We came from: unfree software had bad social consequences further down the road than even the freedom agitators knew. We came from: unfreedom’s metaphors tend to produce bad technology.

In other words, we came from the stuff that our movement was designed to confront from the beginning, but we came from there. And we’re still living with the consequences of not doing it quite right the first time, though we caught up, thanks to Richard Stallman, and we are moving on.

Where we live now is no place we’re going to have to see our grandchildren live. Where we live now is no place we would like to conduct guided tours of. I used to say to my students how many video cameras are there between where you live and the Law school? Count them. I now say to my students how many video cameras are there between the front door to the law school and this classroom? Count them.

I now say to my students, “Can you find a place where there are no video cameras?” Now, what happened in that process was that we created immense cognitive auxiliaries for the state – enormous engines of listening. You know how it is if you live in an American university, thanks to the movie and music companies, which keep reminding you that you are living in the midst of an enormous surveillance network. We’re surrounded by stuff listening to and watching us. We’re surrounded by mineable data.

Not all of that is going to go away because we took Facebook and split it up and carried away our little shards of it. It’s not going to go away because we won’t take free web hosting with spying inside anymore. We’ll have other work to do. And some of that work is lawyers’ work. I will admit that. Some of that work is law drafting and litigating and making trouble and doing lawyer stuff. That’s fine. I’m ready.

My friends and I will do the lawyers’ part. It would be way simpler to do the lawyers’ work if we were living in a society which had come to understand its privacy better. It would be way simpler to do the lawyers’ work if young people realized that, when they grow up and start voting, or start voting now that they’re grown up, this is an issue: that they need to get the rest of it done the way we fixed the big stuff when we were kids. We’ll have a much easier time with the enormous confusions of international interlocking of regimes when we have deteriorated the immense force of American capitalism forcing us to be less free and more surveilled, for other people’s profit, all the time. It isn’t that this gets all the problems solved, but the easy work is very rich and rewarding right now.

The problems are really bad. Getting the easy ones out will improve the politics for solving the hard ones, and it’s right up our alley. The solution is made of our parts. We’ve got to do it. That’s my message. It’s Friday night. Some people don’t want to go right back to coding, I’m sure. We could put it off until Tuesday, but how long do you really want to wait? You know, every day that goes by there’s more data we’ll never get back. Every day that goes by there are more inferences we can’t undo. Every day that goes by we pile up more stuff in the hands of the people who got too much. So it’s not like we should say, “One of these days I’ll get around to that”. It’s not like we should say, “I think I’d rather sort of spend my time browsing news about the iPad”.

It’s way more urgent than that.

It’s that we haven’t given ourselves the direction in which to go so let’s give ourselves the direction in which to go. The direction in which to go is freedom using free software to make social justice.

But, you know this. That’s the problem with talking on a Friday night. You talk for an hour and all you tell people is what they know already.

So thanks a lot. I’m happy to take your questions.