The Death of Privacy?

A. Michael Froomkin (1)

"You have zero privacy. Get over it."

--Sun Microsystems, Inc., CEO Scott McNealy (2)

Introduction

Information, as we all know, is power. Both collecting and collating personal information are means of acquiring power, usually at the expense of the data subject. Whether this is desirable depends upon who the viewer and subject are and who is weighing the balance. People have long believed, for example, that the citizen's ability to monitor the state tends to promote honest government, that "[s]unlight is . . . the best of disinfectants." (3) One need look no further than the First Amendment of the United States Constitution to be reminded that protecting the acquisition and dissemination of information is an essential means of empowering citizens in a democracy. Conversely, at least since George Orwell's 1984, if not Bentham's Panopticon, the image of the all-seeing eye, the Argus state, has been synonymous with the power to exercise repression. Today, the all-seeing eye need not necessarily belong to the government, as many in the private sector find it valuable to conduct various forms of surveillance or to "mine" data collected by others. For example, employers continually seek new ways to monitor employees for efficiency and honesty; firms trawl databases for preference information in the search for new customers. Even an infrequently exercised capability to collect information confers power on the potential observer at the expense of the visible: Knowing you may be watched affects behavior. Modern social science confirms our intuition that people act differently when they know they are on Candid Camera--or Big Brother Cam. (4)

In this article, I will use "informational privacy" as shorthand for the ability to control the acquisition or release of information about oneself. (5) I will argue that both the state and the private sector now enjoy unprecedented abilities to collect personal data, and that technological developments suggest that costs of data collection and surveillance will decrease, while the quantity and quality of data will increase. I will also argue that, when possible, the law should facilitate informational privacy because the most effective way of controlling information about oneself is not to share it in the first place.

Most of this article focuses on issues relating to data collection and not data collation. Much of the best work on privacy, and the most comprehensive legislation, (6) while not ignoring issues of data collection nonetheless focuses on issues relating to the storage and reuse of data. Privacy-enhancing legal and policy analysis often proceeds on the reasonable theory that because the most serious privacy-related consequences of data acquisition happen after the fact, and require a database, the use and abuse of databases is the appropriate focus for regulation. This article concentrates on the logically prior issue of data collection. Issues of data use and re-use cannot be avoided, however, because one of the ways to reduce data collection is to impose limits on the use of improperly collected data. Conversely, if limits on initial data collection are constitutional, then it is more likely that efforts to prohibit the retransmission or republishing of illicitly collected data would be held to be constitutional as well.

A data subject has significantly less control over personal data once information is in a database. The easiest way to control databases, therefore, is to keep information to oneself: If information never gets collected in the first place, database issues need never arise. It may be that "[t]hree can keep a secret--if two of them are dead," (7) but in the world of the living we must find kinder, gentler solutions. Although privacy-enhancing technologies such as encryption provide a limited ability to protect some data and communications from prying eyes and ears, it seems obvious that total secrecy of this sort is rarely a practical possibility today unless one lives alone in a cabin in the woods. One must be photographed and fill out a questionnaire to get a driver's license, show ID to get a job. (8) Our homes are permeable to sense-enhanced snooping; our medical and financial data is strewn around the datasphere; our communications are easily monitored; our lives are an open book to a mildly determined detective. Personal lives are becoming increasingly transparent to governments, interested corporations, and even to one another--as demonstrated by notorious incidents of phone eavesdropping or taping involving diverse individuals such as Britain's Prince Charles, House Speaker Newt Gingrich, and White House Intern Monica Lewinsky. (9) This general trend is driven by technological innovation and by economic and social forces creating a demand for privacy-destroying technologies. When solitude is not an option, personal data will be disclosed 'voluntarily' for transactions or emitted by means beyond our control. What remains to be determined is which legal rules should govern the collection as well as the use of this information.

In light of the rapid growth of privacy-destroying technologies, it is increasingly unclear whether informational privacy can be protected at a bearable cost, or whether we are approaching an era of zero informational privacy, a world of what Roger Clarke calls "dataveillance." (10) Part I of this article describes a number of illustrative technological developments that facilitate the collection of personal data. Collectively these and other developments provide the means for the most overwhelming assault on informational privacy in the recorded history of humankind. That surveillance technologies threaten privacy may not be breaking news, but the extent to which these technologies will soon allow watchers to permeate modern life still has the power to shock. Nor is it news that the potential effect of citizen profiling is vastly increased by the power of information processing and the linking of distributed databases. We are still in the early days of data mining, consumer profiling, and DNA databasing, to name only a few. The cumulative and accelerating effect of these developments, however, has the potential to transform modern life in all industrialized countries. Unless something happens to counter these developments, it seems likely that soon all but the most radical privacy freaks may live in the informational equivalent of a goldfish bowl. (11)

If the pace at which privacy-destroying technologies are being devised and deployed is accelerating, the basic phenomenon is nevertheless old enough to already have spawned a number of laws and proposed legal or social solutions designed to protect or enhance privacy in various ways. Part II of this article examines several of these proposed privacy enhancing policies in light of the technologies discussed in Part I. It suggests that some will be ineffective, that others will have undesirable or unconstitutional effects, and that even the best will protect only a narrow range of privacy on their own.

The relative weakness of current privacy-enhancing strategies sets the stage for the conclusion of the article, which challenges the latest entry to the privacy debate--the counsel of despair epitomized by Scott McNealy's suggestion that the battle for privacy was lost almost before it was waged. Although there is a disturbingly strong case supporting this view, a case made trenchantly by David Brin's The Transparent Society, (12) I conclude by suggesting that all is not yet lost. While there may be no single tactic that suffices to preserve the status quo, much less regain lost privacy, a smorgasbord of creative technical and legal approaches could make a meaningful stand against what otherwise seems inevitable.

A focus on informational privacy may seem somewhat crabbed and limited. Privacy, after all, encompasses much more than just control over a data trail, or even a set of data. It encompasses ideas of bodily and social autonomy, of self-determination, and of the ability to create zones of intimacy and inclusion that define and shape our relationships with each other. Control over personal information is a key aspect of some of these ideas of privacy, and is alien to none of them. On the other hand, given that we live in an age of ubiquitous social security numbers, (13) not to mention televised public talk-show confessionals and other forms of media-sanctioned exhibitionism and voyeurism, (14) it may seem reactionary to worry about informational privacy. It also may be that mass privacy is a recent invention, rarely experienced before the nineteenth century save in the hermitage or on the frontier. (15) Perhaps privacy is a luxury good by world standards, and right-thinking people should concentrate their energies on more pressing matters, such as war, famine, or pestilence. And perhaps it really is better to be watched, that the benefits of mass surveillance and profiling outweigh the costs. Nevertheless, in this article I will assume that informational privacy is a good in itself, (16) and a value worth protecting, (17) although not at all costs. (18)

I. Privacy-Destroying Technologies

Privacy-destroying technologies can be divided into two categories: those that facilitate the acquisition of raw data and those that allow one to process and collate that data in interesting ways. Although the distinction is both real and useful, it can be overstated because improvements in information processing also make new forms of data collection possible. Cheap computation makes it easy to collect and process data on the keystrokes per minute of clerks, secretaries, and even executives. It also makes it possible to monitor their web browsing habits. (19) Cheap data storage and computation also make it possible to mine the flood of new data, creating new information by the clever organization of existing data.

Another useful taxonomy would organize privacy-destroying technologies by their social context. One could focus on the characteristics of individuals about whom data is being gathered (e.g., citizen, employee, patient, driver, consumer). Or, one could focus instead on the different types of observers (e.g., intelligence agencies, law enforcement, tax authorities, insurance companies, mall security, e-commerce sites, concerned parents, crazed fans, ex-husbands, nosy neighbors). At the most basic level, initial observers can be broadly categorized as either governmental or private, although here too the importance of the distinction can be overstated, because private parties often have access to government databases and governments frequently purchase privately collected data. There are some types of data collection that only the government can undertake, for example, the capture of information on legally mandated forms such as the census, driver's licenses, or tax returns. But even these examples illustrate the danger of being too categorical: some states make driver's license data and even photographs available for sale or search, and many tax returns are filed by commercial preparers (or web-based forms), giving a third party access to the data.

Databases multiply the effects of sensors. For example, cameras have a far less intrusive effect on privacy if their output is only monitored in real time by operators watching for the commission of crimes. The longer the tapes are archived, the greater their potential effect. And, the more that the tapes can be indexed according to who and what they show rather than just where and when they were made, the more easily the images can be searched or integrated into personal profiles. Equally important, databases make it possible to create new information by combining existing data in new and interesting ways. Once created or collected, data is easily shared and hard to eradicate; the data genie does not go willingly, if ever, back into the bottle.

Reams of data organized into either centralized or distributed databases can have substantial consequences beyond the simple loss of privacy caused by the initial data collection, especially when subject to advanced correlative techniques such as data mining. (20) Among the possible harmful effects are various forms of discrimination, ranging from price discrimination to more invidious varieties. (21) Data accumulation enables the construction of personal data profiles. (22) When the data are available to others, they can construct personal profiles for targeted marketing, (23) and even, in rare cases, blackmail. (24) For some, just knowing that their activities are being recorded may have a chilling effect on conduct, (25) speech, and reading. (26) Customers may find it discomfiting to discover that a salesperson knows their income or indebtedness, or other personal data.
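
To make the mechanics concrete, consider the following illustrative sketch (in Python; the table names, fields, and values are all invented). It shows how trivially records held by unrelated collectors can be joined on a shared identifier, such as a social security number, into a composite profile that none of the original collectors possessed:

```python
# Illustrative sketch only: invented tables and fields, showing how a
# shared identifier (here an SSN) lets unrelated databases be joined
# into a single personal profile.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dmv    (ssn TEXT, name TEXT, address TEXT);
CREATE TABLE grocer (ssn TEXT, purchase TEXT, date TEXT);
CREATE TABLE lender (ssn TEXT, balance REAL);
INSERT INTO dmv    VALUES ('123-45-6789', 'J. Doe', '12 Elm St');
INSERT INTO grocer VALUES ('123-45-6789', 'cigarettes', '1999-06-01');
INSERT INTO lender VALUES ('123-45-6789', 8200.00);
""")

# One join turns three separate data collections into a profile that
# none of the original collectors held on its own.
profile = db.execute("""
    SELECT dmv.name, dmv.address, grocer.purchase, lender.balance
    FROM dmv JOIN grocer USING (ssn) JOIN lender USING (ssn)
""").fetchone()
print(profile)   # ('J. Doe', '12 Elm St', 'cigarettes', 8200.0)
```

The point of the illustration is that the linking step is computationally trivial; the only real barrier is access to the data.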

When the government has access to the data, it not only gains powerful investigative tools allowing it to plot the movements, actions, and financial activities of suspects, (27) but it also gains new techniques for detecting crimes and identifying suspects. (28) Ultimately, if data is collected on everyone's location and on all transactions, it should be possible to achieve perfect law enforcement, a world in which no transgression goes undetected and, perhaps, unpunished. (29) At that point, the assumptions of imperfect detection, the need for deterrence, and the reliance on police and prosecutorial discretion on which our legal system is based will come under severe strain.

A further danger is that the government or others will attempt to use the ability to construct personal profiles in order to predict dangerous or antisocial activities before they happen. People whose profiles meet the criteria will be flagged as dangerous and perhaps subjected to increased surveillance, searches, or discrimination. Profiling is currently used to identify airline passengers who the profilers think present an above-average risk of being terrorists. (30) In the wake of the tragedy at Columbine, schools are turning to profiling to assess children for potential violence. (31) In a world where such profiling is common, who will dare to act in a way that will cause red flags to fly?

In a thorough survey, Roger Clarke suggested that the collection and collation of large amounts of personal data create many dangers at both the individual and societal levels, including:

Dangers of Personal Dataveillance
lack of subject knowledge of data flows
blacklisting
Dangers of Mass Dataveillance
To the Individual
witch hunts
ex-ante discrimination and guilt prediction
selective advertising
inversion of the onus of proof
covert operations
unknown accusations and accusers
denial of due process
To Society
prevailing climate of suspicion
adversarial relationships
focus of law enforcement on easily detectable and provable offences
inequitable application of the law
stultification of originality
increased tendency to opt out of the official level of society
weakening of society's moral fibre and cohesion
repressive potential for a totalitarian government (32)

There is little reason to believe that the nosiness of neighbors, employers, or governments has changed recently. What is changing very rapidly, however, is the cost and variety of tools available to acquire personal data. The law has done such a poor job of keeping pace with these developments that some people have begun to suggest that privacy is becoming impossible.

A. Routinized Low-Tech Data Collection

Large quantities of personal data are routinely collected in the United States today without any high-tech equipment. Examples include the collection of personal data by the Federal Government for taxes and the census, data collected by states as a condition of issuing driver's licenses, and the vast amounts of data collected by the private sector in the course of selling products and services.

1. By the United States government.

The most comprehensive, legally mandated United States government data collections are the annual collection of personal and corporate tax data, and the decennial census. Both of these data collection activities are protected by unusually strict laws designed to prevent the release of personally identifiable data. (33) Other government data collection at the federal and state level is either formally optional, or aimed at subsets of the population. Some of these subsets, however, are very large. (34)

Anyone who takes a new job must be listed in the "new hires directory" designed to support the Federal Parent Locator Service. (35) This growing national database of workers enables courts to enforce court-ordered child support against working parents who are not making their support payments. Each state has its own database, which is coordinated by the Office of Child Support Enforcement within the Department of Health and Human Services. (36) Anyone receiving public assistance is likely to be in a state-maintained database of aid recipients. Federal, state, and local governments also collect data from a total of about fifteen million arrestees each year. (37) The government continues to collect (and publish) data about some convicts even after they have served their sentences. (38)

License applications are formally optional data collections that have wide application--licenses are optional, but if one wants a license, one must answer the required questions. Perhaps the most widespread data collection comes from driver's license applications, as most adults in the United States hold driver's licenses, at least outside the few major cities with efficient mass transportation networks. In addition to requesting personal data such as address, telephone number, and basic vital statistics, some states collect health-related information, and all require a (frequently digitized) photograph.

2. Transactional data.

Any personal transaction involving money, be it working, buying, selling, or investing, tends to create a data set relating to the transaction. Unless the payment is in cash, the data set usually includes some personal data about the individual(s) involved in the transaction.

Financial data collection is an interesting example of the private sector collecting data for mixed motives. A single firm, Acxiom, now holds personal and financial information about almost every United States, United Kingdom, and Australian consumer. (39) In many cases, banks and other financial service providers collect information about their clients because the data has commercial value. In other cases, they record data because the government requires them to make routine reports to assist law enforcement efforts. In effect, private banks often act as agents of state data collection efforts.

Until machines for tracking bills by their serial numbers become much more common than they are today, cash payment will remain relatively anonymous. In their quest to gather personal data about customers, merchants have turned to loyalty reward programs, such as frequent shopper cards and grocery club cards. Depending upon the sophistication of the card, and of the system of which it is a part, these loyalty programs can allow merchants to amass detailed information about their customers.

Large amounts of cash trigger reporting requirements, which in turn means that financial intermediaries must collect personal data from their customers. Anti-money laundering laws (and sometimes tax laws) require financial service providers to file reports on every suspicious transaction and every time a client deposits, withdraws, or transfers $10,000 or more. Some firms, often chosen because of their location in neighborhoods thought by law enforcement to be high drug trading zones, must report transactions involving as little as $750 in cash. (40)

Alternatives to cash, such as checks, debit cards, and credit cards, create a data trail that identifies the purchaser, the merchant, the amount of the sale, and sometimes the goods or services sold.

Whether replacing paper cash with electronic cash would make transactions more secure and anonymous or create a digital data trail linking every transaction to the parties involved depends entirely on how such an electronic cash system is designed. Both extremes are possible, as are intermediate designs in which, for example, the identity of the payer is not recorded (or even identifiable), but the payee is known to the bank that issued the electronic cash. (41) Because there is currently no standard for electronic cash and relatively little e-cash in circulation, anything remains possible.
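
One way to see that payer anonymity is a design choice rather than a technical impossibility is David Chaum's blind signature protocol, sketched below with toy RSA parameters (deliberately tiny and insecure; this illustrates the protocol only, not a deployable system). The bank signs a coin without ever seeing its serial number, so the coin it later redeems cannot be linked back to the withdrawal:

```python
# Toy sketch of a Chaum-style blind signature, the mechanism behind
# payer-anonymous electronic cash. Parameters are tiny and insecure.
import secrets
from math import gcd

p, q = 1000003, 1000033             # toy primes (real systems: 2048-bit keys)
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # bank's private signing exponent

coin = 123456789                    # serial number chosen by the customer

while True:                         # random blinding factor coprime to n
    r = secrets.randbelow(n)
    if r > 1 and gcd(r, n) == 1:
        break

blinded = (coin * pow(r, e, n)) % n    # customer blinds the coin
blind_sig = pow(blinded, d, n)         # bank signs; it never sees `coin`
sig = (blind_sig * pow(r, -1, n)) % n  # customer removes the blinding

# The unblinded signature is valid, yet the bank cannot connect the coin
# it later redeems to this particular withdrawal.
assert sig == pow(coin, d, n)
```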

Large quantities of medical data are generated and recorded during any sustained interaction with the United States health care system. In addition to being shared among various health care providers, the information is also shared with the entities that administer the system. (42) Under the "Administrative Simplification" provision of the Health Insurance Portability and Accountability Act of 1996 ("HIPAA"), (43) standards are being developed to facilitate the electronic transfer of health-related personal data. HIPAA requires that all health information be kept in electronic form and that each individual be given a unique health identifier to index the data.

Thus, even without high technology, substantial amounts of personal data are routinely collected about almost everyone in the country. The introduction of new technologies, however, promises to raise the quantity and nature of the information that could be collected to new, somewhat dizzying, heights.

B. Ubiquitous Surveillance

Unless social, legal, or technical forces intervene, it is conceivable that there will be no place on earth where an ordinary person will be able to avoid surveillance. In this possible future, public places will be watched by terrestrial cameras and even by satellites. Facial and voice recognition software, cell phone position monitoring, smart transport, and other science-fiction-like developments will together provide full and perhaps real time information on everyone's location. Homes and bodies will be subject to sense-enhanced viewing. All communications, save perhaps some encrypted messages, will be scannable and sortable. Copyright protection "snitchware" (44) and Internet-based user tracking will generate full dossiers of reading and shopping habits. The move to web-based commerce, combined with the fight against money laundering and tax evasion, will make it possible to assemble a complete economic profile of every consumer. All documents, whether electronic, photocopied, or (perhaps) even privately printed, will have invisible markings making it possible to trace the author. Workplaces will not only be observed by camera, but also anything involving computer use will be subject to detailed monitoring, analyzed for both efficiency and inappropriate use. As the cost of storage continues to drop, enormous databases will be created, or disparate distributed databases linked, allowing data to be cross-referenced in increasingly sophisticated ways.

In this very possible future, indeed perhaps in our present, (45) there may be nowhere to hide and little that can stay hidden.

1. Public spaces.

Moving about in public is not truly anonymous: Someone you know may recognize you, and anyone can write down the license plate number of your car. Nevertheless, at least in large cities, one enjoys the illusion, and to a large extent the reality, of being able to move about with anonymity. That freedom is soon to be a thing of the past, as the "privacy commons" of public spaces becomes subject to the enclosure of privacy-destroying technology.

Fear of crime, and the rapidly declining cost of hardware, bandwidth, and storage, are combining to foster the rapid spread of technology for routinely monitoring public spaces and identifying individuals. Monitoring technologies include cameras, facial recognition software, and various types of vehicle identification systems. Related technologies, some of which have the effect of allowing real-time monitoring and tracking of individuals, include cell-phone location technology and various types of biometric identifiers.

a. Cameras.

Perhaps the most visible way in which spaces are monitored is the increasingly ubiquitous deployment of Closed Circuit Television ("CCTV") cameras and video recorders. Monitoring occurs in both public and private spaces. Generally, private spaces such as shopping malls are monitored by private security, while public spaces are monitored by law enforcement. Although public cameras are common in the United States, (46) they are even more widespread abroad. Perhaps because of fears of IRA terrorism, in addition to ordinary concerns about crime, the United Kingdom has pursued a particularly aggressive program of blanketing the nation with cameras. Cameras operated by law enforcement "are now a common feature of Britain's urban landscape. . . . The cameras have also moved beyond the city, into villages, schools, hospitals and even, in Bournemouth, covering a coastal path." (47) Cameras are also commonly used on the roads to enforce speed limits by taking photos of speeding vehicles' license plates. Polls suggest that a substantial majority of the British public approves of the cameras because the cameras make people feel safer. And indeed, the evidence suggests that cameras reduce, or at least displace, street crime and perhaps other antisocial behaviors. (48)

Cameras can also be placed in the office, school, and home. Visible cameras allow parents to keep an eye on junior at day care. Hidden cameras can be concealed in "clocks, radios, speakers, phones, and many other items" (49) to monitor caregivers and others in the home.

Cameras are also an example of how technologies can interact with each other to multiply privacy-destroying effects. All of the videotapes in the world are of little use unless there is someone to monitor them, a useful way to index the contents, or a mechanical aid to scan through them. And, pictures alone are only useful if there is a way to identify the people in them. Thus, for example, the London Police obtained excellent quality photographs of alleged participants in a violent demonstration in the City of London on June 18, 1999, but had to post the photographs on the Internet and ask viewers for help in identification--it worked in some cases. (50)

Human monitors are expensive and far from omniscient. (51) In the near future, however, human observers will become much less important as the task of analyzing still photos and videos will be mechanized. In some cases, such as schools, offices, or prisons, data subjects can be compelled to wear IDs with bar codes. (52) In public, however, more sophisticated technologies, such as facial recognition technology, are needed to identify people. Facial recognition technology is becoming better and more reliable every year. (53) Current systems are already capable of picking out people present in two different pictures, allowing police to identify repeat demonstrators even in large crowds assembled many weeks apart. The London police installed a system called "Mandrake" that matches CCTV photos taken from 144 cameras in shopping centers, parking lots, and railway stations against mug shots of known criminals. (54) The Israeli government plans to use facial recognition technology in the hope of creating "Basel," an automated border-crossing system. (55) The United States Pentagon is also investigating the possibility of using facial recognition systems to identify potential terrorists outside military facilities. (56)
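
Although implementations differ, systems of this general kind reduce each face to a numeric template and match templates by a similarity measure. The sketch below assumes the templates have already been extracted from the images (the vectors and threshold are invented for illustration) and shows only the matching step:

```python
# Illustrative matching step of a facial-recognition system. Real systems
# differ in how a numeric template is extracted from an image; here the
# templates are assumed given, and matching is nearest-neighbor comparison
# by cosine similarity against a watch list.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

watchlist = {                               # invented "mug shot" templates
    "suspect_a": np.array([0.9, 0.1, 0.3]),
    "suspect_b": np.array([0.2, 0.8, 0.5]),
}
cctv_face = np.array([0.88, 0.15, 0.33])    # template from a CCTV frame

name, score = max(((n, cosine(t, cctv_face)) for n, t in watchlist.items()),
                  key=lambda pair: pair[1])
if score > 0.95:                            # threshold set by the operator
    print(f"possible match: {name} (similarity {score:.3f})")
```

As the sketch suggests, once templates exist, matching one camera frame against millions of stored photographs is a routine computation.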

Once mated with, for example, a database full of driver's license photos, images from a series of ubiquitous cameras could be indexed by name and stored for an indefinite period of time. (Indeed, the United States Secret Service and other agencies have expressed interest in a national database of driver's license photos, and the government has spent at least $1.5 million helping a private corporation amass the data.) (57) Assuming the index and the videos are at least subject to subpoena (or perhaps the Freedom of Information Act) or even routinely placed on the Internet, alibis, mystery novels, and divorce proceedings will never be the same. One's face will become an index marker. Devices will be available that warn you every time an individual convicted of rape or child molestation comes within 100 feet. Stores will be able to send coupons to window shoppers who browsed but did not enter ("Hi! Next time, wouldn't you like to see what we have inside?"). Worse still, once you enter, the store will be able to determine which merchandise to show you and how much to charge. (58)

b. Cell phone monitoring.

Many people can be tracked today without the use of cameras or any other special surveillance equipment. Cellular phones must communicate their location to a base station in order to carry or receive calls. Therefore, whenever a cell phone is in use, or set to receive calls, it effectively identifies the location of its user every few minutes (within an area defined by the tolerance of the telephone). Recently, Maryland and Virginia officials unveiled a plan to use mobile phone tracking information to monitor traffic flows, although their plan does not involve capturing the identities of individual commuters, only their movements. (59)

The finer the cell phone zone, the more precisely a person's location can be identified. In the United States, a Federal Communications Commission ("FCC") regulation due to become effective in 2001 requires all United States cellular carriers to ensure that their telephones and networks will be able to pinpoint a caller's location to within 400 feet, about half a block, at least sixty-seven percent of the time. (60) The original objective of the rule was to allow emergency 911 calls to be traced, but the side-effect will be to turn cell phones into efficient tracking devices. Indeed, in a recent order, the FCC confirmed that wireline, cellular, and broadband Personal Communications Services (PCS) carriers would be required to disclose to law enforcement agents with wiretap authorization the location of a cell site at the beginning and termination of a mobile call. This was less than the FBI, the Justice Department, and the New York Police Department wanted; they had argued that they should be entitled to all location information available to the carrier. (61)

Governments are not the only ones who want to know where people are. Parents could use cell phone tracking to locate their children (or where they left the phone). Merchants are also interested in knowing who is in the neighborhood. A United Kingdom cell phone company is sending "electronic vouchers" to its six million subscribers, informing them of "special offers" from pubs in the area from which they are calling and helpfully supplying the nearby address. (62)

The privacy-destroying consequences of cell phone tracking increase dramatically when movement data are archived. It is one thing to allow police to use the data to track a fugitive in real time. It is another thing to archive the data, perhaps even in perpetuity, in case police or others wish to reconstruct someone's movements. In 1997, a Swiss newspaper revealed that a local phone company kept information recording the movement of one million subscribers, accurate to within a few hundred meters, and that the data was stored for more than six months. Swiss police described the data as a treasure trove. (63) However atypical the collection and retention of cellular phone subscribers' movements may be, the Swiss phone company's actions are clearly not unique. (64) The Swiss government, at least, values this locational data so highly that it will go to great lengths to preserve its access to it. Reports in 1998 suggested that the Swiss police felt threatened by the ability of Swiss cell phone users to buy prepaid phone cards that would allow certain types of "easy" telephones to be used anonymously. The Swiss government therefore proposed that citizens be required to register when acquiring "easy" cell phones, arguing that being able to identify who is using a cell phone was "essential" to national security. (65)
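
The Swiss episode also illustrates how little processing the reconstruction requires. Given archived records in any format resembling the invented one below, recovering an individual's movements is no more than a sort and a group-by:

```python
# Minimal sketch of why archived location data is a "treasure trove":
# given logged (subscriber, timestamp, cell-site) records in an invented
# format, reconstructing anyone's movements is a sort and a group-by.
from collections import defaultdict

log = [  # (subscriber, unix_time, cell_site) -- all values invented
    ("41-79-555-0101", 920000000, "Bern-Nord"),
    ("41-79-555-0101", 920003600, "Bern-Bahnhof"),
    ("41-79-555-0202", 920001800, "Zurich-West"),
    ("41-79-555-0101", 920007200, "Thun-Zentrum"),
]

tracks = defaultdict(list)
for subscriber, t, site in sorted(log, key=lambda rec: rec[1]):
    tracks[subscriber].append((t, site))

# Every subscriber's day, reconstructed after the fact:
for subscriber, path in tracks.items():
    print(subscriber, "->", [site for _, site in path])
```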

c. Vehicle monitoring.

Automobiles are a separate potential target of blanket surveillance. So-called "intelligent transportation systems" ("ITS") are being introduced in many urban areas to manage traffic flow, prevent speeding, and in some cases implement road pricing or centralized traffic control. (66) Ultimately, ITS promise continuous, real-time information as to the location of all moving vehicles. (67) Less complex systems already create travel records that can be stored and accessed later. (68) Some countries have also considered putting bar codes on license plates to ease vehicle identification. (69) While it is possible to design ITS in a manner that preserves the traveler's anonymity, (70) this has not been the norm.

2. Monitoring in the home and office.

Staying home may be no defense against monitoring and profiling. Existing technology can monitor every electronic communication, be it a telephone call, fax, or email. In the United States, at least, its use by either the government or private snoops is subject to substantial legal restrictions. As voiceprint, voice recognition, and content-analysis technology continue to improve, the task of sorting the ever-increasing volume of communications will be handled by increasingly sophisticated automated processing. (71) Meanwhile, a number of legal technologies are already being deployed to track and archive many uses of the web.

a. Workplace surveillance.

Outside of restrooms, and apart from the few laws banning wiretapping and the reading of email during transmission, (72) there are relatively few privacy protections applicable to every workplace in the nation. (73) Thus, employers may use hidden cameras, monitoring software, and other forms of surveillance more or less at will. (74) A 1993 survey, taken long before surveillance technology got cheap, showed that twenty million workers were subject to monitoring of their computer files, voice and electronic mail, and other networking communications. (75) Today, digital cameras are so small they fit on a one-inch by two-inch chip. Miniaturization lowers costs, which are expected to fall to only a few dollars per camera. (76) At these prices and sizes, ubiquitous and hidden monitoring is easily affordable. Software designed to capture keystrokes, either overtly or surreptitiously, is also readily available. For example, a program called "Investigator 2.0" costs under one hundred dollars and, once installed on the target PC, covertly monitors everything done on that machine and routinely emails detailed reports to the boss. (77) In addition, every technology described below that can be targeted at the home can also be targeted at the office.

b. Electronic communications monitoring.

According to a report prepared for the European Parliament, the United States and its allies maintain a massive worldwide spying apparatus capable of capturing all forms of electronic communications. (78) Known as "Echelon," the network can "access, intercept and process every important modern form of communications, with few exceptions." (79) The network is supported by a variety of processing technologies. Voiceprint recognition makes it possible to determine whether any of the participants in a call are on a watch list. If they are, the recording can be routed to a human being for review. (80) Similarly, text messages such as faxes and emails can be run through so-called dictionary programs that flag messages with interesting references or word patterns. (81) As artificial intelligence improves, these programs should become increasingly sophisticated. Meanwhile, advances in voice recognition (translating speech into text) promise to transform the telephone monitoring problem into another type of text problem. Further, once a conversation is converted into text, the National Security Agency ("NSA") is ready to gauge its importance with semantic forests: The NSA recently received a patent on a computerized procedure that produces a topical summary of a conversation using a "tree-word-list" to score the text. The patent describes a "pre-processing" phase that removes "stutter phrases" from a transcript. Then, a computer automatically assigns a label, or topic description, to the text. (82) The method promises to allow computerized sorting and retrieval of transcripts and other documents based upon their meaning, not just keywords. (83)
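
While the actual selection criteria are classified, a "dictionary program" of the kind described can be understood as scoring intercepted text against weighted watch lists and routing high-scoring messages to a human analyst. The following conceptual sketch (the word list, weights, and threshold are all invented) illustrates the idea:

```python
# Conceptual sketch of a "dictionary program": intercepted text is scored
# against a weighted watch list, and messages above a threshold are
# flagged for human review. Words, weights, and threshold are invented.
import re

WATCHLIST = {"shipment": 2, "cash": 1, "detonator": 5}
THRESHOLD = 3

def score(message: str) -> int:
    words = re.findall(r"[a-z']+", message.lower())
    return sum(WATCHLIST.get(w, 0) for w in words)

def flag_for_review(message: str) -> bool:
    return score(message) >= THRESHOLD

print(flag_for_review("the shipment of cash arrives tuesday"))  # True, score 3
print(flag_for_review("see you at dinner"))                     # False, score 0
```

Techniques like the "semantic forests" patent aim to replace such crude keyword weights with scores that reflect a document's topic and meaning.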

Not only have the communications intelligence agencies of the United States and its major allies "reaffirmed their requirements for access to all the world's communications," (84) but they have also taken a number of steps in the past two years to ensure they can get it. The NSA installed "sniffer" software to monitor and collect traffic at nine major Internet exchange points. (85) On May 7, 1999, the European Parliament passed the Lawful Interception of Communications Resolution on New Technologies, known as Enfopol. Although the Enfopol resolution is nonbinding, it serves as a declaration of the regulatory agenda of the European law enforcement community. Under the Enfopol proposal, Internet service providers and telephone companies in Europe would be required to provide law enforcement agencies with full-time, real-time access to all Internet transmissions. In addition, wireless communications providers would be required to provide geographical position information locating their cell phone customers. If the service provider offers encryption as part of the cell phone service, the provider would be required to ensure that it be able to decode the messages. (86)

Similarly, in the United States, the Communications Assistance for Law Enforcement Act of 1994 ("CALEA") requires that all new telecommunications networks be engineered to allow lawful wiretaps, although it does not address the issue of encryption. (87) The legislation also does not specify how many simultaneous wiretaps the network should be able to support, leaving this to the implementing regulations. In its initial assessment of "capacity requirements," the FBI proposed requiring carriers in major urban areas to install a maximum surveillance capacity of one percent of "engineered capacity"--in other words, to make it possible for a maximum of one out of every one hundred phone lines to be monitored simultaneously. (88) This proposal was so controversial that the FBI withdrew it and substituted a different capacity projection. (89) Although not free from all ambiguity, the revised rule appears to require very large capacity provisions. For example, the Center for Democracy and Technology calculated that under the formula proposed by the FBI, the system would have to be able to perform 136,000 simultaneous intercepts in the Los Angeles area alone. (90)

Domestic wiretapping without a court order is illegal in the United States, and only law enforcement and counter-intelligence agencies are allowed to apply for warrants. (91) State and federal courts authorized 1329 wiretaps in 1998, an increase of eighty percent over the 738 authorized a decade earlier. (92) These statistics are somewhat misleading, however, because a single wiretap order can affect hundreds of phone lines and up to 100,000 conversations. (93) The statistics are also difficult to reconcile with reports, attributed to the FBI, that on peak days up to one thousand different telephone lines are tapped in the Los Angeles area. (94) Although the number of wiretap orders is increasing, and the number of persons subject to legal eavesdropping is also increasing, these statistics are still small compared to the enormous volume of telecommunications. One reason why wiretaps remain relatively rare may be that judges have to approve them (although the number of wiretaps refused annually is reputed to be near zero); another, perhaps more important reason, is that they are expensive. The average cost of a wiretap is over $57,000, (95) with much of the expense attributable to paying the people who listen to the calls. However, as technology developed by intelligence agencies trickles down to domestic law enforcement, the marginal cost of telephone, fax, and email surveillance should decline considerably. Even if domestic law enforcement agencies remain scrupulously within the law, (96) the number of legal wiretaps is likely to increase rapidly once the cost constraint is reduced. (97)

c. Online tracking.

The worldwide web is justly celebrated as a cornucopia of information available to anyone with an Internet connection. The aspects of the web that make it such a powerful information medium (its unregulated nature, the flexibility of browsing software and the underlying protocols, and its role as the world's largest library, shopping mall, and chat room) all combine to make the web a fertile ground for harvesting personal data about Internet surfers. The more that people rely on the web for their reading and shopping, the more likely it becomes that data about their interests, preferences, and economic behavior will be captured and made part of personal profiles.

The baseline level of user monitoring is built into the most popular browsers and operates by default. Clicking on a link instructs a browser to automatically disclose the referring page to the new site. If a person has entered a name or email address in the browser's communication software, that too will be disclosed automatically. (98) These features cannot be turned off--they are part of the hypertext transfer protocol--although one can delete one's name and email address from the software. Web surfers can, however, employ privacy-enhancing tools such as the anonymizer to mask personal information. (99)
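
The disclosure occurs in the request headers the browser transmits. The sketch below reproduces, with hypothetical URLs and an invented address, the headers that accompany a click on a link; no action by the user produces them:

```python
# Sketch of the automatic disclosure described above. When a browser
# follows a link, headers like these accompany the request. The URLs and
# email address are hypothetical; this mimics a browser, it is not one.
import urllib.request

req = urllib.request.Request(
    "http://bookshop.example/aids-memoirs",
    headers={
        # Added by the browser itself, identifying the referring page:
        "Referer": "http://search.example/results?q=aids+treatment",
        # Sent if a name or email was entered in the browser's settings:
        "From": "jdoe@example.org",
    },
)
# urllib.request.urlopen(req)  # the new site now knows where you came from
```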

The default setting on the two most popular browsers (Internet Explorer and Netscape Navigator) allows web sites to set and read all the "cookies" they want. Cookies are a means by which a browser allows a web site to write data to a user's hard drive. (100) Often this works to the user's advantage--stored passwords eliminate the need to memorize or retype passphrases. Preference information allows a web designer to customize web pages to match individual users' tastes. But the process is usually invisible, and even when made visible, it is not transparent, since few cookies are user-readable.

Cookies present a number of potential privacy problems. Any user data disclosed to a site, such as an address or phone number, can be embedded in a cookie. That information can then be correlated with user ID numbers set by the site to create a profile. If taken to its limit, this would permit a particularly intrusive site to build a dossier on the user. An online newspaper might, for example, keep track of the articles a reader selects, allowing it over time to construct a picture of the reader's interests. Cookies can be shared between web sites, allowing savvy web designers to figure out what other sites their visitors patronize, and (to the extent the other sites store information in cookies) what they have revealed to those other sites. When pieced together, this "clicktrail" can quietly reveal both personal and commercial information about a user without her ever being aware of it. A frequent visitor to AIDS sites, a regular purchaser of anti-cancer medicine, or even someone who has a passion for Barry Manilow, all may have reasons for not wanting others to know of their interests or actions.
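
The mechanics require nothing more than standard cookie headers. In the following sketch (the site, paths, and identifiers are invented), a first visit assigns a unique ID via a Set-Cookie header, and every later request echoes it back, letting the site assemble a per-reader clicktrail:

```python
# Sketch of cookie-based profiling using only standard header mechanics.
# A first visit assigns a tracking ID; later requests echo it back, so
# the site can log a per-reader "clicktrail". All names are invented.
from http.cookies import SimpleCookie
import secrets

clicktrail = {}                        # uid -> list of pages requested

def handle_request(path, cookie_header=""):
    cookie = SimpleCookie(cookie_header)
    if "uid" not in cookie:
        cookie["uid"] = secrets.token_hex(8)   # new visitor: assign an ID
    uid = cookie["uid"].value
    clicktrail.setdefault(uid, []).append(path)
    return f"uid={uid}"                        # echoed as a Set-Cookie header

c = handle_request("/articles/aids-research")
handle_request("/articles/experimental-treatments", c)
print(clicktrail)   # one reader's interests, assembled invisibly
```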

Complicating matters, what appears as one page in a browser may actually be made up of multiple parts originating from multiple servers. Thus, it is possible to embed visible, or even invisible, content in a web page, which provides an occasion for setting a cookie. Doubleclick, an Internet advertising company, serves ads that appear on a large number of commercial and advertising-supported web pages. By checking for the Doubleclick cookie, the company can assign a unique identifier to each surfer and not only trace which Doubleclick-affiliated web sites they visit, but also when, how often, and what they choose to view while they are there. (101)

Cookies, however, are only the tip of the iceberg. Far more intrusive features can be integrated into browsers, into software downloaded from the Internet, (102) and into viruses or Trojan horses. (103) In the worst case, the software could be configured to record every keystroke.

The United States government suggested that Congress should authorize law enforcement and counter-intelligence agencies to remotely access and plant a back door in suspects' computers. (104) Using a back door could give the government access to every keystroke, allowing it to learn passwords and decrypt files protected with strong, otherwise uncrackable, cryptography. (105) The proposal in the original draft of the Cyberspace Electronic Security Act was sufficiently ambiguous that some imagined the government might even contract with makers of popular software to plant back doors that could be activated remotely as part of an investigation. In the event, the clause in question, § 2713, was quickly dropped in the face of furious opposition from civil liberties groups. (106) Other countries have considered similar plans. For example, according to the uncensored version of the Australian Walsh Report, (107) intelligence agencies sought authority to alter software or hardware so that it would function as a bugging device, capturing all user keystrokes when activated by law enforcement authorities. (108)

Monitoring issues also arise in the context of automated intellectual property rights management. Proposals abound for "copyright management technologies" (sometimes unkindly dubbed "snitchware"), (109) which would record and in some cases disclose every time a user accessed a document, article, or even page of licensed material in order to finely assess charges. Similarly, digital watermarking systems, (110) which insert invisible customized tags into electronic documents, allow those documents to be tracked. Using various forms of these technologies, owners of valuable proprietary data can sell the information with less fear that it will be copied without payment. If the information is sold in encrypted form, along with a program or device that decrypts it every time a licensee wishes to view part of the content, charging can be done on a pay-per-view basis rather than requiring a large fee in advance. Leaving aside the issue of the effect on fair use, (111) monitoring for pricing purposes only raises privacy issues if information is recorded (and thus discoverable or subject to search and seizure) or reported to the licensor. If only the quantity of use is reported, rather than the particular pages viewed or queries run, user privacy is unaffected. When metering is conducted in real time, however, it is particularly difficult for a user to be confident about what is being reported. If, for example, a copyright management system connects via the Internet to the content owner to ensure billing or even payment before access, then only the most sophisticated user will be able to determine how much information is being transmitted. The temptation to create user profiles for marketing purposes may be quite great.
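
The privacy line drawn above, between reporting only the quantity of use and reporting the particular pages viewed, can be made concrete. The sketch below contrasts the two report formats a copyright management client might transmit (the field names and values are invented):

```python
# The two reporting designs contrasted above, with invented field names:
# a per-page report reveals reading habits in detail, while an aggregate
# report supports billing and discloses almost nothing else.
usage = [("ch3-p12", 1), ("ch3-p13", 1), ("ch9-p4", 3)]   # (page, views)

per_page_report  = {"license": "lic-0042", "views": usage}
aggregate_report = {"license": "lic-0042",
                    "total_views": sum(v for _, v in usage)}

print(per_page_report)    # discloses exactly what was read, and how often
print(aggregate_report)   # discloses only how much to bill
```

Both designs support pay-per-view pricing; only the first builds a reading dossier, which is precisely why the user cannot easily tell which one is running.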

Programs that quietly report every URL viewed to a central registry, in real time, are already common. Click on "what's related" in the default configuration of Netscape 4.06 or above, and every URL visited in that browser session will be reported back to a server at Netscape/AOL. Alone, this information only tells Netscape which sites people consider related to others; it helps them construct a database they can use to guide future surfers. But this data, in conjunction with cookies that record personal information, could be used to build extensive dossiers of individual users. There is no evidence that Netscape does this, but there is no technical obstacle preventing it. (112)

d. Hardware.

Hardware manufacturers are also deploying privacy-compromising features in a wide variety of devices. The General Motors corporation has equipped more than six million vehicles with (until recently) secret devices, akin to airplane flight data recorders known as "black boxes," that are able to record crash data. First introduced in 1990, the automobile black boxes have become progressively more powerful. The 1994 versions:

record[ed] 11 categories of information, including the amount of deceleration, whether the driver was wearing a seat belt, whether the airbag was disabled, any system malfunctions recorded by the on-board computer at the time of the crash and when the airbag inflated. A more sophisticated system installed in some 1999 models also records velocity, brake status and throttle position for five seconds before impact. (113)

Other manufacturers include less elaborate data recorders in their cars.

Makers of computer chips and the ethernet adapters used for networking and for high-speed Internet access routinely build unique serial numbers into their hardware, which can then be accessed easily over the web.

Each Intel Pentium III chip has a unique identification number. Intel originally designed the chip ID to function continuously and be accessible to software such as web browsers. (114) The intention appears to have been to make electronic anonymity impossible. Anonymous users might, Intel reasoned, commit fraud or pirate digital intellectual property. (115) With a unique, indelible ID number on each chip, software could be configured to work only on one system. Users could only mask their identities when many people used a single machine, or when one person used several machines. The unique ID could also serve as an index number for web sites, cookie counters, and other means of tracking users across the Internet.

The revelation that Intel was building unique serial numbers into Pentium III chips caused a small furor. In response, Intel announced it would commission a software program that would turn off the ID function. (116) However, Intel's software can be circumvented by a sufficiently malicious program and the ID number surreptitiously broadcast in a cookie or by other means. (117)

Intel is not the only company to put unique serial numbers into its communication-related products. For many years, all ethernet cards, the basis for networks and most DSL (118) connections, have had a "Media Access Control" ("MAC") address, a six-byte ID number (usually represented as twelve hexadecimal characters) built into them. This unique, unchangeable number is important for networks because it forms part of each device's address, ensuring that no two devices get confused with each other and that no data packets get misdelivered. The privacy issues become most acute when such a card is part of a computer that is used on the Internet or other communications networks, because the number can be used to identify the computer to which the ethernet card is attached.

Indeed, the new Internet Protocol version 6 ("IPv6"), (119) which will gradually replace the current Internet protocol, contemplates using an ethernet card's unique ID to create a globally unique identifier ("GUID"). The IPv6 standard requires software to include a GUID in the header of every Internet communication (email, web browsing, chat, and others). Computers with an ethernet card would create a GUID by combining the unique ID number assigned to the card's manufacturer with a unique number assigned to the card in the factory. (120) Thus, "[e]very packet you send out onto the public Internet using IPv6 has your fingerprints on it. And unlike your IP address under IPv4, which you can change, this address is embedded in your hardware. Permanently." (121) In response to criticism, the standard-setting bodies are considering revisions that would allow users--if they are savvy enough to do so--to pick a random number to replace the GUID from time to time. (122) But this modification is still under consideration and would not, apparently, be the default.
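
The derivation the standard contemplates is purely mechanical, as the following sketch shows. It reads the local machine's MAC address with a standard library call and performs the EUI-64 style expansion: one bit of the first octet is flipped and two fixed bytes are inserted in the middle:

```python
# Sketch of the mechanical derivation described above: a 48-bit ethernet
# MAC address is expanded into a 64-bit interface identifier (EUI-64
# style) by flipping the universal/local bit of the first octet and
# inserting the fixed bytes FF FE in the middle.
import uuid

mac = uuid.getnode().to_bytes(6, "big")   # this machine's MAC address

eui64 = bytes([mac[0] ^ 0x02]) + mac[1:3] + b"\xff\xfe" + mac[3:6]
print("MAC:   ", mac.hex(":"))
print("EUI-64:", eui64.hex(":"))
# Every packet carrying this identifier points back to this particular
# network card, wherever the machine connects.
```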

Even before IPv6 was introduced, some software products, notably Word 97, Excel 97, and PowerPoint 97, routinely embedded a unique ID number into every document. If a computer had an ethernet card, the programs used its MAC, much like IPv6. (123) As a result, it became possible for law enforcement and others to trace the authorship of seemingly anonymous documents if they could match the MAC to a computer. This matching task was made easier by another Microsoft product: The initial version of the Windows 98 registration wizard transmitted the unique ID to Microsoft; visitors to the Microsoft web site who had previously registered were then given a cookie with the ID number. (124) As a result, the Microsoft ID not only identified a computer, but tied it directly to an individual's personal data. These features were not documented. (125) Although there is no reason to believe that Microsoft used the information for anything other than tracking the use of its website, there are powerful financial and commercial incentives for corporations to collect this information. A filing in a recent lawsuit claims that user information collected by Yahoo was worth four billion dollars. (126) Not surprisingly, other companies, including RealNetworks and Amazon.com, have been collecting, or considering collecting, similar personal information. (127) Indeed, it is possible that Microsoft's data collection activity was a dry run for something more elaborate. Documents disclosed during the Microsoft antitrust case revealed that Microsoft had considered switching to an "annuity model" by which users would have paid an annual fee for a Windows license in future versions of the operating system. (128) Annual billing would most likely have required registering and identifying users.

Hardware with built-in ID numbers is not yet ubiquitous, but proposals for expanding its use are increasingly common, in part because law enforcement and others fear that anonymous activities lead to criminality and antisocial behavior. For example, the fear that people could use color copiers to counterfeit United States currency has spurred makers of color copiers to put invisible, unique ID numbers in each machine in order to trace counterfeits. (129) The ID number appears in all color copies, making every copied document traceable to its originating machine. Because the quality of personal color printers continues to improve, the U.S. Treasury Department has become increasingly concerned that common inkjet color printers may become good enough for counterfeiters. As a result, the Treasury has begun to investigate the possibility of requiring printer manufacturers to build tracing information into all color printers. (130)

Ubiquitous hardware ID numbers are probably inevitable because they will enable smart homes and offices. Consider, for example, the smart refrigerator: Its computer can automatically display a shopping list of what is running short. The list can then automatically be sent to a shop over the Internet. A smart fridge also can be linked to an online cookbook to suggest suitable recipes depending upon its contents. (131) Once every food is tagged, (132) and the fridge knows its expiration date, the smart fridge can even be programmed to remind you to throw out milk that outlasts its sell-by date. Smart home and office applications such as the smart fridge or the smart office supply cabinet will provide a cornucopia of marketing data, and the information officers of food suppliers, and others, are already devising plans to get and use that information. (133) Ultimately the information may be of interest to many others as well. Insurance companies, for example, might like to know if there are any cigarette packages in the insured's home, whether she snacks regularly, and how often she eats fatty foods.

3. Biometrics.

Technology for identifying people is advancing at least as quickly as technology for identifying machines. With technologies for distinguishing human irises, fingerprints, faces, or other body parts (134) improving quickly, it seems increasingly attractive to use the "body as password" rather than base security on a passphrase, a PIN, or a hardware token such as a smart card. (135) Biometrics can be used for identification (who is this?) or authentication (is this person who she claims to be?). (136)

To the extent that reliance on biometric identifiers may prevent information from being stolen or improperly disclosed, it is a privacy-enhancing technology. Some banks now use iris scans to determine whether a person is entitled to withdraw money from an ATM. (137) The United States government uses biometric identifiers in the border crossing identification cards issued to aliens who frequently travel to and from the United States on business, (138) as do several states seeking to prevent fraudulent access to welfare and other benefits. (139)

Despite the potential to enhance privacy, biometrics pose a two-pronged threat. First, a biometric provides a unique identifier that can serve as a high-quality index for all information available about an individual. The more reliable a biometric identifier, the more likely it is to be used, and the greater the amount of data likely to be linked to it. (140) Because a biometric is a part of the person, it can never be changed. It is true that current indexes, such as social security numbers, are rarely changed, which is why they make good indexes, but in extreme cases one can leave the country or join a witness protection program. As far as we know, changing an iris or a fingerprint is much more difficult. Second, some biometrics, particularly those that involve DNA typing, disclose information about the data subject, such as race, sex, ethnicity, propensity for certain diseases, and (as genome typing improves) even more. (141) Others may provide the capability to detect states of mind, truthfulness, fear, or other emotions. (142)

DNA is a particularly powerful identifier. It is almost unique (143) and (so far) impossible to change. A number of state and federal databases already collect and keep DNA data on felons and others. (144) Attorney General Janet Reno recently asked the National Commission on the Future of DNA Evidence whether a DNA sample should be collected from every person arrested in the United States. Under this proposal, DNA information would become part of a permanent, and sizable, national database: More than fifteen million people were arrested in the United States in 1997 alone. (145) Such a plan is far from unthinkable--the Icelandic government considered a bill to compile a database containing medical records, genetic information, and genealogical information for all Icelanders. (146)

4. Sense-enhanced searches.

Sense-enhanced searches rely on one or more technologies to detect that which ordinarily could not be detected with unaided human senses. These searches differ from surveillance in public places because, with a few exceptions such as airport body searches, sense-enhanced searches are not yet routine, perhaps because of the rarity or expense of the necessary equipment. Instead, the typical sense-enhanced search is targeted at someone or something specific, or carried out at specific and usually temporary locations. Unlike home or office monitoring, which usually requires equipment inside the location of interest, many sense-enhanced searches allow someone on the outside to see what is happening inside a building, a package, or even clothing. Because there is no "entry" as the term is commonly defined, nor a physical intrusion, and because many of the technologies rely on emanations that are not coerced by the observer, these technologies may escape regulation under both the Fourth Amendment and the private law of trespass. Sense-enhanced search technology is changing rapidly, raising doubts as to what constitutes a reasonable expectation of privacy in a world where we are all increasingly naked and living in transparent homes.

Governments appear to be the primary users of sense-enhanced searches, but many of the technologies are moving into the private sector as prices decrease.

a. Looking down: satellite monitoring.

Once the sole property of governments, high-quality satellite photographs in the visible spectrum are now available for purchase. The sharpest pictures on sale today are able to distinguish objects two meters long, (147) with a competing one-meter resolution service planned for later this year. (148)

Meanwhile, governments are using satellites to regulate behavior. Satellite tracking is being used to monitor convicted criminals on probation, parole, home detention, or work release. Convicts carry a small tracking device that receives coordinates from global positioning satellites ("GPS") and communicates them to a monitoring center. (149) The cost for this service is low, about $12.50 per target per day. (150)

The United Kingdom, for its part, is considering the adoption of a GPS-based system, already field tested in the Netherlands and Spain, (151) to prevent speeding. Cars would be fitted with GPS monitors that would pinpoint the car's exact location, link with a computer built into the car containing a database of national roads, identify the applicable speed limit, and instruct a governor built into the vehicle to stop the fuel supply if the car exceeds a certain speed. (152) GPS systems allow a receiver to determine its location by reference to satellites, but do not actually transmit the receiver's location to anyone. (153) The onboard computer could, however, permanently record everywhere the car goes, if sufficient storage were provided. The United Kingdom proposal also calls for making speed restrictions contextual, allowing traffic engineers to slow down traffic in school zones, after accidents, or during bad weather. (154) This contextual control requires a means to load updates into the computer; indeed, unless the United Kingdom wished to freeze its speed limits for all time, some sort of update feature would be essential. Data integrity validation usually relies upon two-way communication. Once such a mechanism exists between the speed control system and a central authority, the routine downloading of vehicle travel histories would become a real possibility. And even without two-way communication, satellite control over a vehicle's fuel supply would allow immobilizing vehicles for purposes other than traffic control. For example, cars could be stopped for riot control or while being chased by police, parents would have a new way of "grounding" children, and hackers would have a new target.
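A minimal sketch of the control loop such a system implies; the coordinates, road database, and grid-matching scheme below are invented, and nothing in the loop itself requires transmitting or even retaining the GPS fixes:

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    name: str
    speed_limit_kph: float  # updated contextually: school zones, weather

# Hypothetical onboard road database keyed by a coarse location grid cell.
ROAD_DB = {(51.50, -0.12): RoadSegment("A302", 50.0)}

def governor_step(lat, lon, speed_kph):
    # One control cycle: match the GPS fix to a road, compare speeds.
    segment = ROAD_DB.get((round(lat, 2), round(lon, 2)))
    if segment is None:
        return False                            # unknown road: fail open
    return speed_kph > segment.speed_limit_kph  # True -> cut the fuel supply

print(governor_step(51.504, -0.118, 63.0))  # True: 63 km/h on a 50 km/h road
```

The privacy risk enters not in this loop but in the update channel: the same two-way link that refreshes the road database could just as easily upload a log of every location fix the receiver has seen.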

That a government can track a device designed to be visible to satellites does not, of course, necessarily mean that an individual without one could be tracked by satellite in the manner depicted by the film Enemy of the State. However, one-meter resolution suggests that it should be possible to track a single vehicle if a satellite were able to provide sufficient images, and satellite technology is improving rapidly.

The public record does not disclose how accurate secret spy satellites might be, nor what parts of the spectrum they monitor other than visible light. The routine privacy consequences of secret satellites are limited, because governments believe that using the results in anything less than extreme circumstances risks disclosing their capabilities. As the private sector catches up with governments, however, technologies developed for national security purposes will gradually become available for new uses.

b. Seeing through walls.

It may be that "the house of every one is to him as his castle and fortress, as well for his defence against injury and violence, as for his repose," (155) but the walls of that fortress are far more permeable today than ever before. Suitably equipped observers can now draw informed conclusions about what is occurring within a house without having to enter it. Most of these technologies are passive. They do not require the observer to shine a light or any other particle or beam on the target; instead they detect preexisting emanations.

Thermal imaging, for example, allows law enforcement to determine whether a building has "hot spots." In several cases, law enforcement agencies have argued that heat concentrated in one part of a building tends to indicate the use of grow lights, which in turn (they argue) suggests the cultivation of marijuana. The warrantless discovery of hot spots has been used to justify the issuance of a warrant to search the premises. Although the courts are not unanimous, most hold that passive thermal imaging that does not reveal details about the inside of the home does not require a warrant. (156)

The telephone is not the only electronic device that allows new forms of monitoring. Computer monitors broadcast signals that can be captured and reconstructed from a considerable distance. (157) Computer programs and viruses can use this capability to surreptitiously broadcast information other than what is displayed on the screen. These emissions are so powerful that one of the academics who first documented them suggested that Microsoft have its licensed programs "radiate a one-way function of its license serial number. This would let an observer tell whether two machines were simultaneously running the same copy of Word, but nothing more." (158) Microsoft, however, apparently was not interested in a copy protection scheme that would have required it to employ a fleet of piracy detection monitors cruising the world's highways or hallways. Users can protect against the crudest types of this distance monitoring by employing "Tempest fonts." These special fonts protect the user's privacy by displaying to any eavesdropper a text different from the one actually shown on the user's screen. (159)
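The suggested scheme is simple to sketch: any cryptographic one-way function will do, and the serial numbers below are invented:

```python
import hashlib

def license_beacon(serial: str) -> str:
    # One-way function: equal serials yield equal beacons, so an observer
    # can tell that two machines run the same copy, but cannot recover
    # the serial number itself from the radiated digest.
    return hashlib.sha256(serial.encode()).hexdigest()

print(license_beacon("WORD-12345") == license_beacon("WORD-12345"))  # True
print(license_beacon("WORD-12345") == license_beacon("WORD-99999"))  # False
```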

c. Seeing through clothes.

Passive millimeter wave imaging reads the electromagnetic radiation emitted by an object. (160) Much like an X-ray, this technology can specifically identify the radiation spectrum of most objects carried on the person, even those in pockets, under clothes, or in containers. (161) It thus allows the user to see through clothes, and conduct a "remote frisk" for concealed weapons, (162) or other contraband. (163) Imagers are available as handheld scanners, visible gateway scanners, or in hidden surveillance models. (164)

A similar product, which is not passive, uses low levels of X-rays to screen individuals for concealed weapons, drugs, and other contraband. The makers of "BodySearch" boast that two foreign government agencies are using it for both detection and head-of-state security, and that a state prison is using it as a substitute for strip searching prisoners. The United States Customs Service is using it as an alternative to pat-down searches at JFK airport, prompting complaints from the ACLU. According to the ACLU, "BodySearch" provides a picture of the outline of a human body, including genitals: "If there is ever a place where a person has a reasonable expectation of privacy, it is under their clothing." (165) The sample photo provided by BodySearch makers American Science and Engineering, Inc. is fairly revealing. (166) Still newer devices such as a radar skin scanner can distinguish all anatomical features over one millimeter, making it possible to "see through a person's clothing with such accuracy that it can scan someone standing on the street and detect the diameter of a woman's nipples, or whether a man has been circumcised." (167)

d. Seeing everything: smart dust.

Perhaps the ultimate privacy invasion would be ubiquitous miniature sensors floating around in the air. Amazingly, someone is trying to build them: The goal of the "smart dust" project is "to demonstrate that a complete sensor/communication system can be integrated into a cubic millimeter package" capable of carrying any one of a number of sensors. While the current prototype is seven millimeters long (and does not work properly), the engineers hope to meet their one cubic millimeter goal by 2001. At that size, the "motes" would float on the breeze, and could work continuously for two weeks, or intermittently for up to two years. A million dust motes would have a total volume of only one liter. (168)
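The volume arithmetic is easy to verify (there are one million cubic millimeters in a liter):

```python
motes = 1_000_000
volume_mm3 = motes * 1                   # one cubic millimeter per mote
volume_liters = volume_mm3 / 1_000_000   # 10**6 mm^3 per liter
print(volume_liters)                     # 1.0
```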

Although funded by the Pentagon, the project managers foresee a large number of potential civilian as well as military applications if they are able to perfect their miniature sensor platform. Among the less incredible possibilities they suggest are: battlefield surveillance, treaty monitoring, transportation monitoring, scud hunting, inventory control, product quality monitoring, and smart office spaces. They admit, however, that the technology may have a "dark side" for personal privacy. (169)

II. Responding to Privacy-Destroying Technologies

The prospect of "smart dust," of cameras too small to see with the naked eye, evokes David Brin's and Neal Stephenson's vision of a world without privacy. (170) As the previous discussion demonstrates, however, even without ubiquitous microcameras, governments and others are deploying a wide variety of privacy-destroying technologies. These developments raise the immediate question of the appropriate legal and social response.

One possibility is to just "get over it" and accept emerging realities. Before adopting this counsel of defeat, however, it seems prudent to explore the extent to which the law offers strategies for resistance to data collection. The next part of this article thus offers a survey of various proposals for a legal response to the problem of ubiquitous personal data collection. Because any legal reform designed to protect informational privacy arises in the context of existing law, the discussion begins by outlining some of the major constraints that must shape any practicable response to privacy-destroying technologies.

A. The Constraints

An effective response to privacy-destroying technologies, in the United States at least, is constrained by three factors: first, market failure caused by myopic, imperfectly informed consumers; second, the First Amendment, correctly understood; and third, fear.

1. The economics of privacy myopia.

Under current ideas of property in information, consumers are in a poor legal position to complain about the sale of data concerning themselves. (171) The original alienation of personal data may have occurred with the consumer's acquiescence or explicit consent. Every economic transaction has at least two parties; in most cases, the facts of the transaction belong equally to both. (172) As evidenced by the existence of the direct mail industry, both sides to a transaction generally are free to sell details about the transaction to any interested third party.

There are exceptions to the default rule of joint and several ownership of the facts of a transaction, but they are relatively minor. Sometimes the law creates a special duty of confidentiality binding one of the parties to silence. Examples include fiduciary duties and a lawyer's duty to keep a client's confidence. (173) Overall, the number of transactions in which confidentiality is the legal default is relatively small compared to the total number of transactions in the United States.

In theory, the parties to a transaction can always contract for confidentiality. This is unrealistic because consumers suffer from privacy myopia: they will sell their data too often and too cheaply. Modest assumptions about consumer privacy myopia suggest that even Americans who place a high value on information privacy will sell their privacy bit by bit for frequent flyer miles. Explaining this requires a brief detour into stylized microeconomics.

Assume that a representative consumer engages in a large number of transactions. Assume further that the basic consumer-related details of these transactions--consumer identity, item purchased, cost of item, place and time of sale--are of roughly equivalent value across transactions for any consumer and between consumers, and that the marginal value of the data produced by each transaction is low on its own. In other words, assume we are limiting the discussion to ordinary consumer transactions, not extraordinary private ones, such as the purchase of anticancer drugs. Now assume that aggregation adds value: Once a consumer profile reaches a given size, the aggregate value of that consumer profile is greater than the sum of the value of the individual data. Most heroically, assume that once some threshold has been reached the value of additional data to a potential profiler remains linear and does not decline. Finally, assume that data brokers or profile compilers are able to buy consumer data from merchants at low transaction costs, because the parties are repeat players who engage in numerous transactions involving substantial amounts of data. Consumers, however, are unaware of the value of their aggregated data to a profile compiler. With one possible exception (the assumption that the value of consumer data never declines), these all seem to be very tame assumptions.

In an ordinary transaction, a consumer will value a datum at its marginal value in terms of lost privacy. In contrast, a merchant, who is selling it to a profiler, will value it at or near its average value as part of a profile. Because, according to our assumptions, the average value of a single datum is greater than the marginal value of that datum (remember, aggregation adds value), a consumer will always be willing to sell data at a price a merchant is willing to pay.
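A stylized numerical illustration of why the trade always clears; every figure below is invented for the example:

```python
# Each datum costs the consumer 1 cent of privacy at the margin, but a
# 100-datum profile is worth $5.00 to a profiler: 5 cents per datum on average.
marginal_value_to_consumer = 0.01
average_value_to_profiler = 5.00 / 100   # 0.05

# Any per-datum price in between clears the market, datum by datum.
offer = 0.02  # say, a frequent flyer mile
consumer_sells = offer > marginal_value_to_consumer   # True
merchant_profits = offer < average_value_to_profiler  # True

# Repeated 100 times, the consumer surrenders $5.00 of aggregate privacy
# for $2.00 of compensation: the over-disclosure the myopia story predicts.
print(consumer_sells and merchant_profits)  # True
```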

The ultimate effect of consumer privacy myopia depends upon a number of things. First, it depends on the intrusiveness of the profile. If the profile creates a privacy intrusion that is noticeably greater than disclosing an occasional individual fact--that is, if aggregation not only adds value but aggravation--then privacy myopia is indeed a problem. I suspect that this is, in fact, the case and that many people share my intuition. It is considerably more intrusive to find strangers making assumptions about me, be they true or painfully false, than it is to have my name and address residing in a database restricted to the firms from which I buy. On the other hand, if people like me who object to being profiled are unusual, and aggregation does not usually cause additional harm to most people's privacy, the main consequence of privacy myopia is greatly reduced. For some, it is only distributional. Consumers who place a low value on their information privacy--people whose average valuation of their data is less than the profiler's--would have agreed to sell their privacy even if they were aware of the long-run consequences. The only harm to them is that they have not extracted the highest price possible. But consumers who place a high value on information privacy will be more seriously harmed by their information myopia. Had they been aware of the average value of each datum, they might have preferred not to sell.

Unfortunately, if the marginal value (174) to the consumer of a given datum is small, then the value of not disclosing that datum will in most cases be lower than either the cost of negotiating a confidentiality clause (if that option even exists), or the cost of forgoing the entire transaction. (175) Thus, in the ordinary case, absent anything terribly revealing about the datum, privacy clauses are unlikely to appear in standard form contracts, and consumers will accept this. (176) Furthermore, changing the law to make consumers the default owners of information about their economic activity is unlikely to produce large numbers of confidentiality clauses in the agora. In most cases, all it will do is move some of the consumer surplus from information buyers to information producers or sellers as standard form contracts add a term in which the consumer conveys rights to the information in exchange for a frequent flyer mile or two.

In short, if consumers are myopic about the value of a datum--focusing on its marginal value rather than its average value, which is difficult to measure--while profilers are not, and the data are more valuable in aggregate, then there will be substantial over-disclosure of personal data even when consumers care about their informational privacy.

If this stylized story is even somewhat accurate, it has unfortunate implications for many proposals to change the default property rules regarding ownership of personal data in ordinary transactions. The sale will tend to happen even if the consumer has a sole entitlement to the data. It also suggests that European-style data protection rules should have only limited effectiveness, primarily for highly sensitive personal data. The European Union's data protection directive allows personal data to be collected for reuse and resale if the data subject agrees; (177) the privacy myopia story suggests that customers will ordinarily agree except when disclosing particularly sensitive personal facts with a high marginal value.

On the other hand, the privacy myopia story suggests several questions for further research. For example, the myopia story suggests that we need to know how difficult it is to measure the value of privacy and, once that value has been calculated, how difficult it is to educate consumers to value data at its average rather than marginal value. Can information provide a corrective lens? (178) Or, perhaps consumers already have the ability to value the privacy interest in small amounts of data if they consider the long term consequences of disclosure.

Consumers sometimes have an interest in disclosure of information. For example, proof of credit-worthiness tends to improve the terms upon which lenders offer credit. The myopia story assumes this feature away. It would be interesting to try to measure the relative importance of privacy and disclosure as intermediate and final goods. If the intermediate good aspect of informational privacy and disclosure substantially outweighed their final good aspect, the focus on blocking disclosure advocated in this article might be misguided. European data-protection rules, which focus on requiring transparency regarding the future uses of gathered data, might be the best strategy.

It would also be useful to know much more about the economics of data profiling. In particular, it would be helpful to know how much data it takes to make a profile valuable--at what point does the whole exceed the sum of the data parts? Additionally, it would be important to know whether profilers regularly suffer from data overload, and to what extent there are diminishing returns to scale for a single subject's personal data. Furthermore, it could be useful to know whether there might be increasing returns to scale as the number of consumers profiled increases. If there are increasing returns to scale over any relevant part of the curve, the marginal consumer's data would command a premium. It might follow that in an efficient market, profilers would be willing to pay more for data about the people who are most concerned about informational privacy.

There has already been considerable work on privacy-enhancing technologies for electronic transactions. (179) There seems to be a need for more research, however, to determine which types of transactions are best suited to using technologies such as information intermediaries. The hardest work will involve finding ways to apply privacy-enhancing technologies to those transactions that are not naturally suited to them.

Perhaps the most promising avenue is to design contracts and technologies that undercut the assumptions in the myopia story. For example, one might seek to lower the transaction costs of modifying standard form contracts, or of specifying restrictions on reuse of disclosed data. The lower the cost of contracting for privacy, the greater the chance that such a cost will be less than the marginal value of the data (note that merely lowering it below the average value fails to solve the underlying problem, because sales will still happen in that price range). If technologies, such as P3P, (180) reduce the marginal transaction costs involved in negotiating the release of personal data to near zero, even privacy myopics will be able to express their privacy preferences in the P3P-compliant part of the marketplace.
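A toy preference-matching loop in the spirit of such technologies (the policy vocabulary below is illustrative, not the actual P3P schema):

```python
# The site publishes its data practices; the browser compares them to stored
# preferences, so the per-transaction "negotiation" costs the user nothing.
site_policy = {"purposes": {"current", "telemarketing"}, "retention": "indefinite"}
user_prefs = {"forbidden_purposes": {"telemarketing"}, "accept_indefinite": False}

def acceptable(policy, prefs):
    # Refuse if the site uses data for any purpose the user has forbidden.
    if policy["purposes"] & prefs["forbidden_purposes"]:
        return False
    # Refuse if the site keeps data longer than the user will tolerate.
    if policy["retention"] == "indefinite" and not prefs["accept_indefinite"]:
        return False
    return True

print(acceptable(site_policy, user_prefs))  # False -> withhold personal data
```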

2. First Amendment.

The First Amendment affects potential privacy-enhancing rules in at least three ways: (1) most prohibitions on private data-gathering in public (i.e. surveillance) risk violating the First Amendment (conversely, most government surveillance in public appears to be unconstrained by the Fourth Amendment) (181); (2) the First Amendment may impose limits on the extent to which legislatures may restrict the collection and sale of personal data in connection with commercial transactions; and (3) the First Amendment right to freedom of association imposes some limits on the extent to which the government may observe and profile citizens, if only by creating a right to anonymity in some cases. (182)

One of the arguments advanced most strenuously in favor of the proposition that the privacy battle is now lost to ubiquitous surveillance is that "information wants to be free," and that once collected, data cannot in practice be controlled. Although the most absolutist versions of this argument tend to invoke data havens or distributed database technology, the argument also draws some force from the First Amendment--although perhaps a little less than it used to.

a. The First Amendment in public places.

Perhaps the critical question shaping the legal and social response to new surveillance technology is the extent to which the government can limit the initial collection of personal data in public. Once collected, information is hard to control and, after it reaches distributed databases, almost impossible to erase. Legal rules prohibiting data collection in public are not the only possible response; defenses against collection might also include educating people as to the consequences of disclosure or deploying countertechnologies such as scramblers, detectors, or masks. (183) Unlike a legal solution, however, most technological responses involve shifting costs to the data subject. The cost of compliance with laws restricting data collection is likely to fall on the observer, at least initially. The difficulty is writing rules that are consistent with both the First Amendment and basic policies of freedom.

Professor Jerry Kang recently proposed and defended a statute limiting the collection of personal information in cyberspace. As the discussion in part I shows, there is no doubt that the collection of personal information in cyberspace is already a serious threat to information privacy, and that this threat will continue to grow by leaps and bounds. Professor Kang's statute would be a valuable contribution to information privacy if it were adopted. But even if its economic importance is growing, cyberspace is still only a small part of most people's daily lives. Part I demonstrates that a great deal of the threat to information privacy is rooted firmly in "meatspace" (the part of life that is not cyberspace). The problem is considerably more general. Indeed, cyberspace privacy and meatspace privacy are related, since the data drawn from both will be matched in databases. The Kang proposal, already unlikely to be adopted by a legislature, would need to be radically generalized to meatspace just to protect the status quo ante. Even if a legislature could be persuaded to adopt such a radically pro-privacy initiative, it is not at all clear that such an ambitious attempt to create privacy rights in public places would be constitutional.

The core question is whether a legislature could constitutionally change the default rules, which hold that what is visible is public, in order to increase informational privacy. Current doctrine does not make clear the extent to which Congress may seek to preserve, or even expand, zones of privacy in public places (or informational privacy relating to transactions) by making it an offense to use a particular technology to view or record others. This may be because attempts to expand the zone of privacy in the United States by legislation are still relatively rare. Prohibiting the use of technologies that are not already commonplace prevents the public from becoming desensitized, and it ensures a reasonable expectation of being able to walk in public without being scanned by them. Similarly, prohibiting the use of commonplace technologies also creates a (legally) reasonable expectation that others will follow the law, and that restricted technologies will not be used. At some undefined point, perhaps quite close to its inception, any such attempt will begin to intrude on core First Amendment values.

In peacetime, the First Amendment allows only the lightest restrictions upon the ordinary gathering of information in public places (or upon repeating of such information). Other than cases protecting bodily integrity, the constitutional right to privacy is anemic, especially when compared to the First Amendment's protection of the rights to gather and disseminate information. This is not necessarily a bad thing, because most rules designed to protect privacy in public places would probably have a substantial harmful effect upon news gathering and public debate. Nevertheless, there are a few areas where light privacy-enhancing regulation might not impinge upon core First Amendment values. There are also areas where laws that actively hinder privacy might be reformed.

The constitutional status of a regulation of data collection has implications for the regulation of its subsequent uses. If it were unconstitutional to impose a restriction upon the initial collection, then it would be difficult to impose constitutionally acceptable limitations on downstream users of the data. When the government is not the data proprietor, the constitutional justification for a rule limiting, for example, the dissemination of mall camera photos or the sale of consumer profiles, will be closely tied, and sometimes identical, to the justification for banning the data collection in the first place. If restrictions upon the initial collection could be imposed constitutionally, then justifications for imposing conditions that run with the data are easy to see. If, on the other hand, the data were lawfully acquired, justifying rules that prevent it from being shared (or, perhaps, even used by the initial collector) is far less onerous if one can categorize the dissemination of the data as the shipment of a data-good in commerce rather than as a publication or other speech act.

The recent and unanimous Supreme Court decision in Reno v. Condon (184) could be read to suggest that the act of transmitting personal data for commercial purposes is something less than even commercial speech. In Condon, the Court upheld the Driver's Privacy Protection Act ("DPPA") against claims asserted under the Tenth and Eleventh Amendments. In so doing, the Court agreed with the petitioner that "personal, identifying information that the DPPA regulates is a 'thin[g] in interstate commerce,' and that the sale or release of that information in interstate commerce is therefore a proper subject of congressional regulation" under Congress's Commerce Clause powers. (185) The circumstances and posture of Condon suggest, however, that this reading, which would be a radical break with existing First Amendment principles, is not justified.

Gathering information in public. The First Amendment protects the freedom of speech and of the press, but does not explicitly mention the right to gather information. However, both the Supreme Court and appellate courts have interpreted the First Amendment to encompass a right to gather information. (186) The right is not unlimited. It does not, for example, create an affirmative duty on the government to make information available. (187)

As a general matter, if a person can see it in public, she can write about it and talk about it. It does not inevitably follow that because she may share her natural sense impressions, or her written recollections, she may also photograph it or videotape events and then publish mechanically recorded images and sounds. Most courts that have examined the issue, however, have held that she may do so, subject only to very slight limitations imposed by privacy torts. (188) "[C]ourts have consistently refused to consider the taking of a photograph as an invasion of privacy where it occurs in a public for a [sic]." (189) "Thus, in order for an invasion of privacy to occur, '[t]he invasion or intrusion must be of something which the general public would not be free to view.'" (190)

Perhaps it might be constitutional to prohibit the use of devices that see through clothes on the theory that there is a limited First Amendment exception allowing bans on outrageous assaults upon personal modesty. On the other hand, the government's use of passive wave imaging, which sees through clothes, suggests that the executive branch believes either that there is no constitutional problem, or that the problem can be solved by offering subjects the alternative of an (equally intrusive?) patdown search. (191) Or, perhaps, the government's ability to ban intrusive monitoring sweeps more broadly. The correct doctrinal answer is unclear because there have been no privacy-enhancing statutes seeking to block systematic data collection in public places. Ultimately, the answer may turn on just how outrageous high-tech surveillance becomes. Meanwhile, however, one must look to privacy tort cases in which the First Amendment was raised as a defense in order to get an indication as to the possible sweep of the First Amendment in public view cases.

Tort-based attempts to address the use of privacy-destroying technologies in public places tend to focus either on the target, the type of information, or whether a person might reasonably expect not to be examined by such a technology. Unless they seek to define personal data as the property of the data subject, approaches that focus on the targeted individual tend to ask whether there is something private or secluded about the place in which the person was located that might create a reasonable expectation of privacy. If there was not, the viewing is usually lawful, and the privacy tort claim fails either because of the First Amendment or because the court says that the viewing is not a tort. Cases that focus on the type of information are usually limited to outrageous fact situations, such as looking under clothes. (192) Cases that focus on reasonable expectations are the most likely to find that new technologies can give rise to a privacy tort, but these expectations are notoriously unstable: The more widely a technology is deployed and used, the less reasonable the expectation not to be subjected to it. Thus, for example, absent statutory change, courts would be unlikely to find a reasonable expectation of not being photographed in public, although it does not necessarily follow that one has no reasonable objection to being on camera all the time.

General regulation of new technologies such as thermal imaging or passive wave imaging seems unproblematic on First Amendment grounds so long as the regulation applies to all uses. The legislature can ban a technology that happens to be useful for news gathering if it does so through a law of general application, and the ban is reasonably tailored to achieve some legitimate objective. Privacy is surely such an objective. There are limits: It is doubtful, for example, that a ban on pens and pencils ostensibly designed to prevent note-taking in public would survive very long. On the other hand, it might well be constitutional to prohibit using, or even possessing, some devices that enhance natural sensory perceptions on privacy grounds. (193) Indeed, federal regulations already criminalize the sale of various types of spy gear. (194)

Whether the ban could be crafted to apply only to use or possession in public places is more dubious, because this cuts more closely against the First Amendment. Pragmatically, the results in court may depend upon the currency of the technology. It is inconceivable, for example, that a ban on capturing all photographic images in public could possibly be squared with the First Amendment, any more than could a ban on carrying a notebook and a pencil. Photography and television have become so much a part of ordinary life, as well as news gathering and reporting, that such a ban would surely be held to violate the freedom of the press and of speech, no matter how weighty the public interest in privacy. (195) Possibly, however, a more limited ban might be crafted to allow news gathering, but not twenty-four-hour surveillance. Such a rule might, for example, limit the number of images of a particular place per hour, day, or week, although lines would inevitably be difficult to draw. (196) A more practical rule, perhaps easier to enforce, would distinguish among various technologies.

Disseminating accurate information. Data collection becomes much less attractive if there are fewer buyers. One way to reduce the number of buyers is to make it illegal to buy, use, or reveal such data. Although the issue is not settled, there are good reasons to believe that the First Amendment would forbid most legislation criminalizing the dissemination or use of accurate information. While good for free speech, this protection makes any ban on data collection much more difficult to enforce. Conversely, if it is constitutional to penalize downstream uses of certain data, or even retention or publication, then enforcement of a collection ban becomes easier, and the incentives to violate the rule decrease.

The case for the constitutionality of a ban on the dissemination of some forms of accurate collected personal data is not negligible. It has long been assumed that sufficiently great government interests allow the legislature to criminalize the publication of certain special types of accurate information. Even prior restraint, and subsequent criminal prosecution, might be a constitutionally acceptable reaction to the publication of troop movements, or other similar information that might aid an enemy, during armed conflict. (197) In peacetime, copyright protections are justified by a specific constitutional derogation from the general principle of freedom of speech. (198) Some highly regulated industries, such as the securities industry, heavily restrict the speech of individuals, such as financial advisors or those with market-sensitive information, although the constitutionality of those rules is itself subject to some doubt and debate. (199) Generally, however, truthful disclosures made in the absence of a specific contractual duty to keep silent have been considered constitutionally protected.

The Supreme Court's decisions do not give blanket First Amendment protection to the publication of information acquired legally. Instead they have noted "[t]he tension between the right which the First Amendment accords to a free press, on the one hand, and the protections which various statutes and common-law doctrines accord to personal privacy against the publication of truthful information, on the other . . . ." (200) But, other than in cases involving intellectual property rights or persons with special duties of confidentiality, (201) the modern Court has struck down all peacetime restrictions on publishing true information that have come before it. The Court has kept open the theoretical possibility that a sufficiently compelling government interest might justify penalizing the publication of true statements. But, when faced with what might appear to be fairly compelling interests, such as protecting the privacy of rape victims, the Court has found the privacy interests insufficient to overcome the First Amendment. (202) This pattern suggests that a compelling interest would have to be weighty indeed to overcome First Amendment values, and that most, if not all, privacy claims would fail to meet the standard. As the Supreme Court stated in Smith v. Daily Mail Publishing Company, "state action to punish the publication of truthful information seldom can satisfy constitutional standards." (203) Furthermore, "if a newspaper lawfully obtains truthful information about a matter of public significance then state officials may not constitutionally punish publication of the information, absent a need to further a state interest of the highest order." (204)

In Cox Broadcasting Corp. v. Cohn, the Court considered a state statute making it a "misdemeanor to publish or broadcast the name or identity of a rape victim." (205) The Court held that, despite the very private nature of the information, the First Amendment protected the broadcasting of the name of a deceased, seventeen-year-old rape victim, because the reporter obtained the information from open public records. Relying upon § 867 of the Restatement of Torts by analogy, the Court noted that "the interests in privacy fade when the information involved already appears on the public record." (206)

Then, in Landmark Communications, Inc. v. Virginia, the Court struck down a state statute that criminalized the publication of the names of judges who were subject to confidential judicial disciplinary proceedings. (207) Although the newspaper received the information from someone who had no right to disclose it, the Supreme Court held that the First Amendment barred criminal prosecution of a newspaper for publishing accurate information about a matter of public concern. The Court noted, however, that the case did not involve a person with an obligation of confidentiality nor did it involve stolen information: "We are not here concerned with the possible applicability of the statute to one who secures the information by illegal means and thereafter divulges it." (208) And, in Smith v. Daily Mail Publishing Co., the Court said that the First Amendment protected a newspaper that lawfully interviewed witnesses, obtained the names of juvenile offenders, and then published those names in violation of a state statute requiring prior leave of court to do so. (209) Although the Court struck down the statute, it left open the possibility that publication of true and lawfully obtained information might be prohibited "to further an interest more substantial than is present here." (210) Similarly, in Florida Star v. B.J.F., the Court held that the First Amendment barred damages against a newspaper that published the name of a rape victim that it had lawfully acquired. (211)

More recently, in Rubin v. Coors Brewing Co., the Court struck down a statute preventing brewers from stating the alcohol content of beer, even though the Court found that the rule regulated commercial speech and thus was subject to less exacting scrutiny than regulations upon other types of speech. (212)

Thus, although the Supreme Court has "carefully eschewed reaching th[e] ultimate question" of whether truthful publications of news can ever be banned, or even the narrower question of "'whether truthful publications may ever be subjected to civil or criminal liability' for invading 'an area of privacy,'" (213) its decisions suggest that if there is a category of truthful speech that can constitutionally be banned, it is small indeed. The rule remains that "state action to punish the publication of truthful information seldom can satisfy constitutional standards." (214)

The Supreme Court's decisions leave open the possibility that the First Amendment might apply more strongly when facts are legally acquired, as opposed to originating in the illegal actions of another. Legally acquired facts have the highest protection: "[I]f a newspaper lawfully obtains truthful information about a matter of public significance then state officials may not constitutionally punish publication of the information, absent a need to further a state interest of the highest order." (215) Of the cases discussed above, only Landmark Communications involved a leak of information by someone with a legal duty to maintain its confidentiality. That case could be read to depend upon the heightened First Amendment protection for reporting upon important public issues, such as the honesty of judges.

Thus, the Supreme Court's cases are unclear as to whether a ban on the publication of illegally acquired information could fall within the presumably small class of regulations of truthful speech that satisfy constitutional standards. Whether it is ever possible to ban the publication of truthful information is unclear because the Court has never defined a "state interest of the highest order" (216) and because it has never decided whether illegally acquired information is (1) contraband per se, (2) contraband so long as a reasonable recipient should have known that it was illegally acquired, or (3) not contraband when laundered sufficiently, thus allowing publication under the protections of the First Amendment. (217) Which of these is the law will have a great impact upon any attempt to regulate technologies of surveillance, and profiling technologies generally, because it affects how easily data can be laundered. (218)

A recent divergence between two circuits suggests that the Supreme Court may be asked to decide whether truthful information, obtained legally by the ultimate recipient, can nonetheless be contraband because it was originally acquired illegally. The D.C. Circuit and the Third Circuit recently reached opposite conclusions regarding the potential liability of a third party receiver of information that was illegally acquired by a second party. In both cases the information was an illegally intercepted telephone conversation on a matter of public interest; in both cases the information was ultimately passed to news media. In Boehner v. McDermott, (219) the D.C. Circuit held that a Congressman who acted as a conduit for a tape between the interceptor and a newspaper could be prosecuted for violating the Wiretapping Act, 18 U.S.C. § 2511. (220) The D.C. Circuit held that the prohibition on disclosure by third parties who had reason to know that the information had been illegally acquired was justified because: "Here, the 'substantial governmental interest' 'unrelated to the suppression of free expression' is evident." (221) The Wiretapping Act, the D.C. Circuit suggested, increases the freedom of speech because "[e]avesdroppers destroy the privacy of conversations. The greater the threat of intrusion, the greater the inhibition on candid exchanges. Interception itself is damaging enough. But the damage to free speech is all the more severe when illegally intercepted communications may be distributed with impunity." (222) In reaching this conclusion, the court characterized Congressman McDermott's action in being a conduit from the eavesdropper to the media as being a combination of speech and conduct. (223) Judge Randolph characterized his act of handing over the tape as being akin to receiving, and passing on, stolen property. (224) Judge Ginsburg concluded that Congressman McDermott's conduct was outside the Florida Star rule--that publishing truthful speech can only be punished if there is a state interest of the "highest order" (225)--because he knowingly and "unlawfully obtained" the tape. Intermediate scrutiny was therefore appropriate, and the statute could survive that test. (226) Judge Sentelle dissented on the grounds that the Florida Star rule applied and compelled strict scrutiny. The third-party provisions of the Wiretapping Act failed this more exacting test because they were not content-neutral regulations. (227) Judge Sentelle also specifically disagreed with the majority's assertion that, as he put it, the government may punish a "publisher of information [who] has obtained the information in question in a manner lawful in itself but from a source who has obtained it unlawfully." (228) Although he conceded that the state interest in protecting the privacy of communications was compelling, he disagreed that a blanket ban on third-party uses was narrowly tailored to serve that end. (229)

The Third Circuit also divided 2-1, but this time a majority saw the issue much like Judge Sentelle. Bartnicki v. Vopper involved a tape of a cellular telephone conversation between two members of a teachers' union who were engaged in contentious pay negotiations with their school district. Someone recorded a conversation in which the two union members discussed going to the homes of school board members and "blow[ing] off their front porches." (230) An unknown party left the tape in the mailbox of Jack Yocum, an opponent of the teachers' union, who then took it to the press. (231)

On an interlocutory appeal, the Third Circuit held 2-1 that Yocum (the conduit) and the subsequent publishers were protected by the First Amendment, even if they knew or had reason to know that the tape was illegally recorded. Although the Bartnicki majority tried to minimize the extent of its disagreement with the D.C. Circuit by focusing on the media defendants, who had no analogue in the Boehner case, (232) the Bartnicki majority still held that the conduit of the information was protected every bit as much as the ultimate publishers. In so doing, the Bartnicki majority characterized Yocum's conduct as pure speech, rejecting Boehner's conclusion that it was more properly seen, at least partially, as conduct.

The first difficulty the Third Circuit had to overcome in reaching its conclusion was Cohen v. Cowles Media. In that case, the Supreme Court explained that "generally applicable laws do not offend the First Amendment simply because their enforcement against the press has incidental effects on its ability to gather and report the news." (233) Furthermore, "enforcement of such general laws against the press is not subject to stricter scrutiny than would be applied to enforcement against other persons or organizations." (234)

Despite holding that both Yocum and the media defendants engaged in pure speech, rather than a mixture of conduct and speech, the majority applied intermediate scrutiny because it found the Wiretap Act to be content-neutral. Intermediate scrutiny requires the court to weigh the government's interest, and the means selected to effectuate that interest, against countervailing First Amendment freedoms. In doing this balancing, the court determined it must ask whether the regulation is "narrowly tailored" to achieve a "significant governmental interest." (235) The dissent agreed that this was the right test, but rejected the majority's application of it to the facts. (236)

The government argued that the Act was narrowly tailored. The regulation of third-party use, it said, eliminates the demand for the fruits of the wrongdoer's labor. (237) The Bartnicki majority was not persuaded, calling the connection between the third-party provisions of the Wiretapping Act and the prevention of the initial interception of communications "indirect at best"; (238) in contrast, the dissent accepted the connection. (239)

The two sides thus differed on two issues: whether handing over a tape is pure "speech," and whether the prophylactic effect of a prohibition on "disclosing" or "using" the contents of a communication would sufficiently discourage the illicit acquisition of communications, thus justifying the speech restriction at issue. Although there is something distasteful about considering accurate information contraband, even if hedged with a scienter requirement, it seems hard to believe that criminalizing the receipt and publishing of personal data would have no discernible effect on the incentive to deploy privacy-destroying technologies. Rather, it seems likely that such a law would reduce the incentive to gather data in the first place, since buyers would be harder to find. The argument is weakest in a context such as Bartnicki, where the motives for disclosure are political rather than financial, and the matter is of public interest. The argument is surely stronger when applied to the disclosure of personal profile data. However, even if one accepts a connection between prohibiting the dissemination of information and discouraging its collection, it does not necessarily follow that privacy interests trump free speech rights. How the balance comes out will depend in part upon what sort of scrutiny is applied; that in turn will depend upon how the act of sharing the information is categorized.

A related issue raised by the Bartnicki/Boehner split is whether sharing information is always speech protected by the First Amendment, or whether there are occasions in which information is just a regulated commodity. Questions concerning what is properly characterized as "speech" surround the regulation of everything digital, from the sale of bulk consumer data to the regulation of software. (240)

In both Reno v. Condon and Los Angeles Police Department v. United Reporting Publishing Corp., (241) the Supreme Court treated government-owned personal data as a commodity that could be subjected to reasonable regulations on subsequent use.

Condon, however, is a decision about federalism. Neither side briefed nor argued the First Amendment issues concerning reuse or republication rights of data recipients, (242) so the issue remains open. (243) It remains so even though the Condon decision specifically relied upon and upheld the part of the DPPA that regulates the resale and redisclosure of drivers' personal information by private individuals (who have obtained that information from a state department of motor vehicles). (244) The DPPA, the Court stated, "regulates the States as the owners of databases." (245) It follows that similar rules could be applied to any database owner; indeed the Condon Court defended the DPPA against South Carolina's claim that it regulated states exclusively by noting that § 2721(c) regulates everyone who comes into contact with the data. (246)

In this light, the key factor in Condon may be the Court's decision that no one has a right to drivers' license data in the first place because the data belongs to the government. When examining cases involving the regulation of government data use and reuse, the Court adopts what amounts to an informational right/privilege distinction: If access to the data is a privilege, it can be regulated. The same logic appears in Los Angeles Police Department v. United Reporting Publishing Corp. (247) There, the Court upheld a statute requiring persons requesting arrestee data to declare that the arrestees' addresses would not be used directly or indirectly to sell a product or service. The Court reasoned that because California had no duty to release arrestee data at all, its decision to impose substantial conditions upon how the information would be used could survive at least a facial First Amendment challenge. (248)

If the Court adopts what amounts to a right/privilege distinction relating to government data, it is hard to see why the government's ability to impose conditions upon the use of its proprietary data should be any less than that of a private party, especially if those conditions arguably restrict speech. If data are just commodities, then data usage can be regulated by contract or license--a view that may import elements of a property theory into what had previously been the preserve of the First Amendment.

One view of the First Amendment, implied by Bartnicki, suggests that the government cannot impose sweeping restrictions on data dissemination in the name of privacy. The alternate view of the First Amendment, offered by Boehner, is more likely to allow the government to impose public limits on data dissemination and collection, and thus enhance privacy. (249) The Boehner vision, however, has potentially sweeping consequences unless some distinction can be devised to prevent its application to publishers--which seems particularly dubious now that everyone is a publisher. (250) If it does apply to publishers, then every newspaper that publishes a leak based upon classified information is at risk, and political reporting would be thoroughly chilled. (251) Just as a newspaper does not lose its status as protected speech because it is sold for a profit, other information, in other media, may be entitled to full First Amendment protection however it is transferred or sold.

b. The First Amendment and transactional data.

Transactional data--who bought what, when, where, and for how much--might be considered ordinary speech, commercial speech, or just an informational commodity. If transactional data is commercial speech, its regulation would be reviewed under the test enunciated in Central Hudson Gas & Electric Corp. v. Public Service Commission of New York:

For commercial speech to come within [the First Amendment], it at least must concern lawful activity and not be misleading. Next, we ask whether the asserted governmental interest is substantial. If both inquiries yield positive answers, we must determine whether the regulation directly advances the governmental interest asserted, and whether it is not more extensive than is necessary to serve that interest. (252)

Unlike public surveillance data, transactional data is usually collected in private by one of the parties to the transaction.

The government's ability to regulate privately generated speech relating to commerce is surprisingly underlitigated. This may be because there is not (yet) much relevant regulation in United States law. Under the common law, absent a special duty of confidentiality such as an attorney-client relationship, the facts of a transaction belong jointly and severally to the participants. If Alice buys a chattel from Bob, ordinarily both Alice and Bob are free to disclose this fact. (If Alice is famous, however, Bob may not use her likeness to advertise his wares without her permission, although he certainly can tell his friends that Alice was in his shop.) (253) Current doctrine suggests that speech relating to commerce is ordinary speech, if one applies "the 'commonsense' distinction between speech proposing a commercial transaction, which occurs in an area traditionally subject to government regulation, and other varieties of speech." (254) On the other hand, the two most recent Supreme Court decisions relating to the regulation of personal data seem to imply that some transactional data is just a commodity, although the special circumstances of those decisions--the data was held by state or local governments--make generalization hazardous.

A very small number of statutes impose limits upon the sharing of private transactional data collected by persons not classed as professionals. The most important may be the Fair Credit Reporting Act. (255) In addition to imposing rules designed to make credit reports more accurate, the statute also contains rules prohibiting credit bureaus from making certain accurate statements about aged peccadilloes, although this restriction does not apply to reports requested for larger transactions. (256) Other federal commercial data statutes directly oriented toward privacy are rare. The Cable Communications Policy Act of 1984 forbids cable operators and third parties from monitoring the viewing habits of subscribers. Cable operators must tell subscribers what personal data is collected and, in general, must not disclose it to anyone without the subscriber's consent. (257) The "Bork Bill," formally known as the Video Privacy Protection Act, also prohibits most releases of customers' video rental data. (258)

Neither the privacy provisions of the Cable Act nor those of the Bork Bill appear to have been challenged in court. Some have suggested that this is evidence of their uncontroversial constitutionality. (259) More likely, this proves only that merchants in these two industries sell a great deal of sexually themed products and have no incentive to do anything to reduce their customers' confidence that their viewing habits will not become public knowledge. As a doctrinal matter, the statutes seem debatable. At least one other restriction upon the use of legally acquired transactional data failed on First Amendment grounds: When the state of Maine sought to require consumer consent before a firm could request a credit history, credit reporting agency Equifax won a judgment from the state supreme court holding that this was an unconstitutional restriction on its First Amendment rights. (260)

3. Fear.

The most important constraint on an effective response to privacy-destroying technologies is fear. While greed for marketing data drives some applications, fear seems far more central, and much harder to overcome. Employers monitor employees because they are afraid workers may be doing unproductive or even illegal things. Communities appreciate cameras in public places because, whether cameras reduce or merely displace crime, one seems to be safer in front of the lens. Law enforcement officials constantly seek new tools to compete in what they see as an arms race with terrorists, drug dealers, and other criminals. (261)

It would be well beyond the scope of this article to attempt to determine which of these fears are well founded, but any political attempt to restrict personal data collection will have to confront these fears, whether they are well founded or not.

In arguing for increased privacy protection, one subtle fear also needs to be considered: Anything that increases a citizen's reasonable expectation of privacy will, under current doctrine, also increase the scope of Fourth Amendment protections. (262) Law enforcement officials are generally not required to obtain warrants in order to examine things that people have no reasonable expectation of keeping private; expanding the reasonableness of privacy expectations would mean that law enforcement officials would have to secure warrants before aiming new technologies at homes or bodies. The answer to the subtle fear may be a counter-fear: The more commonplace that ubiquitous surveillance becomes, the less the Fourth Amendment will be able to protect the average citizen.

B. Making Privacy Rules Within the Constraints

The result of these constraints on an effective response to privacy-destroying technologies is evident from the relatively limited protection against data acquisition provided by existing privacy rules in the United States. The constraints also suggest that several proposals for improving privacy protections are likely to be less effective than proponents might hope.

1. Nonlegal proposals.

Proposals for nonlegal solutions to the problem of privacy-destroying technologies must focus either on the data collector or on the data subject. Proposals focusing on the data collector usually invoke some version of enlightened self-regulation. Proposals focusing on the data subject usually invoke the rhetoric of privacy-enhancing technologies or other forms of self-help.

Self-regulation has proved to be a chimera. In contrast, privacy-enhancing technologies clearly have a role to play in combating privacy-destroying technologies, particularly in areas such as protecting the privacy of telecommunications and other electronic messaging systems. It is unlikely, however, that privacy-enhancing technologies alone will be sufficient to meet the multifaceted challenge described in Part I above. There may be some opportunities for the law to encourage privacy-enhancing technologies through subsidies or other legal means, but frequently the most important role for the law will be to remove existing obstacles to the employment of privacy-enhancing technologies or to ensure new ones do not arise.

a. "Self-regulation."

United States privacy policy has, until recently, been dominated by a focus on a very limited number of issues and, within those issues, a commitment to ask industry to self-regulate. (263) Since the economic incentive to provide strong privacy protections is either weak, nonexistent, or at least nonuniformly distributed among all participants in the marketplace, most serious proposals for self-regulation among market participants rely on the threat of government regulation if the data collectors fail to regulate themselves sufficiently. (264)

Without some sort of government intervention to encourage self-regulation, "[w]olves self-regulate for the good of themselves and the pack, not the deer." (265) Perhaps the most visible and successful self-regulatory initiative has been TRUSTe, a private third-party privacy-assurance system. TRUSTe provides a privacy "trustmark" to about 750 online merchants who pay up to $6900 per year to license it. (266) In exchange for the fee, TRUSTe verifies the existence of the online merchant's privacy policy, but does not conduct an audit. TRUSTe does, however, investigate complaints alleging that firms have violated their privacy policies. It currently receives about 375 complaints per year, and finds about twenty percent to be valid, triggering additional investigation. These decisions do not appear to be published save in exceptional circumstances. (267)

The meaningfulness of the "trustmark" recently was called into question by the actions of a trustmark holder. TRUSTe confirmed that thirteen million copies of trustmark holder RealNetworks' RealJukebox Software had created "globally unique identifiers" ("GUIDs") and transmitted them to RealNetworks via the Internet every time the software was in use. The GUID could be associated with the user's registration information to create a profile of their listening habits. (268) RealNetworks' privacy policy disclosed none of these facts. Nevertheless, once they came to light, RealNetworks kept its "trustmark" because the data collection was a result of downloaded software, and not anything on RealNetworks' web page. Both the company's web privacy policy and its accompanying "trustmark" applied only to data collection via its web pages rather than Internet-related privacy intrusions. (269) A similar distinction between data collected via a web page and data collected by user-run software allowed Microsoft to keep its "trustmark" after the discovery that its registration software sent a GUID and accompanying user data during Windows 98 registration, even when the user told it not to. (270) TRUSTe announced, however, that it was developing a pilot software privacy program with RealNetworks. Although the announcement did not actually say that the program would be expanded to other companies, much less when, it implied that it would. (271)
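The privacy significance of a GUID lies less in the identifier itself than in the database join it enables. The short Python sketch below is a purely hypothetical illustration of that mechanism; all names, identifiers, and data structures are invented and do not describe RealNetworks' actual systems.

```python
# Hypothetical sketch: a GUID is an opaque string, but joined against
# registration records it turns "anonymous" usage reports into a named
# listening profile. All identifiers and data here are invented.

registration = {
    "GUID-123": {"name": "A. User", "email": "auser@example.com"},
}

usage_reports = [
    ("GUID-123", "track_0418"),  # (GUID sent over the network, track played)
    ("GUID-123", "track_0777"),
]

profile: dict = {}
for guid, track in usage_reports:
    identity = registration.get(guid)
    if identity:
        profile.setdefault(identity["email"], []).append(track)

print(profile)  # {'auser@example.com': ['track_0418', 'track_0777']}
```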

The RealNetworks incident followed an earlier, similar fiasco in which the FTC settled a complaint against GeoCities. (272) The FTC charged that GeoCities "misrepresented the purposes for which it was collecting personal identifying information from children and adults." (273) According to the FTC, GeoCities promised customers that their registration information would be used only to "provide members the specific advertising offers and products or services they requested and that the 'optional' information [education level, income, marital status, occupation, and interests] would not be released to anyone without the member's permission." (274) In fact, however, GeoCities created a database that included "email and postal addresses, member interest areas, and demographics including income, education, gender, marital status, and occupation" and disclosed customer data to marketers. (275) In settling the case, GeoCities issued a press release denying the allegations. GeoCities then changed its privacy policy to state that user data might be disclosed to third parties with user consent (the previous policy also implied this; in any event the FTC charge was that disclosures occurred without consent). TRUSTe, which had issued a trustmark to GeoCities during the FTC investigation, did not remove it. (276)

Critics suggest that TRUSTe's unwillingness to remove or suspend a trustmark results from its funding structure. Firms license the trustmark; in addition, some corporate sponsors, including Microsoft but neither RealNetworks nor GeoCities, contribute up to $100,000 per year in support. (277) If TRUSTe were to start suspending trustmarks, it would lose revenue; if it were to get a reputation for being too aggressive toward clients, they might decide they are better off without a trustmark and the attendant hassle. In the absence of a meaningful way for consumers to evaluate the meaning of a trustmark or competing certifications, (278) TRUSTe certainly has no economic incentive to be tough on its funding sources.

Perhaps the most troubling aspect of the TRUSTe story is that TRUSTe's defense of its actions has a great deal of merit: The expectations loaded upon it, and perhaps the publicity surrounding it, vastly exceed its modest self-imposed mission of verifying members' web-site privacy assertions, and bringing members into compliance with their own often quite limited promises. (279) Taken on its own terms, TRUSTe is a very modest first initiative in self-regulation. That said, TRUSTe's nonprofit status, the sponsorship of public interest groups such as the Electronic Frontier Foundation, and the enlightened self-interest of participant corporations who may wish to avoid government regulation all provide reasons why privacy certification bodies might someday grow teeth.

A more generic problem with self-regulatory schemes, even those limited to e-commerce or web sites in general, is that they regulate only those motivated or principled enough to take part in them. Competitive pressures might ultimately drive firms to seek privacy certification, but currently fewer than 1000 firms participate in either TRUSTe's or BBBOnline's programs, which suggests that market pressure to participate is weak to nonexistent. Indeed, after several years of calling for self-regulation regarding the collection of data from children, the Federal Trade Commission finally decided to issue extensive regulations controlling online merchants seeking to collect personal information from minors. (280) Even if, as seems to be the case, industry self-regulation is at best marginally effective without legal intervention, and current third-party trust certification bodies have only a very limited influence, it still does not mean that the FTC's response is the only way to proceed.

The United States may be unique in endorsing self-regulation without legal sanctions to incentivize or enforce it; (281) it is hard to believe that the strategy is anything more than a political device to avoid regulation. It does not follow, however, that self-regulation is a bad idea, so long as legal conditions create incentives for parties to engage in it seriously. For example, an enormous amount of energy has gone into crafting "fair information practices." (282)

One way of creating incentives for accurate, if not necessarily ideal, privacy policies would be to use legislation, market forces, and the litigiousness of Americans to create a self-policing (as opposed to self-regulating) system for web-based data collection. If all sites that collect personal data were required to disclose what they collect and what they do with it, if it were an actionable offense to violate a posted privacy policy, and if that private right of action were to carry statutory damages, then users--or class-action counsel--would have an effective incentive to police privacy policies. Indeed, the surreptitious harvesting of music preference data by RealJukeBox motivated two sets of enterprising lawyers to file class action lawsuits. (283) One federal class action suit alleged misrepresentation and violation of the Computer Fraud and Abuse Act. (284) Another class action was filed in California state court under the state's unfair business practices law. Both lawsuits, however, face a problem in valuing the damages. In the federal case, the plaintiffs seek a refund of the thirty dollars that some users paid for the registered version of the software. In the California case, plaintiffs plan to base damages upon their estimate of the market value of data that RealJukebox collected; they will pick a figure after discovery. (285) Unfortunately for the plaintiffs, there is no reason to believe that even a great deal of music preference data is worth anything near the five hundred dollars per head that their lawyers estimated for the press. The willingness of the federal plaintiffs to sue for only thirty dollars per head suggests that creating a statutory damages remedy, even with only small damages, might create a sufficient incentive to police online privacy policies.

The web, however, is not the only source of concern; other means will be required to address different technologies.

b. PETs and other self-help.

Privacy Enhancing Technologies ("PETs") have been defined as "technical devices organizationally embedded in order to protect personal identity by minimizing or eliminating the collection of data that would identify an individual or, if so desired, a legal person." (286) In addition to PETs embedded in organizations, there are also a number of closely related technologies that people can use for self-help, especially when confronted by organizations that are not privacy-friendly. Such devices can be hardware, such as masks or thick curtains, or software, such as the Platform for Privacy Preferences ("P3P"), which seeks to reduce the transaction cost of determining how much personal data should be surrendered in a given transaction.
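To make the transaction-cost point concrete, here is a minimal sketch, in Python, of the kind of automated policy matching that P3P contemplates: the user states acceptable practices once, the site publishes a machine-readable policy, and software compares the two so that no policy need be read or negotiated by hand. The category names and data structures are invented for illustration and are not the actual P3P vocabulary.

```python
# A minimal, hypothetical sketch of P3P-style preference matching.
# The field names below are illustrative assumptions, not the P3P spec.

SITE_POLICY = {
    "data_collected": {"email", "purchase_history"},
    "shared_with_third_parties": True,
}

USER_PREFERENCES = {
    "acceptable_data": {"email"},
    "allow_third_party_sharing": False,
}

def policy_acceptable(policy: dict, prefs: dict) -> bool:
    """Return True only if every declared practice fits the user's preferences."""
    if not policy["data_collected"] <= prefs["acceptable_data"]:
        return False  # the site collects categories the user has not allowed
    if policy["shared_with_third_parties"] and not prefs["allow_third_party_sharing"]:
        return False  # the site shares data; the user forbids it
    return True

# This site collects more than the user permits, so the user's software can
# warn or withhold data automatically, at near-zero cost per transaction.
print(policy_acceptable(SITE_POLICY, USER_PREFERENCES))  # False
```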

PETs and other privacy protection technologies can be integrated in a system design, or they can be a reaction to it. Law can encourage the deployment of PETs, but it can also discourage them, sometimes unintentionally. Some have suggested that the law should require, or at least encourage, the development of PETs. "Government must . . . act in a fashion that assures technological development in a direction favoring privacy protections rather than privacy intrusions." (287) It is a worthy goal and should be part of a comprehensive response to privacy-destroying technologies.

Sometimes overlooked, however, are the ways in which existing law can impose obstacles to PETs. Laws and regulations designed to discourage the spread of cryptography are only the most obvious examples of impediments to privacy-enhancing technology. Legal obstacles to privacy self-help also extend to the lowest technologies, such as antimask laws. In some cases, all PETs may need to flourish is the removal of legal barriers.

Privacy can be engineered into systems design, (288) systems can be built without much thought about privacy, or they can be constructed in ways intentionally designed to destroy it, in order to capture consumer information or create audit trails for security purposes. In each case, after the system is in operation, users may be able to deploy self-help PETs to increase their privacy.

System designers frequently have great flexibility to include privacy protections if they so choose. For example, when designing a road-pricing system, transponders can be connected to a card that records a toll balance and deducts funds as needed. No data identifying the driver or the car is needed, just whether there are sufficient funds. Or, the transponder can instead emit a unique ID code, keyed to a record, that identifies the driver and either checks for sufficient funds or bills her. The first system protects privacy, but at the cost of providing no means of billing drivers whose cards are depleted. The second system requires billing and can create a huge database of vehicular movements. (289)

In general, designers can organize the system to withhold (or never gather) data about the person, the object of the transaction, the action performed, or even the system itself. (290) Most electronic road-pricing schemes currently deployed identify the vehicle or an attached token.
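The design choice just described can be made concrete in a few lines. The Python sketch below contrasts the two architectures; it is an illustration of the trade-off under stated assumptions, not a description of any deployed tolling system.

```python
# Illustrative contrast between the two road-pricing designs discussed above.

import datetime

class AnonymousTollCard:
    """Design 1: stored value on the card; the toll plaza learns only
    whether payment succeeded."""

    def __init__(self, balance_cents: int):
        self.balance_cents = balance_cents

    def pay_toll(self, toll_cents: int) -> bool:
        if self.balance_cents >= toll_cents:
            self.balance_cents -= toll_cents
            return True
        return False  # depleted card: no record exists by which to bill the driver


class IdentifiedTransponder:
    """Design 2: a unique ID keyed to a billing record; every crossing
    is logged, creating a database of vehicular movements."""

    movement_log = []  # shared log standing in for the operator's database

    def __init__(self, unique_id: str):
        self.unique_id = unique_id

    def pay_toll(self, toll_cents: int, plaza: str) -> bool:
        # Billing can always succeed, but only because each crossing is recorded.
        IdentifiedTransponder.movement_log.append(
            (self.unique_id, plaza, toll_cents, datetime.datetime.now())
        )
        return True
```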

If privacy has been built into a system, the need for individual self-help may be small, although in this world where software and other high technology is notoriously imperfect, users may have reasons for caution. If PETs are not built into the system, or the user lacks confidence in its implementation, she may engage in self-help. The sort of technology that is likely to be effective depends upon the circumstances and the nature of the threats to privacy. If, for example, a person fears hidden cameras, then a pocket camera detector is just the thing. (291)

For matters involving electronic communications or data storage, encryption is the major PET. (292) Here, however, the United States government has engaged in a long-running effort to retard the spread of consumer cryptography that might be used to protect emails, faxes, stored data, and telephone conversations from eavesdroppers and intruders--ostensibly because these same technologies also enable the targets of investigations to shield their communications from investigators. (293) As a panel of the Ninth Circuit concluded in an opinion subsequently withdrawn for en banc consideration:

The availability and use of secure encryption may offer an opportunity to reclaim some portion of the privacy we have lost. Government efforts to control encryption thus may well implicate not only the First Amendment rights of cryptographers intent on pushing the boundaries of their science, but also the constitutional rights of each of us as potential recipients of encryption's bounty. Viewed from this perspective, the government's efforts to retard progress in cryptography may implicate the Fourth Amendment, as well as the right to speak anonymously, the right against compelled speech, and the right to informational privacy. (294)

Perhaps in fear of another adverse judgment from the Ninth Circuit, the government recently issued substantially liberalized encryption rules that for the first time allow the unrestricted export of cryptographic source code. (295) In a striking demonstration of the effects of a removal of government restrictions on PETs, the new rules emboldened Microsoft, the leading manufacturer of consumer PC operating systems, to pledge to include strong 128-bit encryption in the next release of its software. (296)

The United States' cryptography policy was an intentional effort to block the spread of a technology for reasons of national security or law enforcement convenience. Cryptography is a particularly significant PET because, if properly implemented, the mathematical advantage lies with the defender. Each increase in key length and security imposes a relatively small burden upon the party securing the data, but an exponential computational burden upon any would-be eavesdropper. Unlike so many other technologies, cryptography is relatively inexpensive and accessible to anyone with a computer or a dedicated encryption device. Cryptography is no privacy panacea, however. It is difficult to implement properly, vulnerable to every security weakness in underlying operating systems and software programs, and even at its best, it addresses only communications and records privacy--which, as Part I above demonstrates, is a significant fraction, but only a fraction, of the ways in which technology allows observers to collect information about us.
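The asymmetry between defender and eavesdropper can be shown with back-of-the-envelope arithmetic. The trial speed assumed below is chosen for illustration, not a benchmark; the point is only that each added key bit doubles the attacker's expected work while barely changing the defender's.

```python
# Rough arithmetic for the key-length asymmetry described above.
# TRIALS_PER_SECOND is an assumed attacker speed, not a measured figure.

TRIALS_PER_SECOND = 10**9  # assume a brute-force rate of one billion keys/second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 56, 64, 128):
    expected_trials = 2 ** bits / 2  # on average, half the keyspace is searched
    years = expected_trials / TRIALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{years:.3g} years to brute-force on average")

# Approximate output:
#   40-bit key:  ~1.74e-05 years (about nine minutes)
#   56-bit key:  ~1.14 years
#   64-bit key:  ~292 years
#   128-bit key: ~5.4e+21 years
```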

In other cases, legal obstacles to PETs are either by-products of other policies or the result of long-standing prohibitions that have taken on new consequences in the networked era. For example, the prohibition against "reverse engineering" software--decompiling something to find out what makes it tick--may or may not be economically efficient. (297) But it makes it nearly impossible for technically sophisticated users to satisfy themselves that programs are cryptographically secure, and thus nearly impossible for them to reassure the rest of us, unless the program's authors release the source code for review.

Rules banning low-technology privacy tools may also need reexamination in light of the reduced privacy in public places. One possible reaction to ubiquitous cameras in public places would be widespread wearing of masks as fashion accessories. Many states, however, have antimask laws on the books, usually enacted as a means of controlling the Ku Klux Klan; some of these statutes are more than one hundred years old. (298) The statutes make it a crime to appear in public in a mask. (299) Judicial opinion appears divided over whether prohibitions against appearing masked in public violate the First Amendment. (300) Regardless of the constitutional issues, it is undeniable that existing antimask laws were enacted before anyone imagined that all urban public spaces might be subject to round-the-clock surveillance. Masks, which were once identified with KKK intimidation, could take on a new and potentially more benign social purpose and connotation; if so, the merits of antimask laws--if they are even constitutional under the right to anonymous speech enunciated in McIntyre v. Ohio Elections Commission (301)--will need rethinking.

2. Using law to change the defaults.

As the dimensions of the technological threat to privacy assumptions gradually have become clearer, academics, privacy commissioners, and technologists have advanced a number of suggestions for legal reforms designed to shift the law's default rule away from formal neutrality regarding data collection. Rather than having transactional data belong jointly and severally to both parties, some proposals would create a traditional property or an intellectual property interest in personal data, which could not be taken by merchants or observers without bargaining. Others propose new privacy torts and crimes, or updating of old ones, to make various kinds of data collection in public or private spaces tortious or even criminal.

While some of these proposals have evident merit, they also have drawbacks.

a. Transactional data-oriented solutions.

Scholars and others have proposed a number of legal reforms, usually based upon either traditional property or intellectual property law, to increase the protection available to personal data by vesting the sole initial right to use it in the data subject. Although current proposals are the product of great ingenuity and thus vary considerably, the common element is a desire to change the default rules in the absence of agreement. Changing the default rule to create a property interest in personal data, even when shared with a merchant, or visible in public, has a number of attractive properties. (302) It also has significant problems, however, both theoretically and practically.

One problem is that any such rule has to be crafted with care to avoid trampling the entire First Amendment. Any rule that makes it an offense to express what one sees or knows (such as who shops in one's store or who slept with whom) strikes dangerously close to core values of free speech. (303) Current doctrine leaves open a space for limited regulation of transactional data along the lines of the Cable Act and the Bork Bill. (304) That does not mean such rules are wise or easy to draft. As Professor Kang reminds us: "Consider what would happen if Bill Clinton had sovereign control over every bit of personal information about him. Then the New York Times could not write an editorial using information about Bill Clinton without his approval." (305) No one seriously suggests giving anyone that much control over their personal data, and certainly not to public figures. Rather, property- or intellectual-property-based proposals usually concentrate on transactional data.

From a privacy perspective, the attraction of shifting the default rule is evident. Currently, user ignorance of the privacy consequences of disclosure, the extent of data collection, and the average value of a datum, combined with the relatively high transaction costs of negotiating privacy provisions in consumer transactions governed by standard form clauses, causes privacy issues to drop off the radar in much of routine economic life. Firms interested in capturing and reselling user data have almost no incentive to change this state of affairs. (306) Shifting the default rule to require a data collector to make some sort of agreement with her subject before having a right to reuse her data gives the subject the benefit of notice and of transaction costs.

The transaction cost element is particularly significant, but also potentially misleading. Shifting the default rule means that so long as the transaction costs of making an agreement are high, the right to personal data will not transfer and privacy will be protected. It is a mistake, however, to think that transaction costs are symmetrical. The very structural features of market exchange that make it costly for individuals to negotiate exceptional privacy clauses in today's market make it inexpensive for the author of the standard form clause to amend it to include a conveyance of the data and a consent to its use. (307)

Whether it is worth the trouble, or even economically efficient, to craft a system that results in people selling their data for a frequent flyer mile or two depends primarily upon whether people are able to value the consequences of disclosure properly and whether contract rules can be changed to prevent the tyranny of the standard form. If not, then the standard form will continue to dominate routine transactions, to the detriment of data privacy; privacy myopia will do the rest.

Ironically, the advances in technology that are reducing the transaction costs of particularized contracting also work to facilitate the sale of personal data, potentially lowering the cost enough to make the purchase worthwhile. If transaction costs really are dropping, it may be more important to craft rules that require separate contracts for data exchange and prevent the data sale from becoming part of a standard form. Such a rule would require not only an option to "opt-in" or "opt-out" as an explicit step in a transaction, if not a wholly separate one, but also would require that failure to convey rights to personal data have no repercussions. But even that may not suffice. Here, the European experience is especially instructive. Despite state-of-the-art data privacy law, people:

routinely and unknowingly contracted away their right to informational self-determination as part and parcel of a business deal, in which the right itself was not even a 'bargaining chip' during negotiations. But, since consent of the data subject had to be sufficient ground to permit information processing if one takes seriously the right to self-determination, such contractual devaluations of data protection were legally valid, and the individual's right to data protection suddenly turned into a toothless paper tiger. (308)

In short, even when faced with European data protection law, the standard form triumphed.

Given that property-law-based solutions are undermined in the marketplace, some European nations have gone further and removed a consumer's freedom to contract away her right to certain classes of data, such as information about race, religion, and political opinions. (309) While likely to be an effective privacy-enhancing solution, this is neither one that corrects market failure in order to let the market reach an efficient outcome, nor one that relies on property rights; it thus eliminates the most common justifications for property-law-based approaches to data privacy. (310)

b. Tort law and other approaches to public data collection.

Tort- and criminal-law-based proposals to enhance data privacy tend to differentiate between data collected in places where one has a reasonable expectation of privacy, such as one's home, and public places where the law usually presumes no such expectation. Some of the more intriguing proposals further differentiate by the means used to collect information, with sense-enhanced collections, especially new ones, being subject to increased regulation.

For example, there are proposals to expand the tort of unreasonable intrusion to include peering into private spaces. Where previously the tort often required the tortfeasor's presence in the private space, (311) the proposal allows the presence requirement to be fulfilled virtually. (312) A rejuvenated tort of unreasonable intrusion might adapt well to sense-enhanced scanning of the body or the home. It is unlikely to cope as well with data generated in commercial transactions, for the same reasons noted above: transactional data are (at least formally) disclosed with consent. Similarly, privacy torts are unlikely to have much impact on DNA or medical databases since the data are either extracted with consent, or in circumstances, such as arrests, where consent is not an issue.

There is also reason to doubt whether privacy torts can be extended to cover CCTV and other forms of public tracking. Traditionally, privacy torts do not protect things in public view on the theory that such things are, by definition, not private. (313) Expanding them to cover public places would conflict directly with the First Amendment.

Some states have chosen to promote specialized types of privacy through targeted statutes. California's antipaparazzi statute may be a model. (314) It carefully focuses on creating liability for the gathering of information by private persons using sense-enhancing tools. While expanding the zone of privacy in the home, treating one's property line like a wall impermeable to data, the statute does not cover activities on public streets and purposely avoids other First Amendment obstacles.

While the California statute focuses on creating narrow zones of privacy, an alternate approach seeks to regulate access to tools that can undermine privacy. For example, 18 U.S.C. § 2512 prohibits the manufacture, distribution, possession, and advertising of wire, oral, or electronic communication intercepting devices. (315) Perhaps it is time to call for regulation of "snooper's tools" akin to the common law and statutory regulation of "burglar's tools"? (316)

Both of these approaches have potential, although both also have practical limitations in addition to substantial First Amendment constraints. Many privacy-destroying tools have legitimate uses. Television cameras, even surveillance cameras, have their place--in banks, for example. Thus blanket rules prohibiting access to the technology are unlikely to be adopted, and would have substantial costs if they were. Rules allowing some uses but not others are likely to be difficult to police. Technology controls will work best if the technology is young and not yet widely deployed; but that is the moment when both knowledge about the technology and the chance of public outrage and legislative action are minimal. As for the California antipaparazzi statute, it applies only to private collection of sense-enhanced data. It addresses neither data collection by law enforcement nor database issues. (317) And, as noted, it does not apply to public spaces.

c. Classic data protection law.

The failure of self-regulation, and the difficulties with market-based approaches, have led regulators in Europe, and to a much lesser extent in the United States, to craft data protection laws. Although European Union laws are perhaps best known for their restrictions on data processing, reuse, or resale of data, the Union's rules, as well as those of various European nations, also contain specific limits on the collection of sensitive types of data. (318) European Union restrictions on data use have an extraterritorial dimension, in that they prohibit the export of data to countries that lack data protection rules comparable to the Union's. (319) These extraterritorial rules do not, however, require that foreign data collection laws meet the Union's standards, leaving the United States on its own to decide what protections, if any, it should enact to safeguard its consumers and citizens.

So far, laws have been few and generally narrow, with the California antipaparazzi statute a typical example. There is one sign, however, that things may be starting to change: What may be the most important United States experiment with meaningful limits on personal data collection by the private sector is about to begin. Late last year the FTC promulgated detailed rules restricting the collection of data online from children under thirteen without explicit parental consent. These rules are due to come into effect in April 2000. (320)

III. Is Information Privacy Dead?

In The Transparent Society, futurist David Brin argues that the time for privacy laws passed long before anyone noticed: "[I]t is already far too late to prevent the invasion of cameras and databases. . . . No matter how many laws are passed, it will prove quite impossible to legislate away the new surveillance tools and databases. They are here to stay." (321) Instead, perhaps anticipating smart dust, he suggests that the chief effect of privacy laws will be "to 'make the bugs smaller.'" (322) He is equally pessimistic about technical countermeasures to data acquisition, saying that "the resulting surveillance arms race can hardly favor the 'little guy'. The rich, the powerful, police agencies, and a technologically skilled elite will always have an advantage." (323) Having concluded that privacy as we knew it is impossible, Brin goes on to argue that the critical policy issue becomes whether citizens will have access to the data inevitably enjoyed by elites. Only a policy of maximal shared transparency, one in which all state-created and most privately-created personal data are equally accessible to everyone, can create the liberty and accountability needed for a free society.

Brin's pessimism about the efficacy of privacy laws reflects the law's weak response to the reality of rapidly increasing surveillance by both public and private bodies described in Part I. Current privacy laws in the United States make up at best a thin patchwork, one that is plainly inadequate to meet the challenge of new data acquisition technologies. General international agreements that address the privacy issue are no better. (324) Even the vastly more elaborate privacy laws in Europe and Canada permit almost any consensual collection and resale of personal data. (325) The world leader in the deployment of surveillance cameras, the United Kingdom, has some of the strictest data protection rules in the world, but this has done little or nothing to slow the cameras' spread. What is more, the law often tends to impose barriers to privacy-enhancing technology, or to endorse and require various forms of surveillance: In the words of one Canadian Information and Privacy Commissioner, "the pressures for surveillance are almost irresistible." (326)

Despite the very weak legal protections of informational privacy in the United States today, there is good reason to believe that Brin's pessimism about the potential for law to control technology and Scott McNealy's defeatism are unfounded, or at least premature. No legal rule is likely to be perfect. Laws are violated all the time. But making things illegal, or regulating them, does influence outcomes, and sometimes the effort required to achieve those outcomes is worth the cost.

Wiretap statutes are a case in point. It is illegal for the police to wiretap telephone lines without a warrant, and it is illegal for third parties to intercept both landline and cellular calls without the consent of one or both parties to the call. (327) It would be naive in the extreme to suggest that either of these practices completely disappeared as a result of their illegality; it would be equally wrong, though, to suggest that this demonstrates that the laws are ineffective. If wiretapping and telephone eavesdropping were legal, and the tools easily available in every hobby shop, (328) there would be much more wiretapping and eavesdropping.

Even the drug war, which surely stands for the proposition that the law has its limits as a tool of social control in a democracy, also supports the proposition that law can sometimes change behavior. It also reminds us, though, that law alone might not be enough. There are many different kinds of laws, and command and control regulation is often the least effective option. (329)

The contrast between the wiretap laws and the drug war underlines another important element in any attempt to use law to rein in personal data collection: Unless there is a mechanism that creates an incentive for someone to police for compliance, legal rules will have at best limited effectiveness. (330) Policing of so-called victimless crimes such as drug usage is hampered by the lack of such incentives. In contrast, the most important policing of wiretap law is conducted by judges, who throw out illegally gathered evidence in the course of reviewing petitions by highly motivated defendants. In other cases, such as statutory damages for falsifying privacy policies, (331) the law can create or reinforce economic incentives for policing compliance.

At least one other contrast shapes and constrains any attempt to craft new legal responses to privacy-destroying technology. As the contrast between Parts I and II of this paper demonstrates, our legal categories for thinking about data collection are the product of a radically different evolution from the technological arms race that produces new ways of capturing information. Privacy-destroying technologies do not line up particularly well with the legal rules that govern them. This explains why the United States Constitution is unlikely to be the source of a great expansion in informational privacy rights. The Constitution does not speak of privacy, much less informational privacy. Even though the Supreme Court has acknowledged that "there is a zone of privacy surrounding every individual," (332) the contours of that "zone," at least as applied to data, are murky indeed. The Supreme Court's relatively few discussions of informational privacy tend to be either in dicta or in the context of finding other interests more important, (333) or both. (334) Similarly, familiar constitutional categories such as public forums, limited public forums, and nonpublic forums map poorly onto future debates about how to create or protect zones of privacy against privacy-destroying technologies.

The variety of potential uses and users of data frustrates any holistic attempt to protect data privacy. Again, constitutional doctrine is illustrative. Whatever right to informational privacy may exist today, it is a right against governmentally sponsored invasions of privacy only--it does not reach private conduct. (335) Thus, even if the courts were to find in the federal Constitution a more robust informational privacy right, it would address only a portion of the problem. (336)

Rules about data acquisition, retention, and use that might work for nosy neighbors, merchants, or credit bureaus might not be appropriate when applied to intelligence agencies. Conversely, governments may have access to information or technology that the private sector lacks today but might obtain tomorrow; rules that focus too narrowly on specific uses or users are doomed to lag behind technology. Restricting one's scope (as I have in this article) to data acquisition, and leaving aside the important issues of data retention and reuse, may make the problem more manageable, but even so it remains dauntingly complex because the regulation of a single technology tends to be framed in different ways depending upon the context. Sense-enhanced searches, for example, tend to be treated as Fourth Amendment issues when conducted by the government. If the intruder is private, the Fourth Amendment is irrelevant. Instead, one might have to consider whether her actions constitute an invasive tort of some type (or perhaps even a misappropriation of information), who owns the information, and whether a proposed rule limiting the acquisition or publication of the information might run afoul of the First Amendment. (337)

That said, technological change has not yet moved so far or so quickly as to make legal approaches to privacy protection irrelevant. There is much the law can do, only a little of which has yet been tried. Many of the suggestions outlined above are piecemeal, preliminary, or incremental. At best they form only part of a more general strategy, which will also focus on encouraging the adoption of fair information practices and the regulation of data use once it has been collected. Whenever the law can address the issue of data collection itself, however, it reduces the pressure on data protection law and contributes greatly to data privacy protection; the converse is also true: Rules about data retention and use will shape what is collected and how it is done. (338)

There is no magic bullet, no panacea. If the privacy pessimists are to be proved wrong, the great diversity of new privacy-destroying technologies will have to be met with a legal and social response that is at least as subtle and multifaceted as the technological challenge. Given the rapid pace at which privacy-destroying technologies are being invented and deployed, a legal response must come soon, or it will indeed be too late.