28 June 2012

Filtering

The 55-page 'Can a Computer Intercept Your Email?' (Marquette Law School Legal Studies Paper No. 12-05, forthcoming in Cardozo Law Review) by Bruce Boyden looks at the legality of US email filtering schemes.

Boyden comments that
 In recent years it has become feasible for computers to rapidly scan the contents of large amounts of communications traffic to identify certain characteristics of those messages: that they are spam, contain malware, discuss various products or services, are written in a particular dialect, contain copyright-infringing files, or discuss symptoms of particular diseases. There is a wide variety of potential uses for this technology, such as research, filtering, or advertising. But the legal status of automated processing, if it is done without advance consent, is unclear. Where it results in the disclosure of the contents of a message to others, that clearly violates the federal law governing communications privacy, the Electronic Communications Privacy Act (ECPA). But what if no record of the contents of the communication is ever made? Does it violate communications privacy simply to have a computer scan emails? 
I argue that automated processing that leaves no record of the contents of a communication does not violate the ECPA, because it does not “intercept” that communication within the meaning of the Act. The history, purpose, and judicial interpretation of the ECPA all support this reading: interception requires at least the potential for human awareness of the contents. Furthermore, this is not simply an accident of drafting, an omission due to the limited foresight of legislators. Under most theories of privacy, automated processing does not harm privacy. Automated processing may in some cases lead to harm, but those harms are not, in fact, privacy harms, and should be analyzed instead under other legal regimes better adapted to dealing with such issues.
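Boyden's technical premise, automated scanning that flags characteristics of a message without any record of its contents being kept or reviewed by a person, can be pictured with a minimal sketch in Python (the patterns, signatures and function names below are hypothetical illustrations, not any ISP's actual filter):

    import re

    # Hypothetical markers, for illustration only; real filters rely on far
    # larger rule sets and statistical models.
    SPAM_PATTERNS = [re.compile(p, re.IGNORECASE)
                     for p in (r"\bact now\b", r"\bfree prize\b")]
    MALWARE_SIGNATURES = (b"EICAR-TEST-SIGNATURE",)  # placeholder byte signature

    def scan_message(body: str, attachments: list[bytes]) -> dict:
        """Classify a message and return only boolean flags.

        The contents are read transiently; nothing from the body or the
        attachments is logged, stored, or surfaced to a human reviewer.
        """
        is_spam = any(p.search(body) for p in SPAM_PATTERNS)
        has_malware = any(sig in blob
                          for blob in attachments
                          for sig in MALWARE_SIGNATURES)
        return {"spam": is_spam, "malware": has_malware}

The output of such a scan is a pair of flags rather than a copy of the message; whether that distinction takes the scan outside the ECPA's notion of 'interception' is the question the paper addresses.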
He concludes that
a communications privacy statute protects against one particular form of harm, one that is relatively easy to identify but difficult to measure: the loss that everyone using a particular mode of communication will experience if using that mode results in a loss of privacy. ... The difficulty is that “privacy” is a capacious concept. Without clear boundaries, there is a danger that a “privacy harm” justifying the invocation of a communications privacy statute could be defined as simply any negative consequence that results from the use of private, personal information in transit, regardless of its effect on status, reputation, control, or autonomy. Such a definition would have the advantage of making the rule of liability less dependent on context. But it is far too broad, as it would sweep in much activity that is not a privacy violation, as opposed to frustration of some other goal. There are other legal regimes to provide redress for other sorts of harms, but they may be more difficult to invoke - they may require proof of actual harm, or objective unreasonableness, or emotional distress, or state action. It may therefore be tempting to take advantage of the nebulousness of the concept of “privacy” by classifying harms resulting from handling of communications as privacy harms, giving rise to a claim under the present or a future amended Wiretap Act. 
But that would be a mistake. Using the unilinear penalties of the Wiretap Act to address highly contextualized harms would be like using a sledgehammer to repair a filigree. Consider a colorful example to illustrate the point: suppose someone sends an email with an attachment and the ISP scans the content for malware. However, in the process and as a result of the scan, the ISP’s email server explodes and all data stored on it is lost, including many of the sender’s emails. The loss of those emails is certainly detrimental to the sender, and it resulted from a use of the content of his or her communication. The proper rule for analyzing liability in such a situation is negligence, breach of contract, or product liability. The loss of the emails might fairly be said to be a harm resulting from failure to take proper precautions, or failure to live up to a promise, or manufacturing a defective product. But it is not a privacy harm. ... 
to the extent there are other goals impeded by some automated processing of the contents of communications, other legal regulatory schemes are better disposed to achieve those goals. Competitive harms are governed by trademark law, unfair competition law, antitrust law, and advertising law. The Due Process Clauses of the Fifth and Fourteenth Amendments have a large body of doctrine associated with them to adjudicate what constitutes fair procedures. The Wiretap Act’s core competencies lie elsewhere. The Act protects the privacy of communications - the penalties attach to interception, with only limited categorical exceptions, not measured according to the use or potential harm that results. The intrusion itself is the harm the Act prevents. The few instances in which the Wiretap Act requires an examination of the context of an interception - the consent exception, or the ordinary course of business exception, for example - are among the most problematic and most administratively difficult provisions in the Act to apply. Importing contextual determinations into a communications privacy statute reduces the effectiveness of the statute. Using the Wiretap Act as a more general privacy regulation is problematic because the nature of privacy is too amorphous to serve as the clear trigger for liability a communications privacy statute requires. 
We are only now at the advent of the use of computers to assist with tasks that previously were the sole province of human judgement. This development is one that holds considerable promise for assisting humans in coping with some of the consequences of the digital age, namely the flood of information that has resulted from the increased capacity to collect, store, copy, and transmit data. Automatic processing can help by categorizing, filtering, routing, or identifying patterns in that data and taking appropriate actions, without the need for human input. 
Such automated processing does not pose any threat to privacy. Although there is a tendency to anthropomorphize computers, just like we anthropomorphize cars and toasters, a computer scanning an email is the functional equivalent of a thermostat turning on the heat. A thermostat is not a surveillance device; it does not monitor a house and make a decision about what temperature the house should be at. It mechanically triggers a switch according to its programming. Automated processing of communications is similar. There is therefore no need to erect a legal barrier to such processing in order to protect privacy, and the current Wiretap Act does not impose one. The Act has always required at least the prospect of human review, and not only because it was initially drafted in 1968. Rather, it is because, as the drafters of the ECPA in 1986 understood, computer monitoring is qualitatively different from human monitoring. It is the threat of human use of personal information that reduces privacy, and not simply that one’s information may be used in some way. 
It is too soon to tell exactly how much value there will be in automatically scanning and processing communications in situations that do not fall within an exception to the Wiretap Act - where prior consent cannot be obtained, and where the purpose is something other than operating or maintaining a computer network. But it appears likely that at least some useful applications would be impeded. For example, environmental controls based on detecting whether there is conversation or other sounds within a given room would require obtrusive notices to be placed around the room to ensure implied consent, perhaps detracting from the room’s aesthetics and perhaps leading to some uncertainty as to whether all users of the room will see them. There is no need to bear those costs, however, in the name of privacy.