As a hostage and their kidnapper physically struggle, each desperately trying to pry a loaded firearm from the other, a police sniper takes the shot. The entire world pauses to look at their smartphones for an update on the ongoing hostage crisis, and then moves on just as quickly. This surreal climax of the Black Mirror episode “Smithereens” brilliantly sets the visceral stakes of the ongoing transactions between Big Tech and Big Brother. But as Professor Yan Fang uncovers, this arms-length partnership is not as seamless as Big Tech pessimists might believe. There are what she calls “knowledge misalignments” between these two institutions that complicate the picture of the next generation of law enforcement, investigations, and individual privacy rights.
In Internet Technology Companies as Evidence Intermediaries, Fang discusses the reality that tech companies have become evidence intermediaries. This is fictionally illustrated in “Smithereens,” where the kidnapping of a social media company’s employee leads to an unlikely partnership between the social media company and law enforcement as both try to uncover as much information as possible about the kidnapper. One of the eerie takeaways from this on-screen partnership is that the social media company is able to access and leverage far more information about the kidnapper from his social media profiles than police detectives with years of experience. This striking commentary illustrates the real-world truth that tech companies are custodians of petabytes of consumer information that billions of people around the world freely share on their platforms. Thus, Fang describes that when law enforcement agencies (LEAs) seek information about these consumers for a variety of investigatory purposes, tech companies serve as the intermediary between LEAs and the trove of evidence they seek.
While Black Mirror specializes in drama and hyperbole, the stakes are indeed high. To take one example, LEAs have used genetic information from 23andMe to solve cold cases with DNA evidence. Privacy experts also warn that LEAs may seek information from menstrual-cycle tracking apps to identify pregnancies and potential abortions in a post-Roe v. Wade world, in jurisdictions where abortion is criminalized.
In a growing law and tech field that can ironically feel stagnant, canvassing the same issues and proposing the same solutions, Fang’s piece shows innovation in its methodology and scholarly contributions. Over the course of several years, Fang engaged in a qualitative research study, interviewing dozens of employees from tech companies and LEAs to learn how their exchange of information works on the ground. Breaking free of the ivory tower and going where the rubber meets the road uncovered some surprising takeaways from the people who actually have power over our most intimate information.
Fang’s contribution to this literature identifies just how much gets lost in translation between Big Tech and LEAs, which she terms “knowledge misalignment.” She uses this term to summarize her findings that both parties in the exchange of information often lack adequate knowledge of the other’s policies, procedures, or purposes. For example, when an LEA seeks specific and targeted information, it often does not know how to request that information from the tech company. Perhaps the LEA will ask about a person’s “buddy list,” but this vernacular may not correspond with how the tech company keeps information. The tech company might allow users to create a “friends list,” or it might have both a “buddy list” and a “friends list,” or neither. Another problem arises when an LEA blindly requests “all relevant information.” That request might be overbroad, and it might well overwhelm the LEA, producing a haystack when it is really looking for a needle.
On the other side of the knowledge misalignment relationship, a tech employee who is tasked with responding to an LEA request for information will almost never have knowledge of the LEA’s investigatory purpose, and this may limit the employee’s ability to adequately respond. If an LEA asks for a consumer’s “session time,” a tech employee might be left wondering what type of information the LEA really wants. Do they want a record of when a consumer logged in and logged out of the company’s website? Or do they really want a record of when the consumer rented or sold something on the website? Without more knowledge of the LEA’s investigatory purpose (such as investigating a person renting property to facilitate production of methamphetamine, or communicating inappropriately with underage children), tech employees are making their best guess in a good-faith effort to be responsive to the LEA’s lawful request.
The findings I found most interesting from Fang’s interviews are those that intersect with my normative suspicions of the Big Tech–Big Brother relationship. On the tech side, employees worried about providing unresponsive information to LEAs because it might increase their workload: unresponsive productions could lead to LEAs following up for more information, even though Fang found that such follow-ups were rare. This perspective of tech employees surprised me; perhaps in my naivety, I expected tech companies to be champions of safeguarding consumers’ information from government intrusion. Instead, Fang’s study left me with a sense that these employees, while well-intentioned, are struggling to find the right balance between multiple stakeholders: the consumers who trust them with their data, LEAs who legitimately need information for many important purposes, and their co-workers who must manage their workflow. Fang also notes that some tech companies have developed their own procedures for streamlining this process to decrease transaction costs. Not every tech company is going to litigate its consumers’ privacy rights to protect against government encroachment. Perhaps Fang’s work has made me finally realize that tech companies’ promises of privacy are a farce in this new age of human flourishing.
This dovetails with Fang’s findings from interviews with LEA personnel. LEAs reported encountering both cooperative and adversarial attitudes from tech companies. Many found tech companies incredibly helpful, going the extra mile to help an officer obtain the information sought; others reported that tech companies could provide arbitrary responses that produced no helpful information. As somebody who believes strongly that the government must meet a high burden to infringe on individual liberties, I am suspicious of private tech companies helping LEAs meet their burden. Unfortunately, it appears the dynamic works in the opposite way for some tech companies: they see it as a burden to be constantly pestered by LEAs, so they have tried to make the process easier for everyone. This makes sense from the perspective of a private company seeking to reduce its transaction costs, but it leads to a perverse result when the company thereby further empowers the government to peer into somebody’s private life for the purposes of criminal or civil prosecution. While the normative balance between privacy and security is not within the scope of Fang’s study, she nevertheless contributes to this age-old discussion.
Fang’s study also uncovers an underappreciated truth: LEAs often acquiesce to the practices and expertise of tech companies. This practical realization flips the power dynamic of public versus private on its head. Because the tech companies have what the LEAs want, they are not merely evidentiary intermediaries, but gatekeepers. Tech companies possess consumer information and are experts in how to access, collect, and interpret it. With this power comes great responsibility. If tech companies were to take an adversarial stance to protect their consumers’ personal information, LEAs would seldom have the time or resources to fight for it; but as tech companies consistently take a more cooperative approach, they give LEAs access to information that, in previous generations, would have taken countless hours of work by multiple trained investigators to uncover. Tech companies need not acquiesce to LEAs; they have the power and responsibility to serve as a check against government overreach.
Fang addresses this overall approach of treating tech companies as protectors of privacy and gives some reason for caution. First, there may be instances in which tech companies help LEAs in ways that lead to less intrusive information being shared. When tech employees work with LEAs to standardize or otherwise translate their requests, this often results in smaller amounts of information being shared and prevents the sharing of large amounts of unresponsive information. Second, she correctly cautions us against putting too much trust in tech companies as the chivalrous protectors of our data. Tech companies will always seek more information, because information is money; the government will always seek more information, because information is power; and if Big Tech and Big Brother are both operating according to these incentives, perhaps the responsibility lies with the individual to protect themselves. But this option seems less and less likely as we rely more on Big Tech to make our daily lives easier.
As we peer into the black mirror of our phones, tablets, and screens, what do we see? Fang’s work uncovers the nuances of a growing new relationship between Big Tech and Big Brother. Currently, this relationship suffers from knowledge misalignments, but both institutions seek to iron out many of these asymmetries. For those of us staring down the barrel of an LEA sniper, rightly or wrongly, the stakes are life and death. And for the others who will glance at their phones for an update and move on, the stakes are just as high: this relationship will shape their lives, and their deaths.