February 7, 2022
Welcome to Decrypting a Defense, the monthly newsletter of the Legal Aid Society’s Digital Forensics Unit. This month, in recognition of Black History Month, we highlight the surveillance of anti-racist groups by law enforcement and the use of predictive policing algorithms. Diane Akerman examines the myth of the objective algorithm. Benjamin Burger discusses surveillance of anti-racist groups, and the use of screenshots as evidence. Finally, Brandon Reim answers a question about virtual private networks.
The Digital Forensics Unit of the Legal Aid Society was created in 2013 in recognition of the growing use of digital evidence in the criminal legal system. Consisting of attorneys and forensic analysts and examiners, the Unit provides support and analysis to the Criminal, Juvenile Rights, and Civil Practices of the Legal Aid Society.
In the News
The Myth of the Objective Algorithm
Diane Akerman, Digital Forensics Staff Attorney
“Accused not of crimes they have committed, but of crimes they will commit.”
When Philip K. Dick wrote The Minority Report in 1956, it was considered science fiction. In 2022, his story about predictive policing and pre-crime reads less like fiction and more like a headline.
Consider the story of Robert McDaniel, a resident of Chicago, put on the Chicago Police Department’s (CPD) “heat list” based on predictive policing software. The algorithm deployed could only say that McDaniel was “more likely” to be involved in a shooting but did not specify whether he would be the victim or perpetrator.
Nonetheless, armed with nothing more than the computations of a computer program, CPD surveilled Mr. McDaniel extensively. Tragically, Mr. McDaniel was later the victim of two non-fatal shootings – targeted because constant police surveillance gave the impression that he was an informant or was cooperating with police to surveil his neighborhood. Ultimately, the program was trading in self-fulfilling prophecies rather than harm reduction.
Predictive policing tools suffer from the same hazards that critics of risk assessment tools have long decried. By using historical crime data – data amassed through years of racist policing and law enforcement – they simply reinforce and justify continued racist policing tactics. A Gizmodo report found that PredPol, for example, overwhelmingly targeted low-income, Black, and Latino neighborhoods. CompStat, developed in New York City as a “data-driven approach to policing,” was in part responsible for the NYPD’s decades of harassment, particularly of young Black and Latino men under stop-and-frisk.
These purportedly objective, data-driven tools affect people at all levels of involvement in the criminal legal system – even those like McDaniel, who, at the time of his targeting, was not suspected or accused of any crime. Over the years, these systems have evolved from targeting neighborhoods and groups to targeting individuals. Risk assessment tools in bail determinations replicate biases in the criminal legal system when making supposedly individualized recommendations. The Justice Department’s risk assessment tool used for parolees, PATTERN, “overpredicted the risk that many Black, Hispanic, and Asian people would commit new crimes or violate rules after leaving prison.”
Proponents of these tools rely on the myth of algorithmic objectivity, an idea that is demonstrably flawed and discredited. What’s next? Artificial-intelligence-based prosecutions?
Anti-Racist Groups Monitored By Police
Benjamin Burger, Digital Forensics Staff Attorney
Surveillance of progressive activists by law enforcement is not a new phenomenon. However, the Southern Poverty Law Center recently detailed how the Washington D.C. Metropolitan Police Department’s Intelligence Bureau surveilled Black-led organizations before the George Floyd protests in 2020, and as far back as May 2011. According to emails obtained from Distributed Denial of Secrets, a transparency collective, the MPD issued numerous security bulletins focused on the D.C. Black Lives Matter affiliate. MPD also tracked D.C. BLM via social media, monitoring Facebook, Instagram, and Twitter. In one notable example, MPD used disinformation published on a right-wing website as a basis for passing information to the Secret Service and looking for connections between D.C. BLM and threats to law enforcement.
In New York City, the NYPD’s Intelligence Division has played a similar role, investigating anti-racist and progressive organizations. Nine years ago, a prominent politician even called for an audit of the division, citing “the serious allegations and evidence of wrongdoing charged against the Division, which is the basis of three federal lawsuits concerning the Intelligence Division's unconstitutional targeting of civil rights groups, leftist organizations, and religious groups.” However, two successive city Comptrollers - John Liu and Scott Stringer - failed to audit the Intelligence Division despite promises to the contrary. Investigations into the Intelligence Division have shown that political surveillance has been ineffective and has failed to prevent crime. Civilian leadership - in New York City, Washington D.C., and across the country - needs to reassert oversight of these intelligence units, as recommended by that politician almost a decade ago. That lawmaker? New York City Mayor Eric Adams.
In the Courts
Screenshots and the Best Evidence Rule
Benjamin Burger, Digital Forensics Staff Attorney
The best evidence rule is one of those law school concepts that quickly leaves the brain upon the conclusion of the bar exam. Most attorneys never have to grapple with the rule or its numerous exceptions. Codified in Federal Rule of Evidence 1002, the best evidence rule states that, “[a]n original writing, recording, or photograph is required in order to prove its content unless these rules or a federal statute provides otherwise.” See FRE 1002. In New York, where evidentiary law is not codified, the Court of Appeals has stated that “[t]he best evidence rule requires the production of an original writing where its contents are in dispute and sought to be proven[.]” See People v. Haggerty, 23 N.Y.3d 871, 876 (2014). Exceptions to the best evidence rule include copies of business records, scanned documents converted to electronic form, and specific certified copies of records, like government documents or diplomas. Perhaps the most widely known exception to the best evidence rule is when the original document or recording is destroyed, cannot be obtained through judicial process, or a party refuses to produce the original. In these cases, secondary evidence may be admitted to prove the content of the original.
The best evidence rule was formulated at a time when copies of original documents were subject to inaccuracies due to hand copying. Although modern copying has become reliable and accurate, technology has also made it easier to alter documents, recordings, or videos. Sometimes these changes are so seamless as to be unrecognizable as inauthentic. Unsurprisingly, courts have begun to grapple with issues surrounding technology and the best evidence rule. Last year, a federal district court held that the best evidence rule did not allow for screenshots of messages to be used to show that Facebook Messages had been sent between two individuals. See Edwards v. Junior State of Am. Found., 2021 WL 1600282, at *7 (E.D. Tex. Apr. 23, 2021). As the court noted, “one need not be familiar with the Best Evidence Rule to understand that the actual Messages may be important in proving that someone sent the Messages in question and that screenshots may be insufficient to that end.” Id.
As part of the Texas case, a computer forensics expert testified that it was “easy to create fake Facebook conversations using online tools, however it would be very difficult to fabricate a conversation on the Facebook platform itself.” Id. at 4. In fact, digital forensics experts have previously written about the ease with which people can alter or create fake message screenshots. Courts have been inclined to take a flexible approach to admitting screenshots into evidence. See People v. Price, 29 N.Y.3d 472, 481 (2017) (Rivera, J., concurring) (“We have long recognized that authentication is not subject to a one-size-fits all approach but, rather, the proof necessary to establish the reliability of the proposed evidence may differ according to the nature of the evidence sought to be admitted”) (internal quotations omitted). However, as screenshots and other reproductions of digital content are more frequently used at hearings and trials, the courts will need to adopt an approach that protects against fakes and other forms of altered evidence.
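One safeguard digital forensics examiners commonly rely on when a native export of messages is available is cryptographic hashing: any alteration to a file, however visually seamless, produces a completely different hash. The sketch below illustrates the idea in Python; the file contents shown are hypothetical stand-ins, not an actual platform export format.

```python
# Illustrative sketch: comparing cryptographic hashes to detect alteration.
# The byte strings below are hypothetical stand-ins for message exports.
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"Messenger native export, as produced by the platform"
copy = b"Messenger native export, as produced by the platform"
altered = b"Messenger native export, edited after production"

# Identical content always yields an identical hash:
print(sha256_of(original) == sha256_of(copy))
# Any edit, even a single character, changes the hash entirely:
print(sha256_of(original) == sha256_of(altered))
```

A screenshot offers no comparable check: there is no original digest against which the image can be verified.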
Ask an Examiner
Do you have a question about digital forensics or electronic surveillance? Please send it to AskDFU@legal-aid.org and we may feature it in an upcoming issue of our newsletter. No identifying information will be used without your permission.
Q. The prosecutor in my case alleges that my client used a VPN. What is a VPN and what does it have to do with the internet?
A. “VPN” stands for “Virtual Private Network.” Typically, a user connects directly to the internet through their personal computer, phone, or tablet. However, when a person uses a VPN, there is an extra step in the connection process. First, you connect to a VPN service, which provides you with a new Internet Protocol (IP) address. Second, using that IP address, you access the internet through an encrypted network (sometimes referred to as a “tunnel”) that is provided by the VPN service. Most VPN services allow you to choose the country or city through which you are connecting to the internet. For example, you can connect through London when you are physically present in New York City.
Using a reliable, trustworthy VPN provides better privacy and security when web browsing. Connecting to the internet through an encrypted server means that connections over public Wi-Fi networks, like at Starbucks, are more secure. Usually, public networks can “see” everything your device sends across them when that traffic is unencrypted. However, when connecting through a VPN, the data is encrypted and cannot be read by someone looking at network traffic. VPNs can also be used to hide the websites a user visits. Most internet service providers (ISPs), like AT&T or Verizon, track the websites you visit and sell information about search and browsing habits to advertisers. A VPN prevents the ISP from collecting data about your internet usage. Another popular use for VPNs is to watch content on a streaming platform that is locked to another location. For example, due to licensing agreements, the programs on Netflix vary by country and region. By changing your IP address to another region, you may be able to access content unavailable in the United States.
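To make the “tunnel” idea concrete, here is a toy Python sketch of what an on-path observer sees with and without encryption. Everything in it is a hypothetical simplification for illustration only: the shared key, the XOR-based cipher, and the sample request are stand-ins, not a real VPN protocol such as WireGuard or OpenVPN.

```python
# Toy illustration (NOT a real VPN protocol): what an on-path observer
# sees with and without an encrypted tunnel. Assumes a shared key has
# already been negotiated with the VPN server.
import hashlib

def keystream(key: bytes):
    """Derive an endless keystream from the key (illustrative only)."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def tunnel_seal(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream -- a stand-in for real encryption."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

request = b"GET /private-page HTTP/1.1\r\nHost: example.com\r\n\r\n"
key = b"shared-secret-negotiated-with-vpn-server"  # hypothetical key

# Without a VPN, an observer on the network sees the request itself:
print(request)

# Through the tunnel, the same observer sees only ciphertext:
sealed = tunnel_seal(key, request)
print(sealed)

# The VPN server reverses the operation (XOR is symmetric) and forwards
# the original request to its destination:
assert tunnel_seal(key, sealed) == request
```

The real cryptography is far stronger, but the structure is the same: only the VPN server can unwrap the traffic, so everyone between the user and the server sees opaque bytes rather than the request.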
VPNs can also be used to commit internet-based crimes anonymously. The IP address assigned by the VPN makes it difficult for law enforcement to trace activity back to the user’s original IP address. However, it is not impossible. Some VPN services keep “logs,” recording when a person uses their service; some even log everything a user does while connected, along with the user’s originating IP address. As a result, even though a user’s VPN-assigned IP address is anonymous, the VPN provider may still be able to identify the person’s original IP address and internet activity and report it to law enforcement.
Although VPNs provide added privacy and security to internet browsing, their purpose is to prevent ISPs and other businesses from commodifying internet activity, not to enable computer crimes.
- Brandon Reim, Digital Forensics Analyst
Upcoming Events
February 8, 2022
S.T.O.P. x RadTech: Sex Work, Surveillance, & Resistance (Virtual)
February 15, 2022
The Risks of Bias in Artificial Intelligence (NYCBA) (Virtual)
February 16, 2022
What Does Law Enforcement Need to Know about Precision Policing 2.0? (Justice Clearinghouse) (Virtual)
March 5-13, 2022
NYC Open Data Week (NYC Mayor’s Office of Data Analytics, BetaNYC, and Data Through Design) (Virtual & in-person)
March 7-10, 2022
Mozilla Festival (MozFest 2022) (Virtual)
March 29, 2022
Intro To Artificial Intelligence (AI) Part 1: AI As Evidence In Litigation (NYSBA) (Virtual)
April 5, 2022
Intro To Artificial Intelligence (AI) Part 2: AI As A Litigation Tool (NYSBA) (Virtual)
April 7, 2022
SANS Open-Source Intelligence Summit 2022 (Virtual)
April 7-9, 2022
NACDL Making Sense of Science: Forensic Science & the Law Seminar (Las Vegas, NV)
April 11-13, 2022
Magnet User Summit (Nashville, TN)
May 9-12, 2022
Techno Security & Digital Forensics Conference (Myrtle Beach, SC)
July 22-24, 2022
A New HOPE (Hackers on Planet Earth) (Queens, NY)
August 11-14, 2022
DEF CON 30 (Las Vegas, NV)
October 10-12, 2022
Techno Security & Digital Forensics Conference (San Diego, CA)
Small Bytes
Boston Police Department Used Forfeiture Funds to Hide Purchase Of Surveillance Tech From City Reps (Techdirt)
Feds’ spending on facial recognition tech expands, despite privacy concerns (CyberScoop)
A Tesla on autopilot killed two people in Gardena. Is the driver guilty of manslaughter? (Los Angeles Times)
Sedition Prosecution Of Oath Keepers Members Shows The FBI Can Still Work Around Encryption (Techdirt)
Can People Tell When Blocked on Instagram, Messenger, Twitter (Consumer Reports)
Police Outsourcing Human Interaction With Homeless People to Boston Dynamics’ Robot Dog (Vice)
Adams must not drag us toward tech dystopia (New York Daily News)
The Battle for the World’s Most Powerful Cyberweapon (New York Times)