Policing Technology, Amazon Surveillance Promises, Tech Standards, Accrediting Digital Forensics & More
Vol. 5, Issue 2
February 5, 2024
Welcome to Decrypting a Defense, the monthly newsletter of the Legal Aid Society’s Digital Forensics Unit. Benjamin Burger discusses the lack of regulation governing police use of technology. Diane Akerman examines Amazon Rekognition’s about-face. Shane Ferro reviews attempts at forensic technology standards. Finally, guest columnist Brian Cummings explains why accreditation and oversight are important parts of digital forensics.
The Digital Forensics Unit of The Legal Aid Society was created in 2013 in recognition of the growing use of digital evidence in the criminal legal system. Consisting of attorneys and forensic analysts, the Unit provides support and analysis to the Criminal, Juvenile Rights, and Civil Practices of The Legal Aid Society.
In the News
Policing and the Responsible Use of Technology
Benjamin Burger, Digital Forensics Staff Attorney
Wired recently published an alarming story about the use of DNA phenotyping in combination with facial recognition. Police officers in California sent DNA samples, collected at a crime scene from an unknown person, to Parabon NanoLabs. Parabon produced a “Snapshot Phenotype Report,” a 3-D rendering of a face supposedly based on the DNA sample the police provided. That 3-D rendered “face” was then run through a facial recognition system in an attempt to identify the “suspect.”
There are significant concerns about using a computer-generated image, allegedly based on genetic attributes, in a facial recognition system. First, there is no scientific evidence that rendering a “face” from DNA is even possible. Parabon at least recognizes this, as it prohibits the use of its rendered faces in facial recognition systems. Second, the possibility of misidentification is clearly heightened when a computer-generated face is used to identify a suspect. Again, there is no evidence that genetic attributes can actually be used to generate an image of a face. Using an “incorrect” image as an input to a facial recognition system means that any output will be tainted by the initial error, a phenomenon better known as “garbage in, garbage out.”
This story raises another issue that we have repeatedly seen in New York City and across the country: there is very little, if any, national or local regulation of law enforcement’s use of technology. In New York City, the New York City Police Department vehemently opposed the Public Oversight of Surveillance Technology (POST) Act, which merely required the agency to draft use policies for surveillance technology. Otherwise, the agency has been left to its own devices in implementing technology, which has led to frightening and ridiculous uses. Some non-profits have proposed regulations and policies that would govern police use of technology, but in many instances these regulatory attempts come only after the police have purchased and deployed novel surveillance tools.
One area where regulation of law enforcement could be effective is contracting and procurement. Instead of creating policies that govern police use of technology after the fact, it would be more effective to limit law enforcement’s purchase of these technologies in the first place. Local and state governments have numerous bills before them that would limit the ability of law enforcement to purchase or use surveillance technology. Instead of admonishing police for using technology in violation of terms of service or without a scientific basis, policymakers could adopt rules that require law enforcement to identify their need for a specific technology, adopt use policies prior to purchase, and condition continued use or licensing on abiding by those policies. Oversight of law enforcement is always a politically fraught subject, but we do not have the luxury of patience when it comes to police use of powerful technologies like facial recognition and probabilistic genotyping.
Amazon Giveth and Amazon Taketh Away and then Amazon Giveth Again
Diane Akerman, Digital Forensics Staff Attorney
In the wake of the 2020 George Floyd Protests, Amazon announced a one year pause on the sale of its facial recognition technology - Rekognition - to law enforcement. A year later, in May 2021, the tech giant extended the moratorium indefinitely. Later that year, Amazon also implemented over 100 changes to its products in response to a civil rights and civil liberties audit of Ring by the NYU School of Law Policing Project. One such change was the addition of a new feature called "Request For Assistance," which required local law enforcement agencies to publicly post all requests for videos in a public forum.
It's hard to praise moves made by Amazon, even when they purport to protect privacy rights. After all, Amazon has created a surveillance behemoth of cheap and accessible consumer surveillance tools, complete with partnerships with law enforcement agencies across the country. Which brings us to now. First, Amazon suddenly announced they would be sunsetting the Request for Assistance program, effectively forcing law enforcement to, you know, follow the law and get a warrant to obtain video from Ring. But we barely had the 24-hour news cycle to digest this inexplicable move from Amazon before learning that the FBI is in the initiation phase of using Amazon Rekognition.
An AI inventory released on the DOJ website discloses that the FBI has a project named “Amazon Rekognition – AWS – Project Tyr.” The description does not mention the term “facial recognition” but states that the agency is working on customizing the tool to “review and identify items containing nudity, weapons, explosives, and other identifying information.”
Remember, Rekognition is the program that falsely matched 28 members of Congress with mugshots, disproportionately making false matches for people of color.
Bizarrely, Amazon denies that it has in any way relaxed the moratorium it announced in June 2020 on the use of Rekognition’s face comparison feature in connection with criminal investigations:
To suggest we have relaxed this moratorium is false. Rekognition is an image and video analysis service that has many non-facial analysis and comparison features. Nothing in the Department of Justice’s disclosure indicates the FBI is violating the moratorium in any way.
What an obfuscating and enlightening sentence, the kind that can only be written by lawyers.
While we don't know much about what the FBI is doing with Rekognition, or the real reason Amazon decided to phase out Request for Assistance, it's a good reminder to be wary of big tech. With hands in nearly every corner of the market, Amazon's decisions have enormous effects on individuals' digital privacy, whether by supplying the surveillance product itself, giving law enforcement access to consumer data, or providing products directly to law enforcement. It's easy to applaud Big Tech when it makes (clearly PR-driven) decisions to protect consumers' privacy, but decisions based on what makes a brand more likeable (i.e., profitable) are rarely permanent.
Policy Corner
A Step Toward Bare Minimum National Reliability Standards for Forensic Technology
Shane Ferro, Digital Forensics Staff Attorney
While bills to regulate the use of biometrics in both criminal and commercial contexts may be stalled in New York, there has been a flurry of activity recently on the federal level. In 2022, President Biden issued Executive Order 14074: Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety. As part of this EO, both the executive branch and Congress have undertaken studies and hearings related to the use of AI and algorithmic technology in the criminal legal system.
In mid-January, the National Academies of Sciences, Engineering, and Medicine released a report on the use of facial recognition technology. The report fell short of advocating for an outright ban, but recommended that Congress pass national privacy legislation and recommended limiting or banning the technology’s use in some situations, including for mass or individual surveillance and access to housing. The report recognizes that facial recognition poses a substantial threat to privacy and civil liberties, including in the criminal context.
Last week, a group of seven Senators sent a letter to the Department of Justice, telling the agency to, as Wired put it, “quit blindly funding ‘predictive’ police tools.” The letter is part of a now years-long back-and-forth between Congress and the DOJ over the Department’s funding of predictive policing tools that have been shown to be racist and discriminatory, in violation of Title VI of the Civil Rights Act of 1964. The letter demands that DOJ do basic things like establish evidentiary standards for the potential discriminatory impacts of predictive policing systems, reject grants for systems that don’t meet those standards, and maintain a “complete and public record” of DOJ grants for predictive policing systems. Currently, according to the letter, the DOJ doesn’t even know how many such grants it has made.
The same day, the Senate Judiciary Subcommittee on Criminal Justice and Counterterrorism conducted a hearing about the use of AI in criminal investigations. Armando Aguilar, Miami’s Assistant Chief of Police, Karen Howard, the Government Accountability Office’s head of Science and Technology Assessment, and Rebecca Wexler, an assistant professor of law, Co-Director of the Berkeley Center for Law and Technology, and former Yale Public Interest Fellow at The Legal Aid Society, testified under the scrutiny of Cory Booker, Tom Cotton, and others.
One of the threads that came out of the hearing, especially from Wexler and Howard, was the fundamental unreliability of a lot of these “AI” technologies, including facial recognition. There are no national standards, regulations, or requirements for transparency. Different types of software that on the surface purport to do the same thing have wildly different performance outcomes, and every local police department and/or prosecutor’s office is left to make their own determination on technology procurement. In the criminal context, even if there were standards of reliability, most companies that create these technologies and the law enforcement groups that use them are highly resistant to opening up the guts of the software for the defense to inspect and test how well the technology works versus the marketing claims made publicly.
Ultimately, the thread tying all of this together is an increasing understanding on the federal level that we are in a chaotic era of new policing and surveillance technologies. At least a few people in power seem to believe that there should be national standards and regulations governing the use of these technologies—however, we’re still quite far from that being a reality.
In the meantime, New York attorneys should keep screaming about the defense’s right to “discover, inspect, copy, photograph and test, all items and information that relate to the subject matter of the case,” including new technologies that the government tells us we don’t have the right to know about.
Expert Opinions
This month’s newsletter includes our new column: Expert Opinions. We’ve invited specialists in digital forensics, surveillance, and technology to share their thoughts on current trends and legal issues. Our first guest columnist is Brian Cummings.
Digital Forensics Oversight and Accreditation: Undefined
A recent study, funded by the National Institute of Justice, found that forensic errors in wrongful convictions were frequently associated with incompetent or fraudulent examiners, disciplines with an inadequate scientific foundation, and organizational deficiencies. Digital evidence, though barely mentioned in the study, is equally vulnerable to these problems.
As a relatively new and rapidly evolving field, digital forensics should arguably receive even greater scrutiny than traditional forensic disciplines. Instead, “[t]he status quo reveals a troubling scenario of governments’ lack of full participation, lack of proper certification bodies, and oversight.” (Digital Forensics, A Need for Credentials and Standards) At the federal level, when the Department of Justice announced that “all department attorneys must use, whenever practicable, accredited forensic testing entities” by 2020, they specifically exempted digital forensics. Similarly, at the state level, where policies governing forensic labs are inconsistent or non-existent, digital evidence often evades mandatory oversight. In New York, the Computer Crimes Unit of the State Police “is not part of the NYSP Crime Laboratory, and as such, [does] not fall within the Crime Laboratory's ASCLD/LAB Scope of Accreditation,” and in Texas, digital forensics is statutorily exempt from accreditation requirements.
Digital evidence can be complex, making it difficult to know when information is missing or misrepresented. It can also be incredibly important, sometimes even dispositive, carrying the same air of credibility as evidence presented by DNA analysts, chemists, and medical examiners, without the same degree of scrutiny.
Mobile phones are just one commonly sought-after source of digital evidence where extracting and interpreting data can be a challenge. Phone manufacturers and app developers rarely share their code publicly, which places a heavy burden on analysts to rely on reverse engineering or crowdsourced research. Even with proper extraction tools and methods, the data cannot be taken at face value, for many different reasons. Validity must always be critically assessed: not just “did this tool correctly extract data from the phone?” but “is the extracted data itself accurate?”
The SANS Institute published a white paper, authored by employees of major mobile forensic toolmakers, emphasizing (among other best practices) the importance of validating results in mobile forensics. Thorough mobile forensic work requires identifying the original sources of extracted data, validating timestamps, taking notes, and conducting research to fully understand what’s been extracted. These steps matter to industry experts, but they’re not universally followed. Without oversight and accreditation, it is up to individual analysts to decide what corners should be cut when time and funding are limited.
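Timestamp validation offers a concrete example of why this matters. Different systems count time from different starting points; many iOS databases, for instance, store timestamps as seconds (or nanoseconds) since Apple's "Cocoa" epoch of January 1, 2001 UTC, rather than the familiar Unix epoch of 1970. A tool that applies the wrong epoch or scale silently shifts every date. Below is a minimal, hypothetical sketch (not taken from the SANS paper) of the kind of independent spot-check an analyst might run to verify a tool's reported timestamp against the raw stored value:

```python
from datetime import datetime, timezone

# Seconds between the Unix epoch (1970-01-01 UTC) and the
# Apple Cocoa epoch (2001-01-01 UTC).
COCOA_EPOCH_OFFSET = 978307200

def cocoa_to_datetime(raw: float) -> datetime:
    """Convert a raw Cocoa timestamp, as stored in some iOS
    databases, to a timezone-aware UTC datetime."""
    # Some databases store nanoseconds rather than seconds since the
    # Cocoa epoch; implausibly large values are scaled down first.
    if raw > 1e12:
        raw = raw / 1e9
    return datetime.fromtimestamp(raw + COCOA_EPOCH_OFFSET, tz=timezone.utc)

# A raw value of 0 is the Cocoa epoch itself:
print(cocoa_to_datetime(0).isoformat())  # 2001-01-01T00:00:00+00:00
```

An analyst who converts the raw database value independently and gets a different date than the forensic report shows has found either a tool error or a misunderstanding of the data, both worth catching before testimony.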
This recent line of analyst testimony is all too common: "I was just taking what I found in the [mobile forensic report] and taking a screenshot and putting it on a PowerPoint." The witness did not understand or validate their work, instead relying entirely on the tool to do their job. Lazy and reckless forensic analyses will lead to wrongful convictions. They tarnish the entire field’s reputation and undermine the efforts of those who diligently validate their work. Digital forensic evidence should be subject to oversight and accreditation requirements, just as any other forensic discipline.
Brian Cummings is a Digital Evidence Resource Attorney with the New York State Defenders Association and works in private practice.
Upcoming Events
February 8, 2024
Digital Part I: Geofence Warrants (NYSDA) (Virtual)
February 14-17, 2024
ABA TECHSHOW 2024 (ABA) (Chicago, IL)
February 27 - March 7, 2024
Magnet Virtual Summit 2024 (Magnet Forensics) (Virtual)
February 29, 2024
Digital Part II: Cell Site Location Information (“CSLI”) and Call Detail Records (“CDR”) (NYSDA) (Virtual)
February 29 - March 7, 2024
SANS OSINT Summit & Training (SANS) (Arlington, VA & Virtual)
March 11, 2024
Digital Part III: Litigating Against Algorithms, Hidden Technology, and the Machine Witness (NYSDA) (Virtual)
April 11-12, 2024
2024 Forensic Science and Information Technology Institute (ABA) (San Diego, CA)
April 15-17, 2024
Magnet User Summit 2024 (Magnet Forensics) (Nashville, TN)
April 18-20, 2024
Making Sense of Science XVII: Forensic Science & the Law (NACDL) (Las Vegas, NV)
June 4-6, 2024
Techno Security East 2024 (Wilmington, NC)
July 12-14, 2024
HOPE XV (Queens, NY)
October 19, 2024
BSidesNYC (New York, NY)
Small Bytes
Police must return phones after 175 million passcode guesses, judge says (Ottawa Citizen)
Child Abusers Are Getting Better at Using Crypto to Cover Their Tracks (Wired)
Apple AirDrop leaks user data like a sieve. Chinese authorities say they’re scooping it up. (Ars Technica)
Is Your Local Police Department Using Fusus AI-Enabled Cameras? Find Out Here (404 Media)
Each Facebook User Is Monitored by Thousands of Companies (Consumer Reports)
Internet Privacy Is A Disability Rights Issue (Tech Policy Press)
How Chicago Became an Unlikely Leader in Body-Camera Transparency (ProPublica)
Texas man sues Macy’s and Sunglass Hut parent over wrongful arrest linked to facial recognition (CNN)
Inside a Global Phone Spy Tool Monitoring Billions (404 Media)
Goodbye for Now to the Robot That (Sort Of) Patrolled New York’s Subway (NY Times)