
July 12, 2021
Welcome to Decrypting a Defense, the monthly newsletter of the Legal Aid Society’s Digital Forensics Unit. This month, we discuss the Department of Homeland Security’s use of a surveillance technology at the border and the customer data collected by Amazon. Benjamin Burger reviews the recent Supreme Court decision in Van Buren v. United States, which interprets the Computer Fraud and Abuse Act. In the Ask an Analyst feature, Lisa Brown explains how infrared cameras work and their effects on video recordings, particularly in low-light conditions.
The Digital Forensics Unit of the Legal Aid Society was created in 2013 in recognition of the growing use of digital evidence in the criminal justice system. Consisting of attorneys and forensic analysts and examiners, the Unit provides support and analysis to the Criminal, Juvenile Rights, and Civil Practices of the Legal Aid Society.
In the News
Surveillance Technology at the Border

The Los Angeles Times reports that the federal government has deployed a new mobile app, CBP One, at the border that relies on facial recognition and geolocation data to collect and store personal information on asylum seekers. Asylum seekers who were returned to Mexico under a Trump administration policy, Remain in Mexico, use the app to submit information, including faceprints, to Customs and Border Protection. According to the report, border officials have amassed an image database of over 70,000 asylum seekers. Photos submitted through CBP One can be compared to the images in the database to indicate whether the applicant’s case is active and how long they have been waiting for asylum. CBP has used facial recognition technology for the past decade, with middling results suggesting the technology is unworkable or ineffective.
CBP often pushes the boundaries of surveillance and privacy at the border. Courts have long recognized the border exception to the Fourth Amendment, which allows for routine searches at the border without probable cause. This is based on the Federal government’s “plenary” power to collect customs duties and prevent contraband from entering the country. CBP has used this authority to aggressively search travelers who arrive at border crossings, including airports. For example, basic device searches, where border agents manually inspect the data on a cellular phone, have grown from approximately 3,000 searches in 2012 to over 28,000 searches in 2018. Courts have done little to protect privacy at the border, with the United States Court of Appeals for the First Circuit recently endorsing CBP’s wide-ranging policy on digital device searches. As Fourth Amendment protections are at their nadir at the border, the government has an increased ability to “test” surveillance technologies before seeking to expand them across the country. Moreover, the current state of the federal judiciary suggests that privacy protections at the border will only get weaker.
Additional information on border searches and CBP/ICE surveillance can be found at these sites: Brennan Center for Justice, Electronic Frontier Foundation, Immigrant Defense Project, and Surveillance Technology Oversight Project.
Amazon Primed to Collect Data on Customers

Amazon is a consumer behemoth that sells everything from books to groceries. However, behind the scenes, Amazon also collects a massive amount of data on its customers. Virtually everything a customer does on Amazon’s website or through its devices is collected, tracked, and stored for the purpose of selling more products. As WIRED UK reported, Amazon attempts to learn everything it can about a customer to recommend products and get a sense of an individual’s personal shopping habits. The website also uses location information to ensure that people get their deliveries. As the report explains, individuals should assume that everything they do on Amazon’s website, apps, or products is retained by the company. This includes everything from orders to Alexa requests. One journalist, who requested his data from Amazon, determined that the website logged the day and time a specific page was visited, the IP address, the device used, the geolocation derived from the IP address, and the internet service provider. Kindle devices log information like the amount of reading time and highlights made to book text. Ring doorbells store motion-detected video and anything an individual does with the app. Although data collection may result in an easier consumer experience, it has serious privacy implications. This is especially true in light of Amazon’s entry into the healthcare field through devices like the Amazon Halo.
So what can an individual do to limit the amount of data shared with Amazon? First, Amazon devices equipped with Alexa or Ring contain privacy settings that can prevent Amazon from storing video or audio. Second, consumers can turn off personalized ads and third-party advertising cookies. Third, customers can opt out of Amazon’s Sidewalk Network. Finally, those most committed to privacy should delete their Amazon account.
Technology companies, whether they sell consumer products like Apple and Amazon, or provide social media services like Facebook, operate in a mostly unregulated economy of personal data. Many privacy advocates have advanced legal changes that would protect consumer data and limit the commodification of individuals. Amazon, like other companies, is simply exploiting a regulatory vacuum. Unfortunately, until the law catches up with the reality of today’s internet economy, consumers will be forced to look out for themselves.
In the Courts
U.S. Supreme Court Reverses Lower Court and Limits the Computer Fraud & Abuse Act

The United States Supreme Court issued a major ruling [PDF warning] interpreting the Computer Fraud and Abuse Act (CFAA), limiting the scope of the law. The CFAA criminalizes unauthorized computer access. The language of the statute includes two related acts that constitute unauthorized access: “access without authorization” and “exceed[ing] authorized access.” In Van Buren v. United States, the Supreme Court ruled on what these phrases mean in the context of the statute.
In the case, Nathan Van Buren, a police officer, accepted money to search a government database for information. Although Van Buren was authorized to use the database, he was permitted to access it only for work-related purposes. He was subsequently convicted of violating the CFAA and appealed.
The Court adopted a “gates-up-or-down” test to determine if the CFAA had been violated and explained that “one either can or cannot access a computer system, and one either can or cannot access certain areas within the system.” Specifically, for Van Buren, because he was granted access to the database, the limitation on how he could use it was not a closed gate under the CFAA. However, if a non-authorized user hacked into the database or used portions of the database that they were not granted access to, they could be liable under the statute.
Van Buren narrows the reach of the CFAA, but leaves open a number of questions as to how lower courts should determine whether a gate is “up” or “down” in the context of a computer or website. A number of businesses, like facial recognition company Clearview AI, scrape data from the internet in violation of website terms of service. Van Buren appears to foreclose criminal liability for exceeding the limits of a terms of service agreement because there is no “gate” blocking access to the data. Instead, the CFAA can be thought of as a trespass statute. Once a user is given permission to access a computer or website, they may use it as they see fit without violating the CFAA.
Van Buren is similar to other recent Supreme Court cases that try to reconcile today’s technological society with legal principles developed over hundreds of years. In Van Buren, the Supreme Court narrowed the scope of the CFAA and limited the power of law enforcement to criminalize routine use of computers and the internet. However, the irony of Van Buren reversing the conviction of a police officer who violated the privacy rights of others is not lost on us.
Ask an Analyst
I have a case with a bunch of surveillance video clips from the inside and outside of a bar. When my client is inside, it appears he’s wearing a white shirt. Out in front of the bar he’s wearing a blue shirt. Is there any explanation other than my client changing clothing?
- A.P.
A: Many surveillance cameras use infrared (IR) light in locations with low visible light, such as the inside of a bar. The resulting video may be black and white, or it may contain some color if it was captured with a mix of infrared and visible light. But beware! When infrared light is used, the colors of some clothing, hair, and other objects will not be reproduced accurately.
Here’s an example of a surveillance camera using infrared light capturing an image of a man at night.

While the video doesn’t display color, you may be tempted to make certain assumptions about the clothing color. For example, the shorts appear to be “white,” or “light in color.”
Here’s an image from the same camera a few hours later when it’s light out. The camera is no longer using infrared light because there is sufficient natural light. It’s the same man, wearing the same clothes.

Now we see in visible light that the shorts are dark grey. That infrared image was misleading!
How can you tell if a camera uses infrared light? One clue would be the video switching between color and black and white when the lighting conditions change (at dusk, dawn or when lights are switched off or on). Also, you can often tell if a camera contains IR technology simply by looking at it. The infrared LEDs are typically visible around the camera’s lens.

Do you have a question about surveillance video, or another digital forensics topic? Please send your questions to AskDFU@legal-aid.org and we may feature your question in an upcoming issue of our newsletter.
— Lisa Brown, Digital Forensic Analyst
Small Bytes
Amazon Ring’s neighborhood watch app is making police requests public (Reuters)
The FBI is trying to get IP addresses and phone numbers of people who read a USA Today article (The Verge) (Follow up: Justice Department withdraws FBI subpoena for USA TODAY records ID'ing readers (USA Today))
The Criminals Thought the Devices Were Secure. But the Seller Was the F.B.I. (New York Times)
She Sent Her iPhone to Apple. Repair Techs Uploaded Her Nudes to Facebook (Vice)
Honolulu Police Used a Robot Dog to Patrol a Homeless Encampment (Vice)
Congress Introduces a Bill That Would Ban Facial Recognition Indefinitely (Vice)
Facial recognition systems are denying unemployment benefits across the US (Yahoo)
There Are No Laws Restricting “Stingray” Use. This New Bill Would Help. (BuzzFeed)
The False Comfort of Human Oversight as an Antidote to A.I. Harm (Slate)
Instructions Show How Cops Use GrayKey to Brute Force iPhones (Vice)
Kansas Court Rejects Government’s ‘Reverse Warrant,’ Sets Ground Rules For Future Requests (Techdirt)
Maine law restricts facial recognition technology statewide (AP News)
Citizen App Says It Will Get Access to Encrypted Police Comms (Vice)
Civil rights group sues for records on use of facial recognition at New York prisons (New York Post)
A Government Watchdog May Have Missed Clearview AI Use By Federal Agencies In A New Report (BuzzFeed)
Documents Show Police Virtual Reality Training for ‘Mentally Ill Subjects’ (Vice)