Surveillance as Digital Detention, Age Verification Harm, POST Act Updates, The Algorithmic Informant & More
Vol. 6, Issue 5

May 5, 2025
Welcome to Decrypting a Defense, the monthly newsletter of the Legal Aid Society’s Digital Forensics Unit. This month, Christine Farolan discusses the surveillance of immigrants as a form of digital detention. Laura Moraff covers the issues with required age verification for online communications and communities. Gregory Herrera examines three recently passed NYC Council bills to update the POST Act. Guest columnist Adam M. Elewa analyzes the erosion of Fourth Amendment protections by automated algorithmic reporting.
The Digital Forensics Unit of The Legal Aid Society was created in 2013 in recognition of the growing use of digital evidence in the criminal legal system. Consisting of attorneys and forensic analysts, the Unit provides support and analysis to the Criminal Defense, Juvenile Rights, and Civil Practices of The Legal Aid Society.
In the News

Surveillance as Digital Detention
Christine Farolan, Digital Forensics Staff Attorney
In early March, we covered the seemingly endless ways Immigration and Customs Enforcement surveils people, including its recent request for contractors to monitor social media for so-called threats to ICE. “[I]t’s clear that immigrant activists will be increasingly targeted,” we wrote.
Since then, we’ve witnessed the arrest and detention of Mahmoud Khalil, Rumeysa Öztürk, and Badar Khan Suri. We’ve watched Momodou Taal and Ranjani Srinivasan decide that it was safer to leave this country than risk being detained themselves. Thankfully, at the time of writing, we’ve also seen Mohsen Mahdawi’s release from custody.
Of course, the surveillance of non-citizens and their communities in general – not simply those who choose to express their beliefs – continues apace. Since 2004, BI Inc., a subsidiary of the private prison company Geo Group, has run ICE’s Intensive Supervision Appearance Program (ISAP), which employs three types of tech as “Alternatives to Detention.” These methods consist of telephone check-ins and the comparison of those calls to a biometric voice print; GPS trackers worn on the ankle or wrist; and the SmartLINK mobile app.
The wrist-worn GPS tracker was the fastest-growing type of electronic monitoring used by ICE by this time last year. It uses biometric facial comparison, which BI’s website amusingly seeks to distinguish as a harmless method of verifying identity, rather than scary facial recognition that’s used to surveil. The difference between a user unlocking their own phone by holding it up to their face and police running a photo of someone through facial recognition software is arguably the aspect of consent or participation. But that “consent” feels irrelevant when a person is compelled to use this tracker to report to ICE or risk detention.
Through these technologies, BI collects personally identifying info; biometric and health data, including facial images, voice prints, info on disabilities, and info on pregnancy and births; location data; phone numbers for close contacts; vehicle and driver data; and data regarding one’s home, neighborhood, and community ties. Records obtained by Mijente, Just Futures, and Community Justice Exchange indicate that at least some of this data is stored for 75 years, and perhaps indefinitely. The Department of Homeland Security has stated that data collected from 18- and 19-year-olds as part of ISAP’s Young Adult Case Management Program is retained permanently. (YACMP was apparently terminated [PDF] last year; it’s unclear what that means for people’s stored data.) Read Mijente et al.’s fact sheet [PDF] from their 2022 lawsuit against ICE for more.
Mijente et al. wrote, “ICE’s ultimate goal with ISAP is not a more humane program, but to expand the agency’s punitive control over the lives and autonomy of Black, brown and immigrant communities.” In much the same way that probation, parole, and electronic monitoring extend the carceral system into our communities, the methods by which immigrants are surveilled bring the border to our backyards (or blocks), regardless of where we live.
Nakeema Stefflbauer said it best in an essay last year, and her words apply broadly to surveillance tech: “[T]he irony of our increasing reliance on ‘AI’ technologies may be that soon, everyone will develop some understanding of how it feels to live in America while Black.” Demanding certain privacy practices protects all of us by raising the standard of what the public expects. Surveillance should matter to everyone, including those who feel they have nothing to hide.
Age Verification: Discord & Danger
Laura Moraff, Digital Forensics Staff Attorney
In response to legal requirements [PDF] in the United Kingdom and Australia that are purportedly meant to protect children but may actually [PDF] do [PDF] the opposite, Discord has joined other popular social media platforms in experimenting with age verification. Some users in the United Kingdom and Australia have been prompted to verify their age before either viewing content flagged by Discord’s “sensitive media filter” or changing their settings so that they can see “sensitive content.”
Discord is giving users two options to verify their age. The “Face Scan” option prompts users to scan their face with their webcam or phone camera, and then submit the image. According to Discord, the Face Scan operates on-device, and neither Discord nor its vendor stores the data. The “Scan ID” option prompts users to scan a QR code, take a photo of their government ID using their phone camera, and then submit the image. Discord says it deletes the photo upon verification.
Within a few minutes, the user is supposed to receive a message from the official Discord account telling them which age group they are in. If the verification process concludes that a user is too young to have a Discord account, the user will be automatically banned and will need to use Discord’s underage appeals process to challenge the ban.
Discord is not alone in experimenting with its users’ faces and personal information. Instagram requires some users to verify their age by uploading ID, submitting a video selfie, or having three mutual followers vouch for the user’s age. And the platform recently started using AI to attempt to identify accounts of users who have misrepresented their age and automatically convert them into teen accounts—which come with restrictions on private messages and “sensitive content.” In response to a Louisiana law [PDF], Pornhub has required users who connect to the site from Louisiana to verify their age using LA Wallet, a government app that allows users to get a “legal digital replica” of their driver’s license. A recent study found that searches for Pornhub in Louisiana decreased by 51% after it implemented the age verification requirement. Searches for XVideos, another porn website that did not implement age verification measures, rose by 48.1%.
While all of these age verification methods operate differently, they all pose risks to users’ privacy, security, and freedom of expression online. Even where companies like Discord attempt to assure users that their data will not be retained after authentication is complete, the data is still vulnerable to security breaches, the policy could quietly change at any time, and users may prefer to opt out of discourse on the platform rather than scan their face or provide their ID—both of which can communicate sensitive information that people don’t wish to associate with their online communications.
Some people will also be blocked from platforms and/or “sensitive” content on them because the age verification process concludes that they are younger than the prescribed minimum age. The well-known biases with facial recognition persist when faces are scanned for the purposes of age verification—meaning people of color, women, and nonbinary people are more likely to be misclassified and thereby excluded from certain online discussions. Misclassifications are also [PDF] more likely to occur for people wearing eyeglasses or who are close to the age cutoff. People with certain types of disabilities are also at greater risk of being mis-aged. A 25-year-old woman with dwarfism was misclassified as a minor and had her TikTok account banned and her content deleted.
The never-ending moral panic around children’s exposure to “sensitive content” has long clashed with privacy and the freedom of expression. But the federal government’s recent crackdowns on dissent underscore the importance of anonymous and pseudonymous speech online. When social media activity is being used as a basis for visa revocation, deportation, and the withholding of immigration benefits, we should be wary of any efforts to collect more information about people’s identities when they participate in online communities. As Eric Goldman notes in his forthcoming paper on age verification laws, which he aptly terms “segregate-and-suppress” laws, “[a]s age authentication becomes widely deployed across the Internet, governments will inevitably coopt the process to increase their control over their constituents.” As the Trump administration works to block transgender people from getting passports, restrict children’s access to books touching on race and gender, and prevent employees from using terms like “activism,” “bias,” and “social justice,” building out an infrastructure that facilitates both the identification of those seeking certain types of information and the suppression of that information is a dangerous move.
Policy Corner

POST Act 2: Loophole Boogaloo
Gregory Herrera, Digital Forensics Staff Attorney
On April 10, 2025, the New York City Council passed a package of three bills to strengthen the Public Oversight of Surveillance Technology (POST) Act by closing “loopholes and introduc[ing] new mechanisms for oversight.” They represent the tedious but crucial governance feedback loop in action after a groundbreaking piece of legislation like the POST Act: public hearings and subsequent legislative refinement to ensure full compliance.
The POST Act was first introduced by the City Council in 2017 in response to advocacy from community activists and civil rights groups about the expanding use of digital surveillance technologies by the New York Police Department (NYPD). For example, the New York Civil Liberties Union (NYCLU) revealed in 2016 that NYPD used cell site simulators, commonly referred to as Stingray devices after the name of a popular device model, over 1,000 times between 2008 and 2015. Cell site simulators essentially mimic cell towers, intercepting cellphone signals within range and tracking cellphones’ precise locations; some can even intercept the contents of communications [PDF]. Until the POST Act’s passage in 2020, NYPD used cell site simulators without any written policy. Similarly, surveillance tech like facial recognition, ShotSpotter, social media monitoring, gang databases, automated license plate readers (ALPRs), drones, x-ray vans, and the Domain Awareness System were all used with little to no oversight [PDF]. These technologies implicate issues such as infringement on free speech and privacy, exacerbation of racial bias, lack of scientific validation/false positives, and lack of meaningful public debate.
The POST Act was significant because it was “the first law to oversee the NYPD’s use of surveillance technology” [PDF]. It mandated the NYPD to create and publish surveillance tech Impact and Use Policies (IUPs) that cover ten areas [PDF], including: descriptions of the technologies’ capabilities, guidelines for access or use, safeguards against unauthorized access, information about any training required for using the technology, and internal audits as well as oversight mechanisms.
The law also required the Department of Investigation’s Office of the Inspector General for the NYPD (OIG-NYPD) to prepare annual audits of NYPD’s compliance with the POST Act. Its first report in November 2022 [PDF] outlined some deficiencies in the POST Act compared to other jurisdictions, reviewed the IUPs published up to that point, and identified NYPD’s compliance failures. Unlike similar legislation in at least seven states and nearly two dozen cities, the POST Act only required NYPD to disclose basic details [PDF] about its surveillance tech. So, while the report found that the IUPs largely complied with the basic requirements of the POST Act, it also found that NYPD used boilerplate language [PDF], narrowly interpreted the potentially disparate impact reporting requirement [PDF], and stymied oversight [PDF] by grouping arguably distinct technologies under a single IUP. Ultimately, the report issued 15 recommendations [PDF] for improvement.
And the City Council answered the report’s call, adopting more than half of OIG-NYPD’s recommendations across its three bills. Intro 480 requires that NYPD publish IUPs for each distinct surveillance technology used, fully identify by name each external entity that receives data from each technology, report on specific safeguards to prevent unauthorized dissemination of data, and report on evaluation of potential disparate impacts on protected groups stemming from using each surveillance technology. Focusing on facial recognition, Intro 233 mandates that NYPD publish its usage policy online and conduct annual internal audits that must be shared with OIG-NYPD and published online. Lastly, Intro 168 provides OIG-NYPD with the power to request from NYPD an itemized list of all surveillance technologies currently in use and regular semiannual updates on newly acquired or discontinued tech. NYPD largely opposed these requirements, arguing that revealing too much about its surveillance tech would blunt its effectiveness and that publishing an IUP for each technology would be an administrative headache.
However, these bills are not yet law. Mayor Eric Adams has 30 days from their passage to either sign the bills into law or veto them. These bills are necessary improvements to ensure the NYPD meaningfully follows the spirit of the POST Act.
Expert Opinions
We’ve invited specialists in digital forensics, surveillance, and technology to share their thoughts on current trends and legal issues. Our guest columnist this month is Adam M. Elewa.
The Algorithmic Informant: Automated Reporting and the Erosion of Fourth Amendment Protections
How the “private search doctrine” should apply to information service providers such as Meta and Alphabet—vast repositories of sensitive and profoundly revealing information—remains unresolved in most jurisdictions. By way of background, the private search doctrine permits the government to use evidence obtained by a private individual (i.e., someone not employed or directed by law enforcement) in criminal prosecutions even though the evidence was collected without any legal process, such as a search warrant. This conclusion is logical since the Fourth Amendment’s suppression remedy, and the U.S. Constitution more generally, is only meant to deter government misconduct, which is typically not implicated by a private search, even a flagrantly illegal one.
Although contributors to this newsletter have written about this topic previously, noting pro-Fourth Amendment outcomes in various appellate courts, I believe the broader implications of this issue for the future of the Fourth Amendment are not widely appreciated. As discussed in this article, depending on how this issue is resolved, the private search doctrine, a supposedly “narrow doctrine with limited applications,” could be transformed into an exception to the warrant requirement more pernicious to digital privacy rights than the third-party doctrine. U.S. v. Wilson, 13 F.4th 961, 968 (9th Cir. 2021). The gravity of what’s at stake should be impressed upon any judge who may have to decide this issue, especially since this issue arises most often in the context of emotionally charged fact patterns involving child sexual abuse material (“CSAM”). Bad facts often make bad law, and the negative repercussions of bad law in this context can hardly be overstated.
Although the premise underlying the private search doctrine is straightforward, the scope of its reach is nebulous. Outside of the digital realm, courts have permitted law enforcement to use physical evidence provided to it by a private individual insofar as the government’s review of that evidence did not exceed the scope of the search conducted by the private individual. For example, the Supreme Court has denied the government the ability to use the contents of film prints where the private individual merely reviewed the film’s packaging, which implied or generally described the contents of the film itself. See Walter v. United States, 447 U.S. 649 (1980). The government, in the Court’s view, had conducted its own—more expansive—warrantless search by watching the film, thereby looking deeper into the evidence turned over by the private individual.
Most contemporary private search cases involving companies like Meta and Alphabet follow a similar fact pattern. A user of an online service decides to upload photo- or video-based CSAM to the platform. That material is automatically scanned by the online service provider using various automated tools. The system tags the material as CSAM, which triggers a legal obligation to forward the material to the National Center for Missing and Exploited Children (“NCMEC”). NCMEC, after an automated or a human-led review, sends the material to a geographically relevant law enforcement agency for further action.
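The legal significance of that pipeline lies in the fact that, on the provider’s side, no human may ever look at the file before a report is generated. Below is a minimal, hypothetical sketch of such an automated flow in Python. It is illustrative only: real providers rely on proprietary perceptual-hashing systems (such as PhotoDNA) and machine-learning classifiers rather than the exact cryptographic hash comparison shown here, and every name and field in the sketch is invented for this example.

```python
import hashlib

# Hypothetical list of hash values corresponding to known illegal material.
# Real systems match perceptual hashes (e.g., PhotoDNA) so re-encoded or resized
# copies still match; an exact SHA-256 comparison is used here only to keep the
# illustration self-contained.
KNOWN_HASHES = {
    # SHA-256 of an empty file, used as a stand-in "known" hash for the demo.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def scan_upload(file_bytes: bytes, uploader_id: str) -> dict | None:
    """Flag an upload whose hash matches the known-hash list.

    No human views the content: the automated match alone generates the report.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_HASHES:
        return {
            "uploader_id": uploader_id,
            "content_hash": digest,
            "reviewed_by_human": False,  # the fact the Wilson analysis turns on
        }
    return None


def forward_report(report: dict) -> None:
    """Stand-in for the provider's report submission; just prints here."""
    print("Forwarding automated report:", report)


if __name__ == "__main__":
    report = scan_upload(b"", "user-12345")  # empty bytes match the demo hash
    if report is not None:
        forward_report(report)
```

The point of the sketch is structural: account data and the flagged file can travel from the platform to NCMEC, and on to law enforcement, with the first human review of the content happening on the government’s side.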
Given this fact pattern, the question often boils down to whether NCMEC (arguably an arm of the government) or the final reviewing law enforcement agency conducted a ‘more expansive search’ than the private actor (i.e., Meta, Alphabet, etc.). Some courts have held that where the private actor used only automated tools to conduct its review, a later human-led review by the government should be considered more expansive, outside the scope of the private search doctrine, and thus a warrantless search under the Fourth Amendment. E.g., U.S. v. Wilson, 13 F.4th 961.
This analysis has implications beyond automated reporting of CSAM. Although the law specifically requires information service providers like Meta and Alphabet to report CSAM to the government via NCMEC, there is little preventing them from voluntarily searching their platforms for any manner of crime or offense against the State. They would then be able to transmit this information directly to the government for use in a criminal prosecution, civil deportation proceeding, or any other extrajudicial matter.
Absent clear evidence of a state-directed search, a narrow reading of the private search doctrine would prohibit suppression of the digital evidence under the Fourth Amendment. Although civil remedies for such disclosures are provided for under the Stored Communications Act, these remedies could theoretically be abridged by obtaining ‘consent’ via dense and often unscrutinized terms of service. See 18 U.S. Code § 2702(b)(3). It does not take much imagination to see why companies like Meta, Alphabet, Amazon, or X (f/k/a Twitter) would so amend their terms of service or ‘voluntarily’ disclose information about their users to the U.S. Government, even if only for the ‘limited purpose’ of civil deportation matters.
Defenders are thankfully not without persuasive Fourth Amendment-based legal arguments. In cases where the private actor employed automation to locate and transmit private records to the government, cases like U.S. v. Wilson are instructive: if the government agent is the first human to review the data, their search is outside the scope of the initial automated search.
Should the private actor include human-led review of the digital records, there may still be a basis to assert a client’s rights. The Supreme Court—including members who subscribe to an originalist reading of the Constitution—has acknowledged that courts should take a pragmatic approach to the Fourth Amendment that seeks to preserve the “degree of privacy against government that existed when the Fourth Amendment was adopted.” Kyllo v. U.S., 533 U.S. 27 (2001) (Scalia, J.). More recently, the Supreme Court has emphasized that the “central aim of the Framers was to place obstacles in the way of a too permeating police surveillance.” Carpenter v. U.S., 585 U.S. 296 (2018). These originalist principles can be read as requiring courts to look beyond the technical issue of human versus non-human review and ask the ultimate question: what would be left of the Fourth Amendment should the private search doctrine be expanded?
We may be approaching a constitutional inflection point. If courts expand the private search doctrine to bless every automation-led or terms-of-service-justified disclosure to the government, the Fourth Amendment risks becoming an empty shell—incapable of shielding even the most intimate corners of our digital lives. The next wave of mass surveillance won’t come through warrants or wiretaps—it’ll come from your client’s own devices, flagged by algorithms and delivered directly to the State.
Adam M. Elewa is a federal criminal defense attorney based in New Jersey and a former public defender. He teaches at the Stevens Institute of Technology, where his work focuses on the intersection of technology, surveillance, and systemic power.
Upcoming Events
May 5, 2025
Ethics in Social Media 2025 (PLI) (New York, NY and Virtual)
May 6, 2025
What is Work Worth? Exploring What Generative AI Means for Workers’ Lives and Labor (Data & Society) (New York, NY and Virtual)
May 15, 2025
Litigate Like a Machine: AI for Lawyers (NYS Academy of Trial Lawyers) (Virtual)
May 20, 2025
Decrypting a Defense IV Conference (The Legal Aid Society’s Digital Forensics Unit) (New York, NY) (Non-Legal Aid Society Registration) (Legal Aid Society Registration)
May 21, 2025
AI Analytics and Fourth Amendment Challenges (NYSDA) (Virtual)
June 2, 2025
Amped Connect US 2025 (Amped Software) (Wilmington, NC)
June 3-5, 2025
Techno Security & Digital Forensics Conference (Wilmington, NC)
June 26, 2025
Legal Intelligence Meets Artificial Intelligence: A New Era of Practice (NYS Academy of Trial Lawyers) (Virtual)
July 8-9, 2025
Harnessing AI for Forensics Symposium (RTI International & NIST) (Washington, DC)
July 11-12, 2025
Summercon (Brooklyn, NY)
August 7-10, 2025
DEF CON 33 (Las Vegas, NV)
August 15-17, 2025
HOPE 16 (Queens, NY and Virtual)
Small Bytes
UK creating ‘murder prediction’ tool to identify people most likely to kill (The Guardian)
Inside a Powerful Database ICE Uses to Identify and Deport People (404 Media)
This Company’s Surveillance Tech Makes Immigrants ‘Easy Pickings’ for Trump (NY Times)
Leaked: Palantir’s Plan to Help ICE Deport People (404 Media)
This ‘College Protester’ Isn’t Real. It’s an AI-Powered Undercover Bot for Cops (Wired)
Thieves took their iPhones. Apple won’t give their digital lives back. (Washington Post)
How to Protect Yourself From Phone Searches at the US Border (Wired)
Five Findings from an Analysis of the US Department of Homeland Security’s AI Inventory (Tech Policy Press)
Google Messages can now blur unwanted nudes, remind people not to send them (Ars Technica)
Ask the Experts: AI Surveillance and US Immigration Enforcement (Tech Policy Press)
An Employee Surveillance Company Leaked Over 21 Million Screenshots Online (Gizmodo)
How California sent residents’ personal health data to LinkedIn (The Markup)