Constructed Unfalsifiability is not a new mechanism but a structural amplifier of three existing ones: Unfalsifiable Entrenchment (#14), Authority Without Foundation (#15), and Precedential Fossilization (#23). What IP law adds is that the unfalsifiability is not emergent. It is designed.
In 2023, the forensic scientist William Thompson ran the same low-template DNA mixture through two probabilistic genotyping programs. STRmix returned a likelihood ratio of 24. TrueAllele returned a likelihood ratio of up to 16.7 million. Same DNA. Same case. A swing of nearly six orders of magnitude on a single sample.
Both programs are routinely admitted in American criminal trials. Their source code is proprietary. You can cross-examine the result. You cannot inspect the method.
Thompson's wasn't an isolated result. In a 2021 comparison by Cheng et al., STRmix and EuroForMix produced likelihood ratios within two orders of magnitude on 84% of mixtures — meaning in 16% of cases, the tools disagreed by more than 100-fold on the same evidence. Buckleton et al. (2024) later diagnosed the cause: the tools estimate allele-height variance and mixture proportions differently, and for false donors the divergences are largest near the decision threshold. The disagreements are not bugs. They are structural features of different modeling choices applied to ambiguous data. Because MCMC sampling is stochastic, the same tool given the same data will never return bit-identical results twice.
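The practical weight of such divergence shows up when a likelihood ratio is combined with prior odds, since posterior odds = prior odds × LR. A minimal sketch of that arithmetic, using the two LRs Thompson reported; the one-in-a-million prior is an illustrative assumption, not a figure from any case record:

```python
def posterior_probability(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR,
    then convert the odds back to a probability."""
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical prior: the suspect is one of a million plausible donors.
prior_odds = 1 / 1_000_000

# The two likelihood ratios reported for the same DNA mixture.
for tool, lr in [("STRmix", 24), ("TrueAllele", 16_700_000)]:
    p = posterior_probability(prior_odds, lr)
    print(f"{tool}: LR = {lr:,} -> posterior probability {p:.6f}")
```

Under the same hypothetical prior, one report leaves the posterior probability of contribution at roughly 0.002% and the other at roughly 94%. The six-order swing in the LR is the entire difference between exculpatory and damning.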
What the Courts Admitted as Fact
On March 26, 2026, the Third Circuit, in United States v. Anderson, upheld a TrueAllele result presented to a federal jury as a likelihood ratio of 11.5 trillion. The claimed false-positive rate: 0.005%. The source code: sealed. The defense's request for access: denied. Cross-examination at trial, the court wrote, was the appropriate remedy for any methodological concerns.
The reasoning was recursive: the tool is reliable because it has been independently validated; the validations are adequate because they were peer-reviewed; the peer review is meaningful because other courts have accepted it. Forty-two validation studies were cited. Eight had undergone peer review. Many were authored by or in partnership with the vendor.
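There is also a simple internal-consistency check the opinion never runs. A standard result for correctly computed likelihood ratios (a Markov-inequality bound, often attributed to Turing and Good) says the probability that a true non-donor produces an LR of at least t is at most 1/t. A back-of-envelope sketch of what that bound implies for the figures in Anderson; the arithmetic below is mine, not a finding from the case:

```python
# Good's bound: P(LR >= t | non-donor) <= 1/t, for a correctly computed LR.
claimed_lr = 11.5e12          # the LR presented to the jury in Anderson
claimed_fpr = 0.005 / 100     # the 0.005% false-positive rate cited

bound = 1.0 / claimed_lr      # implied ceiling on the non-donor rate at that LR
print(f"Implied false-positive ceiling at LR = {claimed_lr:.3g}: {bound:.2e}")

# Rough number of non-donor validation runs needed to expect even one
# false positive at a given rate (about 1 / rate):
runs_to_check_claimed = 1.0 / claimed_fpr   # ~20,000 runs to probe 0.005%
runs_to_check_bound = 1.0 / bound           # ~1.15e13 runs to probe the LR itself
print(f"Runs needed to probe the claimed rate: {runs_to_check_claimed:,.0f}")
print(f"Runs needed to probe the LR's implied rate: {runs_to_check_bound:.2e}")
```

No finite validation study of a few thousand samples can empirically distinguish a 0.005% error rate from one thirteen orders of magnitude smaller. A trillion-scale LR is a model output, not a measured frequency.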
The trial cross-examination standard presumes you know what the methodology is. When the source code is a trade secret and the validation studies were conducted by the tool's own makers, the adversarial process is reduced to contesting outputs you cannot inspect.
The Green Light and the Red Light
Since 2021, American courts have produced a visible split. Lower courts and state supreme courts occasionally reject opacity as a procedural failure. Higher federal courts have been steadily admitting the evidence anyway. The asymmetry matters: federal precedent sets the floor for the rest of the system.
The red-light cases share a tell: they address procedural or contextual failures — oversight violations, reliability inquiries, wrongful arrests after the fact. None of them vacate the underlying admissibility standard. The green-light cases do the structural work. They make the opacity part of the law.
IP Law as Epistemological Barrier
Trade-secret law was built to balance commercial interests for products sold in markets. Applied to evidence in a criminal trial, it does something different. It converts a business decision — the vendor chose not to disclose the code — into a structural constraint on falsification. The evidence is not unfalsifiable because it resists correction on its merits. It is unfalsifiable because the legal pathway to inspection is foreclosed by a separate body of law.
When Mark Perlin, Cybergenetics' founder, agreed to a limited source-code review in State v. Pickett, he initially insisted defense experts could take notes but not copies. In federal proceedings, the wall has held. The Third Circuit in Anderson accepted that 42 validation studies — many conducted by or in partnership with the vendor — were enough. The defense never saw the 170,000 lines of code the tool's creator has described as "dense mathematical" in court filings.
> "It's time to end the trade secret evidentiary privilege among forensic algorithm vendors."
>
> — Brookings Institution, 2025
The Academy Has Diagnosed the Problem
In the last three months, four substantial publications have converged on the same structural diagnosis. They don't disagree. They don't equivocate. They name the same failure.
| Source | Diagnosis |
|---|---|
| Stanford Law School (March 27, 2026) | Criminal-justice entities lack the technical literacy to evaluate AI tools. Vendors market directly to practitioners without independent oversight. |
| Cyberjustice Lab, Montréal (April 13, 2026) | "How do you cross-examine an algorithm?" The breathalyzer parallel: same vendor resistance, now at higher stakes and lower transparency. |
| Farber, The Police Journal (April 6, 2026) | Systematic review of a decade of AI in policing: substantial ethical and legal challenges on bias, opacity, and due process. |
| Reliability and Admissibility of AI-Generated Forensic Evidence (arXiv, January 2026) | Reproducibility deficits. Judicial acceptance varies with technical literacy. No standardized validation protocols. |
Add to this Brookings (2025) calling directly for the end of trade-secret evidentiary privilege, the Justice in Forensic Algorithms Act (reintroduced 2024, still not passed), and a 60-page Alabama Law Review treatment by Rowe and Prior arguing for procurement-stage algorithmic transparency. The scholarly infrastructure is ready. The legislative and judicial response is not.
The Tool Can Work. The Ratchet Is the Problem.
In November 2019, after nearly a decade in a Texas prison for a murder he did not commit, Lydell Grant walked out on bail. Six eyewitnesses had identified him. A jury had convicted him. TrueAllele reanalysis of the crime-scene DNA excluded him; a CODIS search then matched the profile to a different man, who confessed. The Texas Court of Criminal Appeals declared Grant "actually innocent" in May 2021. The same tool whose source code is sealed had correctly identified a false conviction.
The technology is not inherently broken. What is broken is the one-way ratchet in how courts treat it. Opacity is tolerated when the output incriminates and scrutinized reluctantly when the output exonerates. Grant was free because his lawyers had the resources to commission a reanalysis. Melkote & Nambiar (Duke L.J. Online, 2025) call this the access-to-justice dimension: indigent defendants are routinely denied funds to retain experts capable of challenging opaque forensic software, producing a two-tiered evidentiary system where the well-resourced can probe the black box and the rest cannot.
The Gap Is the Mechanism
The academy has delivered its diagnosis. The judiciary is not catching up; it is accelerating in the opposite direction. Over the last twelve months, the Third Circuit, the Second Circuit, and Oklahoma's Court of Criminal Appeals have admitted algorithmic evidence while Stanford, Montréal, Brookings, and The Police Journal have been narrowing in on why that evidence cannot be meaningfully tested.
This is not a knowledge gap waiting to close. It is a structural mismatch between institutions that evaluate evidence on its merits and institutions bound by procedural rules that prevent them from inspecting what the evidence actually is. The courts don't ignore the diagnosis; they are not built to act on it. Daubert counts "general acceptance" among its reliability factors, and general acceptance becomes circular when opacity is baked in: courts accept the technique because other courts have accepted it.
That gap is where a 0.005% false-positive rate and a 700,000× disagreement between tools coexist without contradiction — in the courtroom record, if not in reality. Precedential fossilization (#23) tells us that once admitted, forensic techniques are hard to dislodge. Authority without foundation (#15) tells us institutional adoption substitutes for validation. Unfalsifiable entrenchment (#14) tells us hypotheses mutate to resist disproof. IP-shielded forensic algorithms combine all three and add a fourth move: the unfalsifiability is designed. A business decision is converted into an epistemological barrier by a separate body of law, and the courts treat the result as fact.
The breathalyzer took decades to pry open. The Boeing 737 MAX, another self-validated system, took a crash and a grounding. Forensic algorithms are at the stage where the diagnosis is public and the admissions are still accumulating. If the pattern holds, the correction will come the way it usually does — after the wrongful convictions have piled up high enough that they are no longer possible to ignore.
Sources: Thompson 2023 (J Forensic Sci) · US v. Anderson, 3d Cir. 2026 · CMBG3 analysis · State v. Pickett · Brookings 2025 · Stanford Law 2026 · Cyberjustice Lab 2026 · Farber 2026 (Police Journal) · arXiv 2026 · Rowe & Prior 2025 (Ala. L. Rev.) · Melkote & Nambiar (Duke L.J. Online 2025) · Justice in Forensic Algorithms Act (2024) · Innocence Project of Texas: Lydell Grant · Cheng et al. 2021