The Scarlet AI: Is an AI Credibility Deficit Coming to Court?

In many business circles, AI fluency is the new badge of honor. Being a “smooth prompter” or confidently iterating on AI-generated content can earn you what philosopher Miranda Fricker terms a “credibility excess.” We see the polished output and assume the user is smarter, more technically savvy, and a more rigorous thinker. Fricker’s work has been summarized as follows:

“Either we give someone a credibility excess, more credibility than they’ve earned, or we give them a credibility deficit, less credibility than they deserve. These credibility deficits and excesses are how testimonial injustice shows up. They’re the mechanisms through which prejudice harms people as knowers. When prejudice causes systematic credibility deficits or excesses, that’s testimonial injustice in action.”

A recent podcast episode from Machines & Meaning exploring Fricker’s work highlighted this exact phenomenon. Host Angel Evan posited that we over-assign credibility to those with “mechanical literacy,” while those who are more cautious or ask critical questions are hit with a “credibility deficit”—they are unfairly seen as less competent.

“When we see mechanical literacy, smooth prompting, impressive outputs, confident iteration, we might be tempted to over-assign credibility, and maybe not just about AI, but about the person’s judgment more broadly. Maybe we assume they’re just smart, or overall technically savvy, that they’re rigorous thinkers, and that, in general, they just get how things work.

Meanwhile, we under-assign credibility, what Fricker would call a credibility deficit, when someone lacks the visible performance, maybe because they’re more cautious, ask more questions, and push back rather than blindly adopting new technology. These credibility excesses and deficits are what cause testimonial injustice.”

This may be true in boardrooms and marketing departments. But I see the exact opposite brewing in one of the nation’s most critical systems: the US courts.

Instead of a credibility excess for using AI, the legal world is rapidly developing a profound AI credibility deficit.

The reason is simple: the “epidemic” of AI hallucinations. High-profile cases where attorneys submitted briefs citing non-existent, AI-invented legal precedents have sent a shockwave through the profession. The cost of such an error isn’t just professional embarrassment; it’s a direct threat to the integrity of the judicial process.

The institutional backlash has been swift and severe.

Entire court systems are becoming deeply suspicious of all AI use, not just the irresponsible kind. A federal court in Ohio, for instance, has outright banned the use of AI in preparing pleadings. Other state and federal courts now mandate a formal declaration, forcing attorneys to disclose whether, and exactly how, AI was used in their research and writing.

This brings us to a troubling paradox. The very act of disclosure, intended to promote transparency, may soon function as a scarlet letter.

I sense we are on the verge of a new form of “testimonial injustice,” just as Fricker defined it. I believe any brief marked “AI-Assisted” will now face a higher, not lower, bar of scrutiny.

This leads to the central question: Will attorneys who certify they’ve used AI automatically be handed a “credibility deficit” before a judge even reads the first page?

In the legal world’s rush to guard against AI’s flaws, it may be the transparent, human user—not the technology itself—who pays the professional price. Their work may be inherently distrusted, precisely because they admitted to using a tool the system (or some individual judges) no longer trusts, or never took the time to deeply understand.

I can seriously envision further bans on AI use by entire court systems. While that would truly be unwise, I am perhaps more concerned about mandatory disclosure of AI use. If I use AI responsibly in drafting or researching a matter, why should my arguments before a court begin with a credibility deficit, branded by the Scarlet AI?
