The article references the Texas Judicial Branch’s Generative AI: Overview for the Courts presentation, which outlines how lawyers, self-represented litigants, and judicial officers could theoretically use the technology.
That list includes using AI to guide self-represented users through legal processes; to help lawyers review judges’ previous rulings and tailor documents accordingly; and to give judicial officers recommendations about bail or sentencing. The presentation also cautions that “just because we can doesn’t mean we should,” outlining a variety of risks. Among other problems, the data on which a generative AI system was trained might be biased, and the system could produce inaccurate answers that go uncaught without careful review.
Meanwhile, some courts have already implemented rules around use of generative AI. One Texas judge issued a directive requiring attorneys to attest either that they’d validated AI-generated content through traditional methods or that they’d avoided using the tools entirely. “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them,” Judge Brantley Starr wrote. “These platforms in their current states are prone to hallucinations and bias … . While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.”