
Senior English Judge Warns That Lawyers Who Use AI Must Check Their Legal Citations Thoroughly – Or Face ‘Severe Sanction’

from the professional-and-ethical-obligations dept

One of the legitimate criticisms of large language models, generative AI, and chatbots is that they produce hallucinations: output that is plausible but wrong. That’s a problem in every domain, but it is arguably a particularly serious one in the field of law. Hallucinated citations undermine the entire edifice of common law, which is built on precedent as expressed in previous court decisions. This isn’t a new problem: back in May 2023 Techdirt wrote about a lawyer who had submitted a brief in a personal injury case containing a number of made-up citations. Nor is it a problem that is going away. A recent case involved a lawyer representing the AI company Anthropic, who used an incorrect citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers.

Similar cases have been cropping up in the UK, and a High Court judge there has had enough. In a recent ruling, High Court Justice Victoria Sharp examines two cases involving hallucinated citations, makes some general observations about the use of AI by lawyers, and lays down lawyers’ responsibilities if they use it.

One case involved a filing with 45 citations, 18 of which did not exist; in the other, five non-existent cases were cited. The court’s judgment [pdf] provides full details of how the hallucinations came to light, and how the lawyers involved responded when they were confronted with the non-existent citations. There is also an appendix with other examples of legal hallucinations from around the world: five from the US, four from the UK, three from Canada, and one each from Australia and New Zealand. But more important is the judge’s discussion of the broader points raised. Sharp begins by pointing out that AI can certainly be useful, and is likely to become an important tool for the legal profession:

Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts. A recent report into disclosure in cases of fraud before the criminal courts has recommended the creation of a cross-agency protocol covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material. Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.

But that positive view comes with an important proviso:

Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.

This is not to be understood as a vague call to do better. Sharp wants to see action from the UK’s legal profession that goes beyond the existing guidance from regulatory bodies, which she also discusses:

There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers [groups of barristers] and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence.

And for those who fail to do this, the court has a range of punishments at its disposal:

Where those duties are not complied with, the court’s powers include public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and referral to the police.

In one of the two cases discussed in the ruling, the judge declined to hand out a serious punishment to a lawyer who had failed to check her citations, even though there were sufficient grounds for doing so. Sharp gave a number of reasons for this in her judgment, including:

our overarching concern is to ensure that lawyers clearly understand the consequences (if they did not before) of using artificial intelligence for legal research without checking that research by reference to authoritative sources. This court’s decision not to initiate contempt proceedings in respect of Ms Forey [the lawyer in question] is not a precedent. Lawyers who do not comply with their professional obligations in this respect risk severe sanction.

It will probably take a few “severe sanctions” meted out to lawyers who use hallucinated precedents without checking them before the profession starts taking this problem seriously. But Sharp’s ruling is a clear indication that, while English courts are quite happy for lawyers to use AI in their work, they won’t tolerate the errors such systems can produce.

Follow me @glynmoody on Mastodon and on Bluesky.

Filed Under: ai, anthropic, barristers, chambers, chatbots, citations, claude, common law, contempt, featured, genai

Companies: anthropic

