Editorial | The AI protocols in courts
Last month’s promulgation by Chief Justice Bryan Sykes of protocols for the use of AI-generated documents in court proceedings advances his long-standing advocacy for the use of artificial intelligence (AI) technologies in the more efficient and timely delivery of justice in Jamaica.
So far, most attention has been paid to what the document says about what lawyers can or cannot do in their AI-assisted submissions. Which is understandable.
However, Justice Sykes must quickly push for the next phase of the project – a broader use of AI technologies in the management of the court system, from filing documents and scheduling cases to helping judges analyse briefs.
This, of course, will require additional investment in the judicial system, to which the obvious retort will be about the many sectors that compete for Jamaica’s limited resources.
That is fair. Except that, in considering investment priorities, justice ought not to be placed merely on the social welfare side of the ledger. A good, sound, fair, competent and efficient justice system, in concert with effective law enforcement, as this newspaper has consistently stressed, is a critical part of the economic infrastructure, vital to sustainable growth.
Crime, as we often stress, is a drag on production. That is a cost to the economy. And that cost can’t be recouped at anywhere near its full value if, while law enforcement agencies catch and indict criminals, their cases meander through a log-jammed or inefficient judicial system.
Additionally, the courts adjudicate disputes between firms, between firms and their customers, and between individuals who disagree about business ventures. The slow resolution of disagreements adds to the transaction costs of doing business. That, in turn, can damage a country’s reputation as a place to invest and conduct business.
CAUTION TO LAWYERS
The major focus on the chief justice’s directive on the use of AI has, so far, been its caution to lawyers about how they apply the new technologies, on pain of sanctions if they are careless or incompetent.
This is understandable, given the several cases that have emerged around the world where lawyers used generative AI tools to prepare submissions that cited authorities or precedents that did not exist. They were manufactured by the AI tools.
In one matter in Australia last month, a lawyer was ordered by the Victoria Legal Services Board to undergo two years of supervised practice (he and his supervisor will have to report to the board quarterly) after he provided a judge with a list of bogus prior cases, which the court had requested, in his client’s enforcement dispute with an estranged wife last year. Justice Amanda Humphreys and her aides couldn’t find the cited cases in any credible law reports. They were AI hallucinations.
More than 20 cases have emerged in Australia in recent times in which AI systems fed lawyers false information, which the lawyers then presented to courts.
In another case, in England this year, 18 of the 45 case-law citations (40 per cent) relied on by a lawyer in an £89-million claim against the Qatar National Bank were fabrications. The lawyer’s client, improbably, accepted responsibility for using the AI tool that generated the falsehoods.
Clearly, dangers exist in total reliance on AI tools without human review to test the validity of what has been generated. In that respect, it is not only lawyers presenting information to courts who have to be careful. Judges, too, as was Justice Humphreys in Australia, have to be vigilant.
PART OF THE JOB
In any event, that is part of the job of a judge. No judge is expected to take a lawyer’s cited case at face value without ascertaining its facts, confirming its appearance in a verified law report and, critically, being clear about the full context of the argument and its relevance to the case to which it is being applied.
Obviously, when lawyers – who are accountable for the information they provide to judges – mislead courts, intentionally or otherwise, there are consequences for their misbehaviour. They can face sanctions ranging from censure by the court to being held in contempt or struck off by the profession’s disciplinary body.
Nonetheless, given the recognised value of AI tools, including efficiency gains (and even potentially clearer documents), further and better particulars would be welcome on the prohibition against using the tools to “draft, or alter affidavits or witness statements intended to reflect a person’s direct knowledge, belief or opinion”. What if a witness’s jumbled account of events were fed into a tool to draft a coherent statement?
Similarly, there are likely to be questions about the requirement to disclose if “any part of a document submitted to the court has been prepared using AI”. Lawyers who use the tools may be wary of being cast as a lower tier of professional.
Inside the courts, however, there is great value in AI systems. Judges can use these tools to develop synopses of cases and to quickly find cited cases in reputable law reports. These technologies and digital libraries have to be paid for.
Further, courts around the world, especially in developed countries, are increasingly applying AI technologies to improve their case and data management, as well as the recording of proceedings.
All of which adds up to a more efficient judicial system.