3 Oct 2024

Judge uses ChatGPT: A welcome development?

A noteworthy ruling by the Court of Gelderland in June 2024 has stirred the legal community. For the first time, a judge utilised ChatGPT, an AI language model, to make an estimation in a civil case. This raises significant questions regarding the admissibility and reliability of artificial intelligence (AI) in legal proceedings.

The case revolved around a dispute between neighbours concerning lost revenue from solar panels. One neighbour had built a rooftop extension that left the other neighbour's solar panels permanently in shadow. The judge was tasked with determining the extent of the financial damage. To ascertain the average lifespan of solar panels, the district judge consulted ChatGPT, which put it at 27.5 years. Reactions to this approach ranged from cautious optimism to strong criticism.

Case background

The judge faced a challenging calculation. The lifespan of solar panels and the associated returns were relevant factors in assessing the damages. Although the claimant provided figures and calculations, the judge found them difficult to follow. Instead of consulting experts or other sources, he opted to make an estimate “with the assistance of ChatGPT.”
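The arithmetic underlying such an estimate is, in essence, annual lost revenue multiplied by the panels' remaining expected lifespan. A minimal sketch of that reasoning, with all input figures purely illustrative (the ruling's actual amounts are not reproduced here):

```python
# Hypothetical damage estimate for lost solar-panel revenue.
# All inputs are illustrative assumptions, not figures from the ruling.

def lost_yield_damages(annual_loss_eur: float,
                       panel_age_years: float,
                       expected_lifespan_years: float = 27.5) -> float:
    """Estimate total damages as annual lost revenue times the panels'
    remaining expected lifespan. The 27.5-year default is the average
    lifespan ChatGPT suggested in the Gelderland case."""
    remaining_years = max(expected_lifespan_years - panel_age_years, 0.0)
    return annual_loss_eur * remaining_years

# Example: panels 5 years old, an assumed EUR 200 lost per year
print(lost_yield_damages(200.0, 5.0))  # 200 * 22.5 = 4500.0
```

The point of the sketch is only that the contested input is the lifespan figure: every euro of the outcome scales with the number the judge took from ChatGPT, which is precisely why its provenance mattered to the parties.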

The use of an AI tool like ChatGPT is a novel phenomenon in Dutch case law and has sparked considerable debate. Was it responsible to use AI? If so, should the judge not have first allowed the parties the opportunity to respond?

The comparison to the 'Googling Judge'

This ruling echoes a decision from the Dutch Supreme Court in 2011, which introduced the term “Googling Judge.” In that instance, a judge independently searched the internet for facts not present in the case files. According to the Supreme Court, this violated the principle of hearing both sides, as parties must always be given the opportunity to respond to facts raised by the judge.

Although the judge in the solar panel case stated that ChatGPT was only used as a “supplementary” source, a similar discussion arose immediately. Should parties have the chance to comment on information generated by AI? Many legal professionals on platforms like LinkedIn advocate for this.

Criticism: ChatGPT is not an expert and the judge has a passive role

Legal experts warn that AI tools, such as ChatGPT, are not designed for legal purposes. Much of the criticism centres on the fact that ChatGPT is not a specialist source. While a search engine like Google directs users to existing information, ChatGPT generates answers without direct source attribution. This complicates the verification of the information's accuracy, which is particularly problematic in a legal context where facts must be meticulously substantiated.

Another fundamental objection to the use of ChatGPT by judges lies in the principle of judicial passivity. Under Dutch law, a judge may only rule based on the factual material presented by the parties involved. This means a judge must limit themselves to the information provided and not conduct independent inquiries. By using ChatGPT to gather information, such as the lifespan of solar panels, a judge may be acting contrary to this principle. The question arises as to whether the judge remains sufficiently passive or becomes actively involved in compiling the factual material. This could lead to an uneven playing field between the parties, especially if they have not been afforded the opportunity to contest the information provided by AI.

Positive perspectives: AI as a tool

Despite the concerns, there are also positive viewpoints regarding the use of AI in the judiciary. Some emphasise that AI, such as ChatGPT, can serve as a useful tool for judges in analysing complex information or processing large volumes of data. They argue that this technology can help make the legal process more efficient, provided judges are aware of AI's limitations and use these tools appropriately. There is a call for comprehensive training for legal professionals in the use of AI, so that technology can be applied responsibly and expertly.

What can we learn from this case?

The Gelderland case prompts reflection on the role of AI in the judiciary. On one hand, AI presents opportunities to support judges' work and expedite processes. On the other hand, we must be cautious about placing blind trust in technologies that are not yet fully developed and not specifically designed for legal applications.

In practice, it seems crucial to establish clear guidelines for the use of AI in the judiciary. For instance, the American Bar Association has already published guidelines regarding the use of generative AI by lawyers in the United States. A similar framework could be beneficial in the Netherlands.

Furthermore, it is vital that parties are given the opportunity to respond to information obtained by the judge through AI. This would help ensure that the principle of hearing both sides is not compromised. As the 2011 Supreme Court ruling illustrates, this is a fundamental principle that cannot be easily dismissed, even in an era of rapid technological advancements.

Conclusion: A development requiring clear guidelines

The use of ChatGPT by the judge in Gelderland represents an intriguing development, yet it highlights the considerable work still needed before AI can be reliably integrated into the judiciary. While AI can assist with certain tasks, its use must be carefully considered and remain within the boundaries of procedural law and the rules of evidence. We will, of course, closely monitor developments in this area.

SPEE advocaten & mediation Maastricht