There is little doubt that AI will have a number of important effects on society generally, and on legal interactions in particular. In this article, we discuss a few of these effects and consider where legal liability (i.e. the obligation to pay damages or suffer some other penalty) may lie. The question of legal liability in the context of the use of AI arises in a variety of settings.
Do we need lawyers anymore?
Since some people are starting to think that AI can (apparently) conduct legal research reliably (see below, however, where the English Court disagrees) and can answer legal questions in an at least apparently competent manner, the first question in many people’s minds is whether lawyers are needed any more, or whether, if they are still needed now, this is merely a function of the nascent development stage of AI, such that as AI improves, lawyers will become completely redundant.
One important answer to bear in mind is that, no matter how expert AI may become, it is not, as a matter of law, a legal person capable of suing and being sued in its own name, and this central legal fact has many significant consequences. First, not being a legal person, an “AI Bot” (or whatever term one may use to refer to the AI system) cannot incur legal liability, whether civil or criminal – the legal person who created the AI Bot may incur legal liability, and so may the legal person who uses the AI Bot as a tool, but the AI Bot itself is not a legal person capable of bearing legal duties or rights. For this reason, accepting legal advice “from an AI Bot” is not the same as receiving advice from a lawyer, since the latter is a legal person with civil and criminal liability, bearing legal duties to avoid negligence as well as ethical duties enforced by the appropriate regulators. Any lawyer who uses such tools remains the legal person liable for any loss caused by the resulting advice if that advice is negligently given.
The essential point remains that an AI Bot is nothing more than a tool, and it follows that if it is used in the world, someone is likely to be liable where that use causes harm or loss to others in a legally actionable manner. In the case of a lawyer who uses AI to assist in compiling advice, it is the lawyer who remains liable for the use of this tool in his or her practice, in the same way that a lawyer must use any other resource with due care and skill – for example, a lawyer must use the latest legal sources and must check that they are correct before presenting them to the client as advice or to the Court as authority (see below for more on this point).
AI is not a legal person, but is akin to a slave under Roman law
Some commentators have suggested that an AI tool or Bot may be analysed, from a legal point of view, as an agent of the user, with a mandate to carry out certain actions. The problem with this approach, attractive as it may otherwise be, is that in law the relationship of principal and agent requires two legal persons (the principal and the agent), and an AI Bot, not being a legal person, can therefore never be an agent in law of another legal person.
An AI Bot is rather like a slave in Roman law: not a legal person but an item of property, owned by a master. An AI Bot shares certain prima facie features of the ancient slave, such as taking and carrying out instructions, being owned by the master as property, and sometimes acting beyond or contrary to instructions (as AI is known to do by means of what is termed “hallucination”). The Roman law analysis is therefore helpful and apposite for analysing liability arising from the use of an AI Bot, not because it is a binding set of rules in modern law, but because it represents a system which, if considered workable in principle, could be imposed by statute.
In Roman law a master was liable for damage caused by his slave even where the slave went mad and attacked persons in the square, clearly acting outside his instructions and even contrary to his usual nature. Since AI is known to hallucinate and/or to act in unpredictable ways, it may be useful to analyse the liability of the user along the lines of the Roman slave owner, i.e. broadly on the basis that the owner is liable for any harm caused by his property. After all, a person who suffers loss as a result of advice given by an AI Bot will want to find someone liable for that loss, and, as discussed above, AI is not a legal person and so cannot attract (nor, importantly, satisfy any order to pay damages in respect of) such liability. The matter becomes even clearer where physical or other harm is caused by an AI Bot, as having a legal person bear criminal liability for such harm is plainly essential to the safety of the public generally.
Product liability versus ordinary rules of tort
Another issue to be considered is whether, when an AI system causes harm to a third party, legal liability should rest with the user of the tool (as a matter of tort) or with the maker of the tool (as a matter of product liability). Where an AI tool produces an incorrect response and thereby causes harm (such as by hallucinating), it could be argued that this is a latent defect for which the maker of the tool ought to be held liable. On the other hand, it is well known that AI tools may hallucinate, and it is therefore difficult to see how a user who used the tool without checking for hallucination, as a reasonable person would be expected to do, could avoid a finding of negligence. One would expect the contractual terms between the AI creator and any purchaser of the AI product to govern liability for hallucinations or other errors in comprehensive terms; no doubt this will give rise to litigation in due course. Ultimately, each case will depend on its own facts, and it will be instructive to see how the Courts develop the jurisprudence in this regard, but it is expected that the user of AI will bear the responsibility for the final work product.
Liability for presenting AI-hallucinated authorities in Court
A number of hapless legal practitioners have been found citing AI-hallucinated authorities to Courts around the world. In these instances, practitioners should bear in mind that when presenting an authority to a Court, they inescapably represent that the authority is real and that they have checked its import and relevance to the matter at hand. If a practitioner has in fact not so checked, he or she has engaged in a misrepresentation (and possibly a breach of professional duty, or even contempt of court), since he or she knows that the case has not been checked and yet represents that it has been. Once again, it is important (as noted above) that a legal person bear professional and ethical responsibilities in this regard, which is the core reason why AI cannot ever replace legal (or, for that matter, medical or other) professionals.
In the recent English case of Ayinde [2025] EWHC 1383 (Admin), the Court made the following important observations (at paragraphs 7 to 9 of the judgment) concerning the use of AI for legal research, the duty of lawyers in this regard, and the liability for errors:
“Those who use artificial intelligence to conduct legal research, notwithstanding these risks, have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example). Authoritative sources include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers.
This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister, for example, or on information obtained from an internet search.
We would go further, however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.”
In Ayinde the Court also dealt (at paragraph 26) with the liability of legal professionals for intentionally placing false materials before the Court, stating that this may amount to contempt of court:
“Placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to a contempt. That is because it deliberately interferes with the administration of justice. In R v Weisz ex p Hector Macdonald Ltd [1951] 2 KB 611 Lord Goddard CJ, Hilbery J and Devlin J held that an attempt to deceive a court by disguising the true nature of the claim by the indorsement on a writ (a claim for an unenforceable gambling debt dressed up as a claim for “an account stated”) amounted to a contempt. As to the requisite state of knowledge, mere negligence as to the falsity of the material is insufficient. There must be knowledge that it is false, or a lack of an honest belief that it is true: JSC BTA Bank v Ereschchenko [2013] EWCA Civ 829 per Lloyd LJ at [42], Newson-Smith v Al Zawawi [2017] EWHC 1876 (QB) per Whipple J at [12], Norman v Adler [2023] EWCA Civ 785 [2023] 1 WLR 4232 per Thirlwall LJ at [61].”
The Royal Court of Guernsey has not yet had occasion to deal with the above issues, but given the ubiquity of AI it is only a matter of time before such an issue comes before it in some shape or form. If you require legal advice on dealing with this sort of problem, please contact Advocate Jeremy Le Tissier, Advocate Clare Tee or Nick Taitz, who would be happy to arrange a consultation.