
The Rise of AI in Legal Submissions and the Risks that Follow

The Labour Court confirms that parties remain responsible for any AI‑generated inaccuracies in their legal submissions.

By Ailbhe Marsh

As AI use grows, so too does the expectation of rigorous human oversight.

The use of artificial intelligence (‘AI’) in preparing legal submissions is becoming increasingly common. In a previous article, we examined the decision in Fernando Oliveira v Ryanair DAC (ADJ-00055225), where the Workplace Relations Commission (‘WRC’) highlighted the risks of relying on unverified, AI-generated citations. Since that article, further guidance has emerged at both court and tribunal level, clarifying expectations and, importantly, reinforcing where responsibility lies.

Labour Court Guidance

In March 2026, the Labour Court issued formal guidance for parties who choose to use AI in preparing submissions.

The guidance identifies key risks, including ‘hallucinations’ (fabricated case law or legislation), confidentiality concerns where sensitive material is input into public tools, and the possibility of biased or inaccurate outputs that may not be relevant in Ireland.

At the centre of the guidance is the clear principle that responsibility cannot be outsourced to AI. All parties, whether legally represented or not, remain accountable for the material placed before the Court. Any AI-assisted documentation must be carefully reviewed, and particular caution is advised where AI is used for substantive aspects of a case.

The Labour Court also makes two important procedural points: parties may be asked to explain the submissions they make to the Court, and any incorrect or misleading information, whether generated by AI or otherwise, may negatively affect a party’s case. In a rapidly evolving area, the Court has indicated that this guidance will be updated over time.

Court of Appeal: Guerin v O’Doherty

The recent decision in Guerin v O’Doherty demonstrates how these issues may emerge in practice. Although arising in defamation proceedings rather than in an employment context, the Court of Appeal addressed the use of AI in litigation in clear and practical terms.

In Guerin, the defendant used AI to prepare submissions which included references to authorities that did not exist. Critically, these citations had not been verified for either existence or relevance, nor was the use of AI disclosed to the opposing party. The result was that the plaintiff’s legal team incurred additional time and cost attempting to locate non-existent authorities.

The Court reaffirmed the fundamental obligation on parties not to mislead the Court. This extends to ensuring that submissions are not based on fictitious authorities or unsupported legal propositions. The Court emphasised that this obligation applies equally to represented and self-represented parties.

The Court set out some guiding principles in this regard:

  • Parties may use AI to assist with research, but must do so responsibly and must not, even inadvertently, mislead the Court
  • Parties should be transparent about their use of AI with both the Court and the opposing party
  • Self-represented parties are ultimately responsible for their case
  • Any party relying on AI must independently verify the accuracy of their submissions
  • No authority should be cited unless it has been checked as a genuine decision and it supports the proposition advanced

The Court was clear that a failure to carry out this verification exercise is unacceptable: it risks misleading the Court and bringing the administration of justice into disrepute, and it imposes an unfair burden on the opposing party in wasted time and costs.

Recent WRC Decisions

Two recent WRC decisions demonstrate how these issues are arising in an employment law context. In Imrich Ferko v Beyond Reach Limited t/a Car Wash Crew (ADJ-00060622), the complainant accepted that his claim had been generated using AI and had not been fully reviewed. The Adjudication Officer noted that while such assistance from AI is not improper, the complainant remained responsible for the accuracy of his case. Ultimately, his inability to explain the AI-generated narrative reduced the weight attached to his evidence.

Similarly, in Desiree Goncalves v Valshan Unlimited (ADJ-00056185), the Adjudication Officer identified incorrect references consistent with AI-generated content and drew parallels between the case at hand and Oliveira.

Key Takeaway

Across the Labour Court guidance, the Court of Appeal’s decision and recent WRC case law, a consistent message has emerged. AI can be a valuable tool in preparing submissions, but it cannot replace careful human oversight. Accuracy, verification and transparency are essential, and parties who fail to meet these standards do so at real procedural risk.

Authors:

Ailbhe Marsh

Associate

Dublin
