
By Ryan Zahrai from Zed Law

Artificial intelligence is becoming part of day-to-day legal work in Australia, but its use in courtrooms is raising new and urgent questions.

In recent months, AI-generated content has been at the centre of serious professional missteps. A solicitor in Western Australia submitted documents with entirely fabricated citations (produced by the AI tools Claude and Microsoft Copilot) and was referred to the Legal Practice Board, with the Federal Court awarding costs against him. In Victoria, a King’s Counsel included fictional judgments in a murder trial, prompting delays and judicial criticism. In a separate Federal Court case, a law firm faced indemnity costs after filing submissions with false citations sourced via generative AI.

These incidents have triggered a formal response from the courts, with jurisdictions in NSW, Victoria and Queensland now issuing binding guidelines requiring disclosure, verification and, in some cases, pre-approval before any AI-generated content is submitted.

The guidelines aren’t a rejection of AI; they are a response to its misuse. And they signal a broader shift from innovation for convenience to innovation with accountability.

Drawing the line: What AI can and cannot do

Most lawyers are already using AI in some form, whether it’s document summarisation, early-stage drafting or automating client intake. The tools aren’t new. What’s new is the need for guardrails.

The profession must now be clear on where AI fits, and where it doesn’t. At Zed Law, we use AI in a tightly scoped capacity: to support internal research, surface patterns in contract terms, and streamline high-volume administrative work. We’ve also made a cash investment in what we think is one of the best AI legal tools on the market, which we also offer to clients. But it’s never used in isolation. It doesn’t draft submissions. It doesn’t generate evidence. And we don’t treat AI output as client-ready. It’s a starting point, not an endpoint.

The Supreme Court of Victoria’s Guidelines for Litigants state that AI-generated material must reflect actual knowledge, be transparent in its use, and not mislead the court. That guidance reflects a growing consensus. The role of AI in legal work should be clearly defined and tightly controlled.

If you wouldn’t rely on a junior lawyer to produce it without review, you shouldn’t rely on AI either. Human lawyer assurance is key. 

Why disclosure isn’t optional

The courts are right to draw a hard line on disclosure. If AI tools are used in preparing legal material – whether for court, for regulators or for clients – that use should be declared. More importantly, the outputs must be verified. Not glanced over. Verified properly.

Lawyers remain personally responsible for every word that leaves their desk. That includes anything first drafted by a machine. Clients rely on our work to make decisions that carry risk, exposure and reputational consequences. Courts rely on it to ensure procedural fairness. That trust falls apart the moment content is submitted without proper scrutiny.

The Supreme Court of NSW has made this explicit. Practice Note SC Gen 23 requires lawyers to verify the existence, accuracy and relevance of any authorities generated with the assistance of AI. It reinforces what should already be understood. AI may assist with drafting, but it can’t replace professional responsibility.

Disclosure isn’t about process for the sake of it. It’s about making clear who is responsible, and who isn’t. AI doesn’t carry legal liability. If it makes a mistake, the consequences sit with the lawyer of record. That accountability can’t be outsourced.

The minefield ahead: Risk, trust and professional reputation

The profession is now navigating a minefield. Tools that offer efficiency and speed can, if used poorly, produce the opposite: delays, confusion and long-term reputational damage.

Firms need to acknowledge this and adapt. That means reviewing internal policy. Who is using AI, and for what? What layers of review are in place? Are junior lawyers being trained on how to use these tools responsibly, or are they just being told to “get it done faster”?

And are the tools fit for purpose? Are they just increasing efficiency by a few percentage points, or are they genuinely addressing pain points for the firm and, in turn, its clients?

It also means understanding that the risk extends beyond court proceedings. A poorly verified AI-generated contract clause can lead to litigation. An inaccurate advice note can expose clients to tax or regulatory issues. The real risk isn’t always immediate – it’s downstream.

Regulating the profession, not just the platform

It’s tempting to place the burden of risk on the tools themselves. But tools don’t make decisions. People do. Which is why the responsibility must remain with the lawyer, not the platform.

Australia’s courts are acting quickly, but the broader profession must follow with equally serious investment in education and internal safeguards. We need:

  • Clear ethical guidance on the use of AI in legal work
  • Mandatory training on the capabilities and limitations of large language models
  • Policy updates to reflect AI use in practice management systems and client communications

Law schools and professional development bodies also have a role to play. AI is not a theoretical topic or a future concern. It is embedded in the workflow now, and the legal curriculum needs to reflect that.

From innovation to accountability

AI isn’t the problem. Complacency is.

Used well, AI will make legal services more efficient and accessible. It will help lawyers focus on higher-value work. It will support faster decision-making. But none of that matters if it undermines the core reason clients engage lawyers in the first place: trust.

The next phase of innovation in law won’t be defined by what tools we adopt. It will be defined by how responsibly we use them.

That means putting ethics before expedience. It means owning the output, regardless of how it was generated. And it means recognising that the future of the profession depends not just on what we build, but on whether we as lawyers remain trusted to use and verify it.

Ryan Zahrai