They took our jobs! The law and AI — are you ready for the robot revolution?
Welcome to What Matters — part two in our series of topics that aim to put some of the critical pieces together for lawyers. This month, we explore the rise and rise of artificial intelligence: how it is already impacting legal practice and what the future for human lawyers might look like. A human wrote this article. We trust you’ll take our word for it 🦾 😉
The notion of having a human-like conversation with a machine has been around in science fiction since 1872. From Star Wars, Minority Report and Black Mirror to Siri and Alexa, we’ve been gaining familiarity with the idea of working alongside AI.
Yet when generative AI stormed onto the scene in late 2022, the promise that talking to AI could feel like interacting with a human captured the world’s imagination, fear, and loathing (again) about the robot age. Technologies like machine learning and natural language processing already quietly manage our Netflix suggestions, translate web pages, and filter spam emails. But the arrival of ChatGPT shifted things. With no specialised training, data science knowledge or programming skills needed, anyone (and everyone) could interact with ChatGPT.
Learn more about generative AI and Large Language Models ➡️
We prompted AI to morph lawyers (from The Rainmaker) into robots (from Star Wars). As for the results, it’s not exactly what we imagined, but you’re welcome.
The robots are coming!
The underlying technology, which is trained to follow instructions in a prompt and provide a detailed response, lays the foundation for more software that could “take away our jobs”. Cue hype and misconceptions. Yet the possibilities to transform how we work are worth exploring. With increasingly accurate, secure and commercial AI platforms, could AI remove menial, time-consuming tasks from our to-dos, leaving humans free to exercise creativity and specialised knowledge? Or is an omnipresent “robot revolution” actually on the way?
At Sky Discovery, AI is not a new technology. It’s integrated into our eDiscovery tools and approaches. Yet we’re always curious to understand the future possibilities and impacts of this technology for legal firms and lawyers (our clients). As we innovate with AI and continue to learn from its presence in our workflows, we’re thinking about what matters. How might generative AI improve the ways we manage data volumes, legal input, and emerging data sources? What tasks make sense to keep “human”, and what frameworks do we need to leverage technology for the rest?
Here, we explore AI in the legal realm and share our process for managing privacy, ethics, and verification.
Isn’t the law just different?
AI has been used in legal applications for longer than you might realise. For example, AI-powered document review is well-established and court-approved in many jurisdictions. Yet “knowledge professions” such as the law have not always felt the full force of emerging technologies — due to the specialised nature of the work, regulatory and ethical obligations, and the concept of the billable hour.
However, generative AI has caused more of a stir within the traditional knowledge professions than previous AI advancements. AI can now generate texts in different styles and from different viewpoints. This means platforms can mimic legal language to draft memos and letters, summarise and cite long legal documents, carry out document review exercises, and assist with legal research.
US start-up Harvey has built its AI offering with legal domain knowledge to optimise the experience for lawyers — and it’s gaining traction with some of the world’s largest firms. We’ve also seen leading law firms like Ashurst report on trials of AI within their practice. Although it’s still early days for wider law firm adoption of AI, initial signs are promising.
Full speed ahead?
It’s not all positive news for AI. In the well-publicised US case Mata v. Avianca, filings referencing “fake” cases led to sanctions and global humiliation for a US lawyer who used ChatGPT as a search engine to find relevant case law. Although ChatGPT assured the lawyer that every cited case was real, they were in fact fabricated — kicking off a string of similar occasions where “hallucinated” information was submitted to courts. Hallucination (the technical term for AI confidently providing factually incorrect information) has been a key argument against the use of AI in legal work. Techniques for reducing hallucinations, such as grounding responses in retrieved source documents (retrieval-augmented generation, or RAG), have improved the accuracy of AI responses, yet concerns about the reliability of AI in legal work remain.
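For the technically curious, the grounding idea can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s implementation: the two-entry “corpus” and the keyword-overlap scoring are stand-ins for a real document store and retrieval model. The point is the shape of the workflow — retrieve supporting text first, answer only from what was retrieved, and refuse rather than invent.

```python
# Toy sketch of "grounding": answer only from retrieved source text,
# the core idea behind retrieval-augmented generation (RAG).
# The corpus entries and scoring here are illustrative only.

CORPUS = {
    "Mata v. Avianca": "Sanctions were imposed after fabricated AI-generated citations were filed.",
    "Da Silva Moore v. Publicis": "The court approved the use of technology-assisted document review.",
}

def retrieve(query: str, corpus: dict) -> list:
    """Rank sources by crude keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = []
    for title, text in corpus.items():
        overlap = len(q_words & set((title + " " + text).lower().split()))
        if overlap:
            scored.append((overlap, title, text))
    scored.sort(reverse=True)  # best-matching source first
    return [(title, text) for _, title, text in scored]

def grounded_answer(query: str, corpus: dict) -> str:
    """Answer only from retrieved sources; refuse rather than invent."""
    hits = retrieve(query, corpus)
    if not hits:
        return "No supporting source found -- declining to answer."
    title, text = hits[0]
    return f"{text} (source: {title})"

print(grounded_answer("Which case approved technology-assisted review?", CORPUS))
```

Production systems replace the keyword overlap with semantic search over a verified document set, but the discipline is the same: no retrieved source, no answer.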
In an industry where the billable hour is king, there are also questions around how law firms can use AI without jeopardising their revenue models. If AI can help complete certain tasks more quickly, then the hours billed for those tasks will decline. Access to secure, private AI platforms isn’t free, so law firms need to balance the expense of AI systems against the benefits they hope to gain from them. At present, using generative AI tools can be costly, and while the direct cost of model use appears to be declining, AI-powered tools for law firms will likely increase prices to recover their early investment in this technology. Further, concerns about power usage and chip shortages mean that the overall costs of AI are still unclear. As always, there are no silver bullets here.
What matters (from our perspective)
At Sky Discovery, our core values are centred on being a great place to work, providing outstanding service to great clients, and making eDiscovery better. Our focus on innovation is central to all three of these values. So we’re comfortable with the experimental nature of getting to know emerging technologies. When new applications like ChatGPT gain traction, we’re curious how our eDiscovery toolkit might shift, so that we can keep improving how we manage and interrogate large volumes of data efficiently, defensibly, and cost-effectively.
For us, AI is a powerful extension of our market-leading innovation culture. We approach it as a tool — one we’ve spent years exploring how to implement into our offering. Our growing AI capabilities range from context-based document queries and fact-checking the validity of stated references to organising, linking, and tracking the sequence of events. And we’ve been running client partner workshops to uncover additional use cases for GenAI in eDiscovery.
Security, accuracy, and knowing what actually matters, as always, are paramount. We’re truly excited to keep exploring this “half robot, half lawyer” paradigm where we can leverage GenAI for time-cost efficiencies in eDiscovery. While questions remain, measured optimism has led us to some initial insights.
Find a use case
Before investing in AI technology — as with any legal technology — have a good understanding of where it may add value for your practice. Onboarding AI without a strategy for how to implement the tools, protect client data, and track ROI will result in AI being sidelined by lawyers who don’t know how to use it effectively. Like any new software embedded into a workflow, we need to allocate time to understanding how it works and how to use it efficiently. Learning by doing, or allocating specific training time to build fluency, will ensure its use becomes muscle memory soon enough.
Trust but verify
Guidance from the UK Solicitors Regulation Authority and Law Society, and from courts including the Queensland Courts and the Supreme Court of Victoria, stresses that work produced using AI must be stringently fact-checked. While AI is taking steps toward becoming more reliable in legal use cases, it’s always important to verify any factual information generated by AI. A cautious “trust but verify” approach needs to be built into the “people–process–technology” triangle, so that the workflows around AI and the training provided to legal teams consistently stress the importance of validation.
Data security and privacy
Maintaining confidentiality and privilege of client data is paramount in the law, and using public tools such as ChatGPT is ill-advised in an enterprise setting. Finding ways to use AI in a secure and monitored way is critical. Whether using Microsoft Copilot in a Microsoft 365 environment, licensing other tools that protect sensitive data, or building AI in-house, close collaboration between lawyers, IT, compliance and risk management is essential to establishing secure frameworks for AI.
Ethical approaches to AI
The ethics of AI are hotly debated, and no implementation of AI technology should be green-lit without considering issues such as accountability, bias, transparency, and compliance. For example, if an AI system is used to make automated decisions, having a process whereby an individual can review and challenge those decisions can improve accountability. It’s worth considering whether you need to disclose the use of AI to your clients, and whether you can explain to clients how their data is kept confidential.
Another key issue in relation to AI is whether bias is encoded into the models, and how this bias can be mitigated. Systematic and regular benchmarking of AI, human auditing of AI results, and an understanding of how bias can be introduced and propagated through AI systems will be key discussion points for the foreseeable future. In addition to technological mitigations for bias, regulatory efforts such as the proposed Australian regulatory guideline and the EU AI Act aim to address and monitor the use of AI in high-risk applications such as the law.
AI might be here to automate, but “it’s not automagical.”
Avoiding AI isn’t the answer. Naively using it isn’t either. While innovation in language processing is happening at the pace of a supercomputer, the hype, reality and misconceptions about AI technologies are significant. ChatGPT has laid the foundation for even more software that could take away our jobs, starting with more menial tasks that one could argue humans are not designed for anyway.
Lawyers need to produce strategic, logical, defensible work and are time-poor as it is with human-generated material alone. Why wouldn’t you consider being part lawyer, part robot? Using AI intelligently will likely be the only way to keep pace with and counter the volume of noise, content, and data generated by the proliferation of prompt-happy public experimentation.
Either way, as McKinsey & Co says, you need a strategy and system to set your firm up for success. Let us know if we can help you with that 💅🏼
References
We’ve gathered some resources with constructive insights and guidelines for implementing AI within law firms and the legal technology sector:
The Australasian Institute for Judicial Administration published an in-depth report in 2023 on AI decision-making for judges, tribunal members and court administrators, while the International Organization for Standardization advises on how to manage risk.
Ashurst shares what it learned from running generative AI trials across 23 offices in 14 countries, including the ability to overcome blank page syndrome.
Deloitte surveyed senior legal leadership from 43 of its largest clients to understand GenAI’s benefits, barriers to adoption, and impact on the legal ecosystem.
Legal sector analyst Jordan Furlong speculates on how the shape of practising law will change with AI — more thorough work, better client relationships, and making legal support more accessible to the people who need it.
Peter Duffy’s newsletter Legal Tech Trends does what it says on the tin, unpacking legal tech and AI insights and developments every few weeks, while The Brainyacts drops tips every couple of days.
MinterEllison’s AI lead Sam Burrett, iManage legal practice lead Jack Shepherd, and tech lawyer Raymond Sun all write on AI for legal professionals.
The Centre for Legal Innovation has aired more than 100 episodes on AI and the law.
What Matters is a bi-monthly newsletter providing helpful insights and ways to win for lawyers and their teams. We believe it’s one email worth reading — and we’ll never send more than six (6) per year. In our next edition, we’re exploring burnout 🥵 See you in two (2) months 👋