Bridging Technology and Law: Examining the Latest Trends in Generative AI within the UK Legal Field

3rd April 2024

Introduction

In 2023, the Law Society of the United Kingdom (the “UK”), the UK Solicitors Regulation Authority and the UK Government released statements and guidance on the use of Artificial Intelligence in the legal field. 

These statements and guidance do not impose any rules or restrictions on the use of Artificial Intelligence (“AI”); however, they offer helpful considerations that can be incorporated into risk management frameworks as well as data protection and IT policies. 

Overview of legal and regulatory developments for the legal sector in the United Kingdom 

The UK Government has adopted a cross-sector, outcome-based framework for regulating AI, underpinned by five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Sectoral regulators are to implement the framework in their respective domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their annual AI strategic plans by 30 April 2024. 

In March 2023, the UK Department for Science, Innovation and Technology and the Office for Artificial Intelligence released a Policy Paper titled “Pro-innovation approach to AI regulation”. 

Further, in November 2023, the UK Law Society and the UK Solicitors Regulation Authority (the “UK SRA”) released statements and guidance in relation to the use of AI.  

We provide an overview of the respective publications below. 

UK Government’s position on AI Regulation  

The UK Department for Science, Innovation and Technology and the Office for Artificial Intelligence (collectively referred to as the “UK Government”) introduced a comprehensive policy paper titled “A Pro-Innovation Approach to AI Regulation” (the “White Paper”), which was presented to the UK Parliament on 29 March 2023. 

The White Paper sets out the UK’s stance on regulating AI, acknowledging the significant advancements and efficiencies AI has brought to various sectors, such as traffic monitoring and fraud detection in banking systems. Alongside these benefits, the UK Government recognises the inherent risks associated with AI, some stemming from unintended consequences and the absence of adequate controls to ensure responsible AI deployment. 

The White Paper outlines three primary objectives for the UK’s AI regulatory framework, aimed at fostering clarity, collaboration among government, regulators, and industry, and unlocking innovation: 

  • Drive Growth and Prosperity: The regulatory framework seeks to stimulate economic growth and prosperity by facilitating AI innovation. 
  • Increase Public Trust in AI: Building public trust is deemed crucial, necessitating measures to ensure transparency, accountability, and fairness in AI systems. 
  • Strengthen the UK’s Global Leadership in AI: By fostering an enabling environment for AI development, the UK aims to bolster its position as a global leader in AI technology. 

The UK Government believes these objectives are best achieved through a flexible, principles-based framework. The White Paper states that this approach would ‘better strike the balance between providing clarity, building trust and enabling experimentation’. 

The key principles at the heart of the proposed framework are: 

  • Safety, Security, and Robustness: Ensuring that AI systems are safe, secure, and robust to mitigate potential risks. 
  • Appropriate Transparency and Explainability: Promoting transparency and explainability to enhance understanding and trust in AI systems. 
  • Fairness: Upholding principles of fairness to prevent biases and discrimination in AI decision-making processes. 
  • Accountability and Governance: Establishing mechanisms for accountability and governance to address the ethical and legal implications of AI deployment. 
  • Contestability and Redress: Providing avenues for contestation and redress in cases of AI-related disputes or harm. 

The UK Government proposes leveraging existing regulatory bodies to implement the regulatory framework based on these principles. Initially, the framework will operate on a non-statutory basis within the purview of regulators’ existing responsibilities and mandates. The effectiveness of this non-statutory framework will be closely monitored, with considerations given to potentially introducing new statutory duties or broader legislative changes to further reinforce AI regulation in the future. 

As proposed in the White Paper, the government will set up a new central function to monitor and assess AI risks across the economy, support regulator coordination and address potential regulatory gaps. This will be supported by a new steering committee, including regulator representatives. The government has also conducted targeted consultations on its AI risk register and will continue to assess the regulatory framework. 

The Law Society of the United Kingdom – Generative AI Guide  

On 17 November 2023, the Law Society of the UK published a guide on the use of Generative Artificial Intelligence (“Generative AI”) in the legal sector (the “Guide”). The Guide is non-binding: it provides guidance only and does not impose any rules or obligations. 

The Guide includes an introduction to Generative AI for law firms and solicitors, covering possible use cases, matters to consider when using Generative AI, and risk management. It is aimed at solicitors and firms, particularly small and medium-sized firms, and aspects of it are also relevant for in-house solicitors. The Guide provides a broad overview of both the opportunities and the risks the legal profession should be aware of in order to make informed decisions about whether and how generative AI technologies might be used. 

Checklist when considering use of Generative AI

Section 1.4 of the Guide provides a comprehensive checklist of points to consider before using Generative AI: 

  • Define the purpose and use cases of the generative AI tool, 
  • Outline the desired outcome of using the generative AI tool, 
  • Follow professional obligations under the SRA Code of Conduct, Standards and Regulations, and Principles, 
  • Adhere to wider policies related to IT, AI, confidentiality and data governance, 
  • Review the generative AI vendor’s data management, security and standards, 
  • Establish rights over generative AI prompts, training data and outputs, 
  • Establish whether the generative AI tool is a closed system within your firm’s boundaries or also operates as a training model for third parties, 
  • Discuss expectations regarding the use of generative AI tools for the delivery of legal services between you and the client, 
  • Consider what input data you are likely to use and whether it is appropriate to put it into the generative AI tool, 
  • Identify and manage the risks related to confidentiality, intellectual property, data protection, cybersecurity and ethics, 
  • Establish the liability and insurance coverage related to generative AI use and the use of outputs in your practice, 
  • Document inputs, outputs, and any errors of the generative AI tool if this is not automatically collected and stored, 
  • Review generative AI outputs for accuracy and factual correctness, including mitigating biases and fact checking. 

Guidelines for Effective Risk Management with Generative AI Integration

Section 3 of the Guide provides a helpful overview of the matters a solicitor should consider when using Generative AI. The Law Society of the UK also emphasises in the Guide that the principles of the UK SRA Standards and Regulations continue to apply. 

The Guide outlines that the use of Generative AI should be considered in relation to the business and organisational needs of a firm, including the responsibilities of and to the board and key affected stakeholders. In addition to effective risk management, this could include the following points: 

  • Business alignment, 
  • Purpose and scope, 
  • Stakeholder communications, 
  • Cost and ROI analysis, 
  • Pricing, 
  • Ongoing training. 

Legal implications 

According to the Guide, the following fields of law should be considered when using Generative AI: 

  • Intellectual property, 
  • Data protection and privacy, 
  • Cybersecurity, 
  • Ethical considerations (compliance, lawfulness and capability; transparency; and accountability).  

UK Solicitors Regulation Authority Risk Outlook report: The use of artificial intelligence in the legal market 

On 20 November 2023, the UK SRA published a report on the use of AI in the legal market. The report does not impose any rules or obligations.  

The report discusses the impact of AI in the legal sector, emphasising its increasing accessibility to small and medium-sized firms. This increased utilisation has had a positive impact on work performance and conditions, offering cost and speed benefits, especially for financially constrained firms. However, concerns remain around system selection and regulatory obligations, leading some firms to hesitate in adopting AI. The UK SRA emphasises the importance of balancing technological adoption with maintaining public protection.  

Two key distinctions between AI and traditional IT systems are highlighted: 

  • Adaptivity: AI can generate outcomes that are not explicitly programmed, making it challenging to explain decision-making processes.  
  • Autonomy: Some AI systems can operate autonomously, which raises questions about accountability and responsibility for their outputs.   

Risk management 

The report highlights the risks associated with AI in legal processes. To illustrate the risk of inaccuracy, it gives an example in which an AI tool drafted a legal document citing non-existent cases. To mitigate these risks, firms are urged to prioritise ethical AI development and implement protective measures.  

The UK SRA has proposed five principles to manage AI risks effectively: 

  • Safety, security and robustness: The AI system must be chosen carefully to ensure it meets the firm’s needs. Staff must be properly trained in what is and is not acceptable, as well as in how to use the AI effectively.  
  • Transparency and explainability: Clients must be informed that an AI system will be used on their matters, and staff must be informed about who can access the system and make decisions.  
  • Fairness: The UK General Data Protection Regulation must be complied with, and the firm must ensure that the personal data being processed is what can reasonably be expected.  
  • Accountability and governance: Staff remain responsible for the firm’s activities and cannot rely solely on the AI. There should be a process in place to check and verify the information or content produced by the AI.  
  • Contestability and redress: Clients as well as staff should be able to contest AI decisions they disagree with.  

How can Zeidler assist?

If you have any questions or require support, the Zeidler Legal Team is here to help. Our global team of professionals stays up to date on the latest legal, regulatory and compliance changes concerning the use of Artificial Intelligence, as well as the general legal, regulatory and compliance changes affecting the asset management industry. If you require additional information or assistance, please get in touch with us.

Co-Author

Lynn Lyoba

Co-Author

Patricia Nitschke