Unlocking the Power of AI: Crafting a Responsible Usage Policy

18th October 2023

Introduction:

One of the most exciting recent developments in technology has been the proliferation and adoption of AI, specifically commercially available large language models (LLMs) such as ChatGPT, Llama 2, and Claude. These LLMs have had a massive impact across industries ranging from healthcare to fashion, disrupting established companies and fostering widespread innovation at every level of technical competence. While these tools have certainly proven impressive, they are not infallible, and their use is not without risk.

The Importance of Responsible AI Use:

Without safeguards or rules in place, critical internal data could be inadvertently disclosed to a third party, or erroneous output could feed into a business process unchecked. On the other hand, banning the technology outright can leave you at a competitive disadvantage, forgoing the productivity gains it offers. We therefore recommend adopting a responsible AI use policy to ensure proper use and to maximise the benefit your organisation receives from the technology.

What should a good policy include? 

Tailoring to your organisation:

A good responsible use policy should first and foremost be tailored to your organisation and industry. A great starting point is to look at policies you have already implemented, such as those governing the use of software or third-party vendors, and consider which of their requirements also apply to LLM-based tools. Depending on the method of access, such as a website or an API, several existing policies might govern the implementation of LLM-based tools. While pre-existing policies are unlikely to cover the exact use case of LLM-based tools in your organisation, they provide a solid foundation for further refinement.

Understand and define the scope:

Clearly define both who will use LLM-powered tools and which specific tools they will use. Will everyone at your organisation be able to use LLMs as part of their daily workload, or will use be restricted to certain teams or job functions? Who should authorise the use of specific LLM-powered tools, and will there be oversight of the process? Will you allow employees to access a tool through a web browser, or require access through an internal tool powered by an API? Some specialised tools, like GitHub Copilot, may be licensed company-wide even though their practical use is limited to engineering. Clearly understanding the scope of the tools you would like to implement is essential to tailoring an effective policy.
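As an illustration only, the sketch below shows one way a defined scope could be recorded and checked programmatically, for example inside a hypothetical internal gateway. The team names, tool names, and access routes are assumptions made for the example, not recommendations.

```python
# A minimal sketch of how a defined scope could be encoded and checked,
# assuming a hypothetical internal gateway. Team names, tool names, and
# access routes are illustrative assumptions, not recommendations.
APPROVED_TOOLS: dict[str, dict[str, str]] = {
    # team            tool -> permitted access route
    "engineering": {"GitHub Copilot": "ide_plugin", "internal_assistant": "api"},
    "legal": {"internal_assistant": "api"},
}

def is_authorised(team: str, tool: str, route: str) -> bool:
    """Return True only if this team may use this tool via this route."""
    return APPROVED_TOOLS.get(team, {}).get(tool) == route

# Example checks against the assumed scope above.
assert is_authorised("engineering", "GitHub Copilot", "ide_plugin")
assert not is_authorised("legal", "GitHub Copilot", "ide_plugin")     # out of scope
assert not is_authorised("engineering", "internal_assistant", "web")  # wrong route
```

Encoding the scope this way also makes the policy auditable: the same table that employees read is the one the gateway enforces.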

Data protection:

You’ll also want to make sure there are limits on what information can be provided to these tools. You likely do not want employees entering sensitive personal information or critical business knowledge into a tool that will then use that information for training purposes. Depending on the tool, there may be options that keep your data private and separate from any training set, such as using an enterprise edition or direct API access. Data privacy and control vary greatly between vendors, so reviewing each vendor’s policy and any associated contracts is critical. Remember: if you are not paying for a product, your data is the product.
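To make this concrete, here is a minimal sketch of a pre-submission filter that could sit between employees and an external tool, assuming prompts are routed through an internal gateway. The two patterns are deliberately simplistic assumptions; real deployments would rely on dedicated data-loss-prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- a sketch, not production-grade detection.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags before the
    prompt leaves the organisation's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com, card 4111 1111 1111 1111."))
# -> Email [EMAIL REDACTED], card [CARD_NUMBER REDACTED].
```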

Output controls:

You’ll also want to consider what controls will be in place around the use of outputs from LLM-based tools. In some cases, establishing a human-in-the-loop policy can be incredibly beneficial: before the output of an LLM-based tool is incorporated into information provided to external or internal stakeholders, an individual reviews it for accuracy and relevance to the question asked. In other cases, clear disclaimer language might be added to the output so the end user knows the results were generated by the tool. Depending on the specific implementation of LLM-based tools in your organisation, it is likely that multiple output controls will be used together. As a rule of thumb, the more critical the process, the more important the controls around LLM output become.
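As a sketch of how such a gate might look in practice, the example below models a hypothetical review queue in which nothing reaches a stakeholder without an explicit human decision, and approved answers always carry disclaimer language. The class, field names, and disclaimer wording are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

# Illustrative disclaimer wording -- adapt to your organisation's policy.
DISCLAIMER = "This answer was generated with the assistance of an AI tool."

@dataclass
class LlmOutput:
    question: str
    answer: str
    approved: bool | None = None  # None = still awaiting human review

def release(output: LlmOutput, reviewer_approves: bool) -> str | None:
    """Record the reviewer's decision; only approved output leaves the
    queue, and it always carries the disclaimer when it does."""
    output.approved = reviewer_approves
    if output.approved:
        return f"{output.answer}\n\n{DISCLAIMER}"
    return None  # rejected output is never shown to stakeholders

draft = LlmOutput("What is our notice period?", "Thirty days, per clause 4.2.")
print(release(draft, reviewer_approves=True))
```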

Ongoing reviews:

Finally, be sure to revisit the policy as time and technology progress. The LLM field has grown impressively, with multiple competitors and products serving varying purposes and niches. New developments appear daily that open new opportunities while also exposing you to new issues. While it is important to set reasonable standards around use, treating the policy as a living document will allow you to maximise opportunities and reduce risk from emerging LLM tools.

Conclusion and Next Steps:

The use of LLM tools comes with real complexities. There are many factors to consider when implementing an effective policy that enables the safe use of these emerging technologies, and the policy must reflect your organisation’s specific situation and usage of the tools. With the right approach and proper guardrails in place, LLM tools can be a substantial value-add to your organisation.

For expert guidance in drafting a custom policy that suits your organisation’s requirements, reach out to Zeidler Group today.

Author

Alex Mercer