The UK Government has a stated objective to make the UK one of the top places in the world to build AI companies.

In March 2023 it published a white paper setting out its proposals on how it should regulate the use of artificial intelligence (AI) to ensure the UK can “create the right environment to harness the benefits of AI and remain at the forefront of technological developments … [while ensuring] the risks posed by AI can be addressed.” That same month there were also high-profile public calls for the development of powerful AI solutions – such as ChatGPT and Bard – to be paused for at least six months to ensure that the impact of AI can be carefully “planned for and managed”.

Against this backdrop, Audit Committees are looking to their internal audit and technology risk functions to provide assurance that the risks associated with AI solutions are being appropriately managed.

What is AI and how are businesses using it?

Over the past 15-20 years we’ve seen a number of technological advancements impacting how organisations operate, disrupting the marketplace and creating new opportunities. AI is poised to be more disruptive than anything before it, and many organisations are exploring how they can use AI to gain a competitive advantage and streamline their business processes.

Recently Google and OpenAI (heavily backed by Microsoft) launched Bard and ChatGPT respectively: AI-powered chatbots that simulate human conversation and can answer questions on almost any subject using natural language processing and machine learning. They can handle fact-based questions (eg, who is the King of Norway?), more complex process-driven questions (eg, what are the steps I need to go through to plan and run a conference for 2,000 people?) and creative prompts (eg, tell me a short story about a pony).

John McCarthy, regarded by many as the ‘father of artificial intelligence’, described AI simply as “the science of making intelligent machines.” Today AI is a broad term covering everything from robotic process automation (RPA) to augmented intelligence solutions and cognitive decision making performed by software robots (bots). Many organisations are investing in AI research and technologies – such as neural networks, big data processing, data mining, robotics, machine learning and image recognition – to help them analyse data or make complex decisions faster and more accurately than humans can. This can increase efficiency and reduce the need for humans to perform menial tasks.

Examples of where businesses are deploying AI include:

  • Automotive manufacturers developing self-driving vehicles
  • Businesses deploying web chatbots as a first point of contact for their customers
  • Recruiters using AI tools to filter candidates’ CVs before offering interviews
  • Hotels using AI to make real-time pricing updates to their rooms and improve ‘in-person’ customer service
  • Logistics companies using AI tools to plan, and update, optimal delivery routes in real time
  • Pharmaceutical organisations using AI to research drugs and diagnose illnesses in a fraction of the usual time
  • Retail stores using AI to automate elements of their supply chain and core financial processes

The challenges and risks associated with AI

Although AI provides an array of opportunities for businesses, there are a number of challenges that arise when implementing AI solutions. Audit Committees are increasingly looking to internal audit and technology risk functions to provide assurance that the risks associated with such strategies and solutions are being appropriately managed.

Key risk areas to consider include:

Strategy

With all the excitement around AI, organisations risk implementing AI solutions without a formal strategy and a valid business case, or ones that don’t serve the organisation’s business objectives, ultimately wasting their resources.

Speed

Many organisations are rushing to adopt AI without first assessing the risks involved or the controls required, and there are currently very few frameworks or AI-specific regulations in place.

Knowledge

AI and its associated technologies are still relatively new, and the skills needed to design, implement and manage AI solutions are scarce. Furthermore, employees will need to be trained to use new AI solutions, or redeployed into other areas of the business where their roles are made redundant by an AI solution.

Data

Fundamentally, AI systems rely on data. If the quality of that data is poor, or it is stored in separate, isolated systems, organisations may struggle to extract value from their investment in AI.

Complexity

AI systems are inherently complex: they rely on layers of code with multiple inputs and, at their purest, can learn and adapt without human intervention. If AI developments and solutions aren’t properly managed and monitored, they may make inaccurate (or even biased) decisions.

Governance

A lack of adequate governance arrangements over the development and deployment of AI may result in key business processes being undermined, breaches of company policy or delegated authorities, and unintended biases.

Ethics

A number of ethical concerns remain about using AI to make business decisions. Despite the shift in the public’s attitude towards AI, many people would still feel uncomfortable being subject to AI-driven decisions themselves. Individuals’ rights under regulations such as the GDPR also need to be taken into account before a business adopts AI.

Looking forward: the growth of AI and the challenge for internal audit and technology risk functions

The use of AI is expected to grow significantly over the coming years, with the global AI market estimated to grow to USD 1,591 billion by 2030, from USD 120 billion in 2022 (Precedence Research). Research by IBM in 2022 indicated that 35% of companies were already using AI in their business, with a further 42% saying they were exploring how to use it, and adoption rates are only expected to increase over the coming years.
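
For context, those market figures imply a compound annual growth rate of roughly 38%. A quick back-of-the-envelope check (an illustrative Python sketch, not from the cited research):

```python
# Growth rate implied by the Precedence Research figures quoted above
# (illustrative arithmetic only).
start, end, years = 120, 1591, 2030 - 2022  # USD billions; 8 years

cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~38.1% a year
```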

Against this backdrop, audit committees are starting to charge their internal audit and technology risk teams with providing assurance over the risks associated with their organisation’s use of AI; the design, performance, monitoring and governance of AI-based processes; and the programmes to implement AI solutions.

Assurance functions need to prepare for how and when their business will deploy, or is already deploying, AI solutions. This will involve gaining an understanding of how AI operates and upskilling so that they can provide advice and assurance to the business on the challenges and risks that AI brings. Given how quickly AI is evolving, it will be critical that they keep pace with developments and continue to provide meaningful assurance.

This poses a real challenge, as many internal audit and technology risk functions aren’t well equipped to provide the assurance their audit committees and boards are looking for. Acting now is all the more important given that a number of organisations are only just starting to embark on their AI journey.


An approach to auditing AI

Internal audit and technology risk teams need to adopt an approach to auditing AI that provides assurance that the associated risks are being managed without hindering their business’s ability to realise the opportunities that AI presents. Areas that internal audit teams may wish to consider include:

Strategy and governance

It’s key that an organisation’s use of AI has a valid business case and strategy, supports the wider business strategy and corporate objectives, and that the programmes to roll out AI are appropriately governed. It’s equally important that organisations’ boards and/or senior management teams implement appropriate structures, processes and procedures to direct, manage and monitor the use of AI across the organisation. As part of this, adequate communication plans should be in place to ensure stakeholders are made aware of how the organisation intends to use AI, and plans should be in place to help re-train employees whose roles will be disrupted (or replaced) by AI solutions. Arrangements should also be in place to respond to feedback, questions and complaints from customers regarding how their data is subject to AI processing.

AI technology control environment

All AI solutions, and the data they rely upon, reside in infrastructure – whether that’s hosted internally or externally in the cloud. This infrastructure needs to be properly managed, maintained and secured. In addition, the AI solutions themselves need to be appropriately administered so that unauthorised individuals can’t gain access to them. The teams that support the AI infrastructure and solutions also need training to fill any AI skills gaps, and any third parties involved in hosting or supporting AI infrastructure or solutions need to be proactively managed.
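
As an illustration of the kind of test an assurance team might run over access administration, the sketch below reconciles the accounts provisioned on an AI platform against an approved access register. The file and field names are hypothetical placeholders:

```python
# Illustrative access reconciliation: flag accounts provisioned on an AI
# platform that have no entry in the approved access register.
# File names and field names are hypothetical placeholders.
import csv

def load_users(path, field="username"):
    with open(path, newline="") as f:
        return {row[field].strip().lower() for row in csv.DictReader(f)}

provisioned = load_users("ai_platform_users.csv")
approved = load_users("approved_access_register.csv")

unauthorised = provisioned - approved  # access with no recorded approval
stale = approved - provisioned         # approvals with no matching account

print(f"{len(unauthorised)} unauthorised account(s): {sorted(unauthorised)}")
print(f"{len(stale)} stale approval(s): {sorted(stale)}")
```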

Data governance and quality

All AI is underpinned by data. If the quality of this data is poor, the data isn’t sufficiently maintained, or disparate systems holding data aren’t well integrated, this could lead to difficulties in implementing an AI solution or invalid decisions being made based on the AI output. To enable the implementation of an AI solution, most organisations will need to transform their data environment. Such ‘big data’ strategies will need to be carefully planned, aligned to the business objectives and monitored to ensure that they deliver the intended value.
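
By way of illustration, a basic data-quality profile of the kind described above might look like the following sketch. The file name and the 5% completeness tolerance are hypothetical assumptions:

```python
# Illustrative data-quality profile of a dataset feeding an AI model.
# The file name and the 5% tolerance are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("model_training_data.csv")

print(f"rows: {len(df)}")
print(f"duplicate rows: {df.duplicated().mean():.1%}")
print(f"average missing rate per column: {df.isna().mean().mean():.1%}")

# Flag columns breaching a simple completeness threshold before the data
# is allowed into the model pipeline.
TOLERANCE = 0.05  # tolerate at most 5% missing values per column
incomplete = [col for col in df.columns if df[col].isna().mean() > TOLERANCE]
if incomplete:
    print("Columns failing the completeness check:", incomplete)
```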

AI model development, monitoring and maintenance

The development of AI solutions must be formally managed and controlled, even in Agile environments, to ensure they deliver the intended results and meet the required quality levels. Process mapping exercises should be performed, formally reviewed and approved as part of these processes, to ensure the models are truly aligned to business processes and support company policies. Where possible, pilot studies should be performed to assess the impacts of AI before a full rollout. Exception-handling routines should be established to manage situations where an AI model’s activities fall outside its expected operating parameters. In addition, where possible, natural language processing should be used to make the solution appear human to the user. Access to source code and development environments should be carefully administered to help prevent inappropriate changes from being made.
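
A minimal sketch of the exception-handling idea above: each model decision is checked against agreed operating parameters, and anything outside them is routed to a human review queue rather than actioned automatically. The thresholds, decision fields and queue are hypothetical:

```python
# Illustrative guard rail: auto-apply a model decision only when it falls
# within agreed operating parameters; otherwise escalate to human review.
# The thresholds and decision fields are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class OperatingParameters:
    min_confidence: float = 0.80    # below this, the model is too unsure
    max_price_change: float = 0.15  # no automated price move beyond 15%

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def send(self, decision, reason):
        self.items.append((decision, reason))  # queue for human review

def handle_decision(decision, params, queue):
    """Apply a decision automatically only if it is within parameters."""
    if decision["confidence"] < params.min_confidence:
        queue.send(decision, "confidence below threshold")
        return "escalated"
    if abs(decision["price_change"]) > params.max_price_change:
        queue.send(decision, "price change outside agreed bounds")
        return "escalated"
    return "auto-applied"

queue = ReviewQueue()
print(handle_decision({"confidence": 0.95, "price_change": 0.05},
                      OperatingParameters(), queue))  # auto-applied
print(handle_decision({"confidence": 0.60, "price_change": 0.30},
                      OperatingParameters(), queue))  # escalated
```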

Ethics and human bias

AI developers should pay careful attention to ethics and potential bias when developing and testing solutions (including the data and models underpinning them) to ensure they’re free from human bias. The ethics of using AI to help make decisions should be considered before adopting a solution, and all AI decisions should be fully transparent and auditable. Furthermore, anyone subject to AI processing should be made aware of this in accordance with the GDPR, and their data shouldn’t be processed, by AI or otherwise, for purposes other than those for which it was originally obtained.
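
One simple, widely used bias test is to compare the rate of favourable outcomes across groups – a demographic parity check. The sketch below applies the ‘four-fifths’ rule of thumb to hypothetical decision data:

```python
# Illustrative demographic-parity check: compare favourable-outcome rates
# across groups. The decision data and the 80% ("four-fifths") rule of
# thumb are used purely for illustration.
from collections import defaultdict

# Hypothetical AI decisions: (group, favourable_outcome)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome  # True counts as 1

rates = {g: favourable[g] / totals[g] for g in totals}
print("Favourable-outcome rates:", rates)

# Four-fifths rule: the lowest rate should be at least 80% of the highest.
ratio = min(rates.values()) / max(rates.values())
print(f"Parity ratio: {ratio:.2f}", "(potential bias)" if ratio < 0.8 else "(ok)")
```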

The use of AI by businesses is only expected to increase over the coming years. If internal audit functions don’t start upskilling and acting now, their organisation may not be adequately prepared to manage the risks, and realise the opportunities, associated with AI. Alternatively, they may find their organisation is some way down its AI journey before they realise it has set off in the wrong direction without the right foundations in place.

To further discuss the risks associated with AI and how your internal audit and second line technology risks teams can provide assurance over these, get in touch with James Durrant.