The use of artificial intelligence has soared in large businesses. How can it be used by assurance functions to add value to risk and audit operations without compromising security and confidentiality? Alex Hunt explains five potential applications.

Artificial intelligence (AI) can be a huge asset in risk and assurance: increasing accuracy and reliability in generating insights, streamlining processes to increase productivity, and enhancing quality. It can add value to your organisation by creating new ways of working and opening growth opportunities.

The potential applications for AI in your own operations depend on your specific needs and business context. The key concern, however, will always be the security of the information used by AI tools, and we have noted precautions below. As the full risks of AI are still being understood, it is up to every organisation to assess its own possible exposures.

1 Natural language processing in data analysis  

Data comes in many types and formats: written language in documents, tables and spreadsheets, or spoken language in recordings and transcripts. Gathering these different kinds of data and examining them effectively is difficult to do consistently by hand. Natural language processing (NLP) can handle diverse and complex data formats and turn them into useful inputs for AI tools. By organising, summarising and analysing data, NLP can enhance the quality of reviews and audits and so increase business value.

There are several ways to apply intelligent document analysis techniques using NLP, including: 

Gathering information from large volumes of documents

With optical character recognition (OCR), large volumes of documents such as contracts, invoices and reports can be converted from scanned images to text and sorted by NLP against specified topics and criteria. Unstructured data such as emails, images and free text can be handled without manually reviewing each document. The extracted information can then be used to verify the accuracy and frequency of the transactions under review without lengthy document examination.
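
A minimal sketch of this idea is shown below, assuming the Tesseract OCR engine and the pytesseract and Pillow packages are available; the file names and topic keywords are purely illustrative, not part of any specific product.

```python
# Minimal sketch: OCR scanned documents and sort the extracted text by topic keywords.
# Assumes the Tesseract engine plus the pytesseract/Pillow packages are installed;
# file names and topic keywords below are illustrative assumptions.
from PIL import Image
import pytesseract

TOPICS = {
    "contracts": ["agreement", "party", "term", "termination"],
    "invoices": ["invoice", "amount due", "vat", "payment"],
}

def extract_text(image_path: str) -> str:
    """Convert a scanned page to plain text."""
    return pytesseract.image_to_string(Image.open(image_path))

def classify(text: str) -> str:
    """Assign the document to the topic whose keywords appear most often."""
    lowered = text.lower()
    scores = {topic: sum(lowered.count(k) for k in kws) for topic, kws in TOPICS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

for path in ["scan_001.png", "scan_002.png"]:  # hypothetical scanned pages
    print(path, "->", classify(extract_text(path)))
```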

Comparison of documents to detect similarities and duplication

NLP can also compare documents, flagging similarities and possible duplicates based on the key features it identifies. This makes it easier to review documents and to select corroborating files as audit evidence.
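
As a rough illustration, similarity can be estimated with TF-IDF vectors and cosine similarity; the document texts and the 0.9 threshold below are illustrative assumptions, and real tools use richer features.

```python
# Minimal sketch: flag near-duplicate documents with TF-IDF cosine similarity.
# Document contents and the similarity threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "contract_a.txt": "Services agreement between Alpha Ltd and the supplier...",
    "contract_b.txt": "Services agreement between Alpha Ltd and the supplier...",
    "invoice_17.txt": "Invoice 17: amount due 4,200 for March consultancy...",
}

names = list(documents)
matrix = TfidfVectorizer(stop_words="english").fit_transform(documents.values())
similarity = cosine_similarity(matrix)

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if similarity[i, j] > 0.9:  # tune the threshold to your corpus
            print(f"Possible duplicate: {names[i]} <-> {names[j]} ({similarity[i, j]:.2f})")
```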

Ask questions over your data sources

Large language models (LLMs) are a form of AI that can imitate human language understanding and handle large amounts of data such as transaction histories, recorded meetings or inventory levels. They are particularly useful when the data is unstructured, for example plain text or social media posts. AI tools built on them let you query your data in natural language and receive an instant answer. Incorporating these kinds of insights from NLP analysis into risk management processes, with human supervision, enables prompt action and improves your assurance capability.
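
One way this can look in practice is sketched below, using the OpenAI Python client as an example of an LLM API; the model name, the transaction extract and the question are illustrative assumptions, and the precautions in section 4 about sending sensitive data to public services apply.

```python
# Minimal sketch: ask a natural-language question over a small data extract.
# The OpenAI client is one example of an LLM API; model name and data are illustrative.
# Do not send sensitive corporate or client data to a public service.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

transactions = """date,supplier,amount
2024-03-01,Alpha Ltd,4200
2024-03-02,Alpha Ltd,4200
2024-03-05,Beta GmbH,180000
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are assisting an internal audit review."},
        {"role": "user", "content": f"Here is a transaction extract:\n{transactions}\n"
                                    "Which entries look unusual and why?"},
    ],
)
print(response.choices[0].message.content)
```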

2 Predictive analytics

Predictive analytics uses machine learning to extract useful trends, patterns, and behaviour from historical datasets. It can provide deep insights into key risk indicators. With data mining and statistics, predictive analytics can help with risk assessment and testing of controls. It can also reveal current and future risks, and help prevent major problems before they happen.

Risk and assurance teams can make full use of their data by incorporating predictive analytics into fraud assessment. Combining the ability to process large volumes of unstructured data with machine learning makes it possible to detect unusual patterns of behaviour and suspicious risk profiles across extensive data sets, triggering investigation where needed.
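
A minimal sketch of anomaly-based flagging is shown below, using an isolation forest; the features (amount, posting hour, round-number flag) and the contamination rate are illustrative assumptions, and real fraud models need careful feature design and human review.

```python
# Minimal sketch: flag unusual transactions for investigation with an IsolationForest.
# Feature choices and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, hour_posted, is_round_number]
transactions = np.array([
    [420.50, 10, 0],
    [380.00, 11, 0],
    [99_000.00, 2, 1],   # large, posted at night, round value
    [415.75, 14, 0],
    [402.10, 9, 0],
])

model = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 marks an outlier

for row, flag in zip(transactions, flags):
    if flag == -1:
        print("Investigate:", row)
```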

We've developed a flexible solution embedded with predictive analytics for critical business processes.


3 Risk intelligence

AI can improve the quality and precision of vital risk information by analysing data automatically and extensively. AI can assist risk and assurance teams by collecting and integrating data from internal and external sources, providing a wider perspective of the whole enterprise.

Here are some of the main ways AI can be used to develop risk intelligence.

Threat analysis

In addition to using machine learning to predict adverse events, as mentioned above, data from many different sources can be drawn on to characterise critical situations that could harm the organisation. These sources include social media, news articles and cyber alerts. Companies can use AI to rapidly detect possible threats and to examine patterns and trends in the data.
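
As a rough sketch of the idea, a pretrained zero-shot classifier can screen incoming headlines against threat categories; the headlines and candidate labels below are illustrative assumptions, and the default model is downloaded on first use.

```python
# Minimal sketch: screen news headlines for potential threats with a zero-shot classifier.
# Headlines and candidate labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

headlines = [
    "Major logistics supplier hit by ransomware attack",
    "Regulator announces new reporting requirements for Q3",
    "Local football club wins regional championship",
]

labels = ["cyber threat", "regulatory change", "no relevance to the business"]

for headline in headlines:
    result = classifier(headline, candidate_labels=labels)
    print(f"{result['labels'][0]:<32} {headline}")  # top-ranked label first
```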

Risk score analysis

Contextual data can help to compare risks and rank them by importance. Companies can use AI to score the threats they identify and use these scores to decide which risks need more attention and resources.
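
The prioritisation step itself can be very simple, as in the sketch below, which multiplies likelihood by impact on a 1-5 scale; the risks and scales are illustrative assumptions, and in practice an AI model would estimate the inputs from the contextual data described above.

```python
# Minimal sketch: combine likelihood and impact into a simple prioritisation score.
# Risk names, scales and scores are illustrative assumptions.
risks = [
    {"name": "Supplier data breach",  "likelihood": 4, "impact": 5},
    {"name": "Invoice fraud",         "likelihood": 3, "impact": 4},
    {"name": "Policy non-compliance", "likelihood": 2, "impact": 3},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]  # both on a 1-5 scale

# Highest scores first: these risks get attention and resources first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{risk['score']:>2}  {risk['name']}")
```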


4 Researching content for audit programmes

Developing an audit programme is a crucial part of any audit engagement. Using LLMs such as ChatGPT or Microsoft Copilot can make the process more efficient, reducing time and resources by automating tasks. Auditors can use them for research and to tailor their audit plans to a particular scope and set of objectives. However, this is only a preliminary step: co-source expertise and subject matter expert (SME) knowledge are still needed for customisation and best practice.

For example, ChatGPT can produce a simple purchase-to-pay (P2P) control audit plan covering objectives, scope, risk analysis, control review, testing methods, results, recommendations, reporting and follow-up actions. The format supports a comprehensive control assessment but will need adjustment for each organisation.

Benefits of using LLMs to assist with audit programme development

  • Valuable learning opportunities for less experienced auditors: junior auditors can use AI tools to save time and accelerate knowledge acquisition, enhancing their skill sets
  • Ready access to information and guidance, supporting auditors’ professional development and expertise in the auditing field

Precautions: auditors need to be mindful of the following points when using AI solutions

  • Public AI tools may use the data provided by users for further training, so auditors should take care not to enter sensitive corporate or client information
  • If AI solutions are implemented in-house, auditing functions need to ensure that the systems are trained well using adequate datasets and that the training is continuous
  • Review and adjust AI-generated audit programmes to align with the specific goals and scope of the audit
  • Exercise due diligence when using LLM tools for internal audit, and do not let convenience become an excuse for negligence

5 Reviewing code

By using AI, auditors can examine computer code faster and more reliably than by checking it manually. This is particularly useful for internal audits of important processes or automated controls that involve large or complicated code repositories. AI can support this work in several ways:

Analysing complex code for potential weaknesses

AI tools can automatically scan and analyse code to find security issues, mistakes or patterns that indicate potential problems. This helps auditors to address weaknesses efficiently, making their work more productive and freeing them to focus on higher-value tasks.
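
One possible pattern is sketched below: sending a code fragment to an LLM for review. The OpenAI Python client, model name and code snippet are used purely as an illustration, and proprietary code should not be sent to a public service unless your security policy allows it.

```python
# Minimal sketch: ask an LLM to flag potential weaknesses in a code fragment.
# Client, model name and the snippet under review are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

snippet = '''
def approve_payment(user, amount):
    query = "UPDATE payments SET status='approved' WHERE user='" + user + "'"
    db.execute(query)   # no input validation, no authorisation check
'''

review = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Review this code for security weaknesses and control gaps, "
                    f"and list them briefly:\n{snippet}"},
    ],
)
print(review.choices[0].message.content)
```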

Comparing code with internal standards and good practice

AI can check code against coding standards, legal requirements or internal policies to maintain compliance. Doing this regularly improves the precision with which coding defects, risks and compliance issues are detected. This is particularly valuable for complex or challenging codebases, as it helps auditors spot potential security problems, coding errors or deviations from compliance requirements efficiently.

Continuous monitoring

AI-enabled applications can constantly scan code repositories (systems that store and version-control computer code), notifying auditors of any alterations, security risks, or policy breaches as they happen.
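
A minimal sketch of the monitoring loop is shown below, polling the public GitHub REST API for new commits; the repository name is hypothetical, and a production setup would more likely rely on webhooks, authentication and proper alerting rather than a polling loop.

```python
# Minimal sketch: poll a repository for new commits and surface them for review.
# Repository name, polling interval and the "review" step are illustrative assumptions.
import time
import requests

REPO = "example-org/critical-controls"   # hypothetical repository
seen: set[str] = set()

while True:
    commits = requests.get(
        f"https://api.github.com/repos/{REPO}/commits", timeout=10
    ).json()
    for commit in commits:
        sha = commit["sha"]
        if sha not in seen:
            seen.add(sha)
            # In practice this is where an AI review or policy check would run.
            print("New change to review:", sha[:7], commit["commit"]["message"])
    time.sleep(300)  # check every five minutes
```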


For more insight and guidance, get in touch with Alex Hunt and Nikhil Asthana.