Artificial intelligence (AI) can be a huge asset in risk and assurance: it generates more accurate and reliable insights, streamlines processes to boost productivity, and enhances quality. It can add value to your organisation by creating new ways of working and opening up growth opportunities.
The potential applications of AI in your own operations depend on your specific needs and business context. The key concern, however, will always be the security of the information used by AI tools, and we have noted our precautions below. As the full risks of AI are still being understood, it is up to each organisation to assess its own possible exposures.
Data comes in many types and formats: written language in documents, tables and spreadsheets, or spoken language in recordings and transcripts. For many people, gathering and examining such varied data effectively every day is hard. Natural language processing (NLP) can handle these diverse and complex formats and produce useful inputs for AI tools. By organising, summarising and analysing data, NLP can increase business value through higher-quality reviews and audits.
There are several ways to apply intelligent document analysis techniques using NLP, including:
With optical character recognition (OCR), large volumes of documents such as contracts, invoices and reports can be converted from scanned images to text, which NLP can then sort by specified topics and criteria. Unstructured data such as emails, images and free text can be handled without reviewing each document manually. The extracted information can then be used to verify the correctness and frequency of the transactions under review without lengthy document examination.
NLP can also find similarities between documents, marking likely duplicates based on the key features it identifies. This makes it easier to review documents and select corroborating files as audit evidence.
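As a minimal illustration of duplicate detection, the sketch below scores document pairs with cosine similarity over simple word counts. The file names, invoice text and 0.8 threshold are invented for the example; production tools would use richer features such as TF-IDF or embeddings.

```python
from collections import Counter
from math import sqrt

def similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between two documents' word-count vectors."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_duplicates(docs: dict[str, str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return pairs of document names whose similarity meets the threshold."""
    names = sorted(docs)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if similarity(docs[x], docs[y]) >= threshold
    ]

invoices = {
    "inv_001.txt": "invoice 1042 supplier acme total 5000 gbp due march",
    "inv_002.txt": "invoice 1042 supplier acme total 5000 gbp due march",
    "inv_003.txt": "contract renewal for cleaning services annual fee",
}
print(flag_duplicates(invoices))  # the two identical invoices are flagged
```

An auditor could use such a pass to shortlist candidate duplicates for manual review rather than as a final judgement.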
Large language models (LLMs) are a form of AI that can imitate human language and handle large amounts of data such as transaction history, recorded meetings or inventory levels. They are particularly useful when data is unstructured, such as plain text or social media posts. AI tools can then let you query your data and get an instant answer. Incorporating these NLP-derived insights into risk management processes, with human supervision, enables prompt action and improves your assurance capability.
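A real LLM assistant would pass relevant passages to a model as context; the toy sketch below shows only the retrieval step, ranking stored snippets against a question by word overlap. The meeting notes and query are invented for the example.

```python
def answer_query(query: str, snippets: list[str]) -> str:
    """Return the stored snippet that best matches the query by word overlap.
    An LLM pipeline would feed the top-ranked snippets to a model; this
    stand-in just picks the best match."""
    q = set(query.lower().split())
    return max(snippets, key=lambda s: len(q & set(s.lower().split())))

meeting_notes = [
    "Q3 board meeting: approved the new supplier onboarding policy.",
    "Inventory review: stock levels for region north fell 12 percent.",
    "IT update: the payments platform migration completes in June.",
]
print(answer_query("when does the payments migration finish", meeting_notes))
```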
Predictive analytics uses machine learning to extract useful trends, patterns, and behaviour from historical datasets. It can provide deep insights into key risk indicators. With data mining and statistics, predictive analytics can help with risk assessment and testing of controls. It can also reveal current and future risks, and help prevent major problems before they happen.
Risk and assurance teams can leverage this capability by incorporating predictive analytics into fraud assessment. Combining the ability to process large volumes of unstructured data with machine learning makes it possible to detect unusual patterns and behaviours, surfacing suspicious risk profiles from extensive datasets and triggering investigation.
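One of the simplest anomaly checks behind this idea is a z-score test: flag transactions whose amount sits far from the mean. The payment values and the threshold below are illustrative, and real fraud models combine many more features than amount alone.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of amounts deviating from the mean by more than
    z_threshold sample standard deviations."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > z_threshold]

# Hypothetical payment run with one suspiciously large item
payments = [120.0, 95.0, 130.0, 110.0, 105.0, 9800.0, 115.0, 125.0]
print(flag_outliers(payments, z_threshold=2.0))  # index of the 9800.0 payment
```

Flagged items would feed an investigation queue rather than an automatic conclusion of fraud.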
We've developed a flexible solution embedded with predictive analytics for critical business processes.
AI can improve the quality and precision of vital risk information by analysing data automatically and extensively. AI can assist risk and assurance teams by collecting and integrating data from internal and external sources, providing a wider perspective of the whole enterprise.
Here are some of the most effective ways AI can be used to develop risk intelligence.
In addition to using machine learning for adverse event prediction as mentioned above, large amounts of data from different sources can be used to describe critical adverse situations that can harm the organisation. These sources could include social media, news articles and cyber alerts. Companies can use AI to rapidly detect possible threats and examine patterns and trends in the data.
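As a toy illustration of this kind of feed monitoring, the sketch below tags news or social snippets with a risk category when they mention a watch-term. The terms, categories and feed items are all invented for the example; production tools would use trained classifiers rather than keyword matching.

```python
# Illustrative watch-terms mapped to risk categories (not a real rule set)
RISK_TERMS = {
    "breach": "cyber",
    "ransomware": "cyber",
    "recall": "operational",
    "lawsuit": "legal",
    "sanction": "regulatory",
}

def scan_feed(items: list[str]) -> list[tuple[str, str]]:
    """Tag each feed item with the risk category of any watch-term it mentions."""
    hits = []
    for item in items:
        lowered = item.lower()
        for term, category in RISK_TERMS.items():
            if term in lowered:
                hits.append((category, item))
    return hits

feed = [
    "Supplier X hit by ransomware attack, systems offline",
    "Local football club wins regional cup",
    "Regulator opens sanction proceedings against industry peer",
]
print(scan_feed(feed))  # only the two risk-relevant items are tagged
```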
Contextual data can help with comparing risks and ranking them by importance. Companies can use AI to give scores to the possible threats that they identify, and use these scores to decide which risks need more attention and resources.
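A common scoring scheme behind such prioritisation is likelihood multiplied by impact; the sketch below ranks threats on that basis. The threat names and 1-5 ratings are hypothetical, and AI-assigned scores would typically be reviewed before driving resource decisions.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood x impact score, each rated 1-5 (maximum 25)."""
    return likelihood * impact

# Hypothetical threats with (likelihood, impact) ratings
threats = {
    "phishing campaign": (4, 3),
    "data centre outage": (2, 5),
    "minor policy gap": (3, 1),
}
ranked = sorted(threats, key=lambda t: risk_score(*threats[t]), reverse=True)
print(ranked)  # highest-scoring threat first
```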
Developing an audit programme is a crucial part of any audit engagement. By using large language models (LLMs), such as ChatGPT or Microsoft Co-Pilot, the process can be made more efficient, reducing time and resources by automating tasks. Auditors can use them for research and to tailor their audit plans to a particular scope and objectives. However, this is only a preliminary step: co-source expertise and SME knowledge are needed for customisation and best practice.
ChatGPT can produce a simple purchase-to-pay (P2P) control audit plan covering audit goals, scope, risk analysis, control review, testing methods, results, recommendations, reporting and follow-up actions. The format supports a comprehensive control assessment but will need adjusting for different organisations.
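One way to keep such requests consistent is to assemble the prompt programmatically from the sections the plan should cover. The wording and section list below are illustrative assumptions, not a tested or recommended prompt, and the output would still need SME review.

```python
# Sections a draft audit programme might cover (illustrative list)
SECTIONS = [
    "objectives", "scope", "risk analysis", "control review",
    "testing methods", "results", "recommendations", "reporting", "follow-up",
]

def build_audit_prompt(process: str) -> str:
    """Assemble a prompt asking an LLM for a draft audit programme."""
    section_list = "; ".join(SECTIONS)
    return (
        f"Draft an internal audit programme for the {process} process. "
        f"Structure the answer under these headings: {section_list}. "
        "Flag any assumptions so a subject-matter expert can review them."
    )

print(build_audit_prompt("purchase-to-pay (P2P)"))
```

The returned string would then be sent to the LLM of choice through its own API or chat interface.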
Benefits of using LLMs to assist with audit programme development
Precautions: auditors need to be mindful of a few cautionary points while using AI solutions
By using AI, auditors can examine computer code faster and more reliably than by checking it manually. This is useful for internal audits of important processes or automated controls that involve large or complicated code repositories. AI can support in more ways, such as:
AI tools can automatically scan and analyse code to find security issues, mistakes, or patterns that indicate potential problems. This helps auditors address weaknesses efficiently, making their work more productive and freeing them to focus on higher-value tasks.
AI can verify code against coding standards, laws or internal policies to uphold compliance. Done regularly, this improves the precision with which coding defects, risks and compliance issues are detected. It is especially beneficial for complex or challenging codebases, helping auditors efficiently spot potential security problems, coding errors or deviations from compliance requirements.
AI-enabled applications can constantly scan code repositories (systems that store and control computer code), notifying auditors of any alterations, security risks, or policy breaches as they happen.
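The approaches above can be pictured with a rule-based scan, the simplest relative of what such tools do: match each line of source against a set of risk patterns. The three rules and the sample snippet are invented for illustration; real scanners ship far richer, context-aware rule sets.

```python
import re

# Illustrative rules only, not a production rule set
RULES = {
    "hardcoded secret": re.compile(r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.I),
    "use of eval": re.compile(r"\beval\("),
    "bare except": re.compile(r"except\s*:"),
}

def scan_code(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every line matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = '''password = "hunter2"
result = eval(user_input)
try:
    run()
except:
    pass
'''
print(scan_code(sample))  # each finding points the auditor at a line to review
```

Hooked into a repository's change events, the same scan could run on every commit to support the continuous monitoring described above.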
For more insight and guidance, get in touch with Alex Hunt and Nikhil Asthana.