AI in cyber security: Are you up to speed?

By:
Miles Davis
AI is booming, with new use cases every day, but many financial services firms aren’t making the most of the opportunities available. Manu Sharma and Miles Davis look at how to leverage AI in cyber security, for both preventative and responsive approaches.
The financial services sector remains a high-profile target for cyber criminals, and the threat landscape is increasingly complex with the use of AI-powered cyber-attacks. As such, many firms are asking their cyber security teams to match these activities and make good use of AI for defensive purposes. But it’s not always clear where to start, partly because many firms don’t have the necessary skills to identify potential use cases or, where these use cases are clear, to put them into practice. Keeping pace with the evolving threat landscape is essential, or firms will face greater cyber threat exposure, risking customer data and increasing the potential for service outages.

Meeting regulatory requirements

Successful cyber-attacks are a significant problem for all organisations, with expensive recovery work and lasting reputational damage. For the financial sector, there’s also operational resilience to consider, and firms need to be able to resolve outages within pre-defined impact tolerances. While the onus is on restoring services promptly, that doesn’t absolve firms of their need to actively prevent cyber breaches. This is also important in the context of Consumer Duty, where a successful attack due to poor cyber risk management could result in customer harm, and lead to the implementation of remedial plans to compensate customers for their loss and distress.

Financial services firms need to use all available tools, including AI, to strengthen their cyber security posture and demonstrate an appropriate control environment. This includes both effective cyber risk management and a good control framework for the use of AI itself (including robust oversight and challenge over AI use by third-party organisations).

Threat intelligence and predictions

AI can proactively identify potential cyber risks by analysing both global intelligence and firm-specific data for a more cohesive view of the threat landscape. Natural language processing (NLP) can assess dark web sources, cyber security reports and industry databases for greater horizon scanning and signal detection. AI algorithms can also analyse patterns from past incidents to determine potential attack vectors.
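To make the horizon-scanning idea concrete, the sketch below shows a deliberately simple Python pass over raw threat-feed text, extracting CVE identifiers and mentions of tracked threat actors. The feed items, watchlist names and regular expression are illustrative assumptions; a production pipeline would draw on curated intelligence sources and a full NLP stack rather than keyword matching.

```python
import re
from collections import Counter

# Hypothetical watchlist of threat actor / malware names a firm tracks.
WATCHLIST = {"lockbit", "blackcat", "akira"}
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,7}")

def scan_feed(documents: list[str]) -> dict:
    """Extract CVE identifiers and watchlist mentions from raw feed text."""
    cves = Counter()
    mentions = Counter()
    for doc in documents:
        cves.update(CVE_PATTERN.findall(doc))
        lowered = doc.lower()
        mentions.update(name for name in WATCHLIST if name in lowered)
    return {"cves": cves.most_common(), "actors": mentions.most_common()}

# Illustrative feed items, standing in for scraped reports or advisories.
feed = [
    "Advisory: LockBit affiliates exploiting CVE-2024-1709 in remote access tools.",
    "Industry report notes a rise in Akira activity targeting financial services.",
]
print(scan_feed(feed))
```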

Meanwhile, forward-looking models can assess the likelihood of specific threats (such as ransomware or phishing) based on historical data, behavioural indicators and real-time analysis. Firms can also apply learning models to detect emerging trends or vulnerabilities specific to the organisation’s own infrastructure.

Drawing on the above, AI can then assign risk scores and prioritise potential threats, considering their sophistication, impact and likelihood. This can help firms allocate resources towards high-priority areas and free up skilled personnel to focus on more challenging activities.
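The prioritisation step can be pictured as a weighted scoring function. The sketch below is a minimal Python illustration, assuming notional 0–1 scales and weights; a real scheme would be calibrated to the firm’s own risk appetite and data.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float      # 0-1, from forward-looking models
    impact: float          # 0-1, business impact assessment
    sophistication: float  # 0-1, from threat intelligence

# Illustrative weights, not a recommended calibration.
WEIGHTS = {"likelihood": 0.4, "impact": 0.4, "sophistication": 0.2}

def risk_score(t: Threat) -> float:
    """Combine the three factors into a single priority score."""
    return (WEIGHTS["likelihood"] * t.likelihood
            + WEIGHTS["impact"] * t.impact
            + WEIGHTS["sophistication"] * t.sophistication)

threats = [
    Threat("phishing campaign", likelihood=0.8, impact=0.6, sophistication=0.4),
    Threat("ransomware", likelihood=0.3, impact=0.9, sophistication=0.7),
]
# Highest-scoring threats first, so resources go to the priority areas.
for t in sorted(threats, key=risk_score, reverse=True):
    print(f"{t.name}: {risk_score(t):.2f}")
```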

AI-driven detection and monitoring

Perhaps the most mature use of AI in cyber security is real-time threat and anomaly detection across networks and systems. Machine learning models can monitor network traffic and reduce the potential for distributed denial-of-service (DDoS) attacks and data exfiltration. Firms can implement user and entity behaviour analytics (UEBA) to identify where an account or application is acting in an unusual way, indicating that it could be compromised. This is further supported by AI-driven detection of suspicious activity on end-user devices, such as laptops and mobiles, and Internet of Things (IoT) devices.
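As a minimal sketch of this kind of monitoring, the example below trains scikit-learn’s isolation forest on baseline network-flow features and flags outliers in new traffic. The three features and the synthetic baseline are illustrative assumptions; real deployments work on much richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Illustrative baseline flows: [bytes sent, packets, distinct destination ports].
baseline = rng.normal(loc=[5_000, 40, 3], scale=[1_000, 10, 1], size=(2_000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New observations: one typical flow and one resembling data exfiltration.
new_flows = np.array([
    [5_200, 42, 3],        # typical
    [900_000, 5_000, 60],  # large transfer fanning out across many ports
])
# predict() returns 1 for inliers and -1 for anomalies.
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY" if label == -1 else "ok"
    print(flow, status)
```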

Automated learning, behavioural modelling and anomaly detection in Privileged Access Management also allow firms to proactively identify inappropriate access to confidential information. These types of detection are vital, with CrowdStrike’s 2025 Global Threat Report highlighting that up to 79% of attacks it detected in 2024 were achieved without malware and were typically the result of compromised accounts.

While the above are powerful tools in their own right, firms can combine them with identity and access management systems (IAMs) to automatically adjust permissions and flag exceptions in real time.
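A minimal sketch of that feedback loop might look like the following, where a UEBA-style anomaly score drives an automated access decision. The thresholds and actions are assumptions for illustration; a real integration would call the IAM vendor’s own APIs.

```python
# Hypothetical thresholds; a real deployment would tune these empirically.
REVIEW_THRESHOLD = 0.6
REVOKE_THRESHOLD = 0.9

def handle_session(user: str, anomaly_score: float) -> str:
    """Map a behavioural anomaly score to an access decision."""
    if anomaly_score >= REVOKE_THRESHOLD:
        return f"revoke session for {user} and alert the SOC"
    if anomaly_score >= REVIEW_THRESHOLD:
        return f"step down {user} to read-only and flag for review"
    return f"no action for {user}"

print(handle_session("svc-payments", 0.93))
print(handle_session("j.smith", 0.65))
```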

Automated incident response and recovery

In the event of a cyber breach, firms need to secure their systems and restore services promptly, in line with operational resilience obligations. While most financial services firms will have an incident response playbook, many are missing opportunities to automate those actions based on pre-set trigger events. Automated responses include blocking malicious IP addresses, locking compromised accounts and isolating infected devices. Taking these practices a step further, incident response teams can use decision-tree AI models to base actions on incident context and regulatory obligations.
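The sketch below illustrates trigger-based automation with a simple rules table mapping incident context to pre-approved playbook actions. The triggers and actions are hypothetical, and in practice this logic would live in a SOAR platform rather than standalone code.

```python
# Hypothetical playbook: (event type, severity) -> ordered response actions.
PLAYBOOK = {
    ("malicious_ip", "high"): ["block IP at firewall", "open incident ticket"],
    ("credential_compromise", "high"): ["lock account", "force password reset",
                                        "notify user"],
    ("malware_detected", "critical"): ["isolate device from network",
                                       "snapshot disk for forensics"],
}

def respond(event_type: str, severity: str) -> list[str]:
    """Return the pre-approved actions for a trigger event, if any."""
    return PLAYBOOK.get((event_type, severity), ["escalate to analyst"])

for action in respond("credential_compromise", "high"):
    print(action)
```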

Intelligent threat containment is a key element of a modern cyber-resilience approach, using AI to identify the affected systems, determine attack spread, and proactively isolate critical assets while avoiding operational disruptions. Good integration with security orchestration, automation and response (SOAR) tools can automatically apply pre-defined remediation protocols, speeding up cyber incident response times.

Turning our attention to cyber incident recovery, AI-based analytics can carry out root cause analysis to identify the attack vectors used and the vulnerabilities exploited. NLP tools can further support this area by generating comprehensive incident reports for compliance, including insights on attack patterns and suggested preventive measures moving forward.

Boosting training and awareness

Good use of AI can support robust cyber security training, simulate attacks and reinforce a good security culture. Phishing remains one of the most common attack vectors, and firms can use AI to generate personalised phishing simulations, based on known tactics and real-world threats.

Combining this approach with behavioural analysis can identify individuals who may be at higher risk of security breaches, for example those working with sensitive data, and tailor training campaigns accordingly. Cyber criminals often weaponise AI-powered approaches to identify soft targets and ‘whales’ – high-profile targets for spear phishing. Firms can use the same technologies to get ahead of the risks, using AI to curate interactive security awareness programmes that provide real-time feedback to reinforce good practices and increase engagement.
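As an illustration of how behavioural signals might drive a tailored campaign, the sketch below assigns a simulation difficulty tier from a user’s data access and past click-through history. The profile fields and tiers are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    handles_sensitive_data: bool
    past_click_rate: float  # fraction of prior simulations clicked

def simulation_tier(user: UserProfile) -> str:
    """Pick a phishing-simulation difficulty tier for a user."""
    if user.handles_sensitive_data and user.past_click_rate > 0.2:
        return "targeted spear-phishing scenario plus refresher training"
    if user.handles_sensitive_data:
        return "role-specific scenario"
    return "standard awareness scenario"

for u in [UserProfile("payments analyst", True, 0.3),
          UserProfile("facilities", False, 0.1)]:
    print(u.name, "->", simulation_tier(u))
```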

Continuous learning and threat adaptation

Good cyber security is an ongoing process, and firms need to continually retrain and fine-tune their AI models with new data to make sure they’re up to date. This includes adversarial machine learning to test AI models against simulated cyber-attacks and emerging threats. Firms can also draw on post-incident reviews, insights from industry-wide forums and regulatory bodies, and shared data to improve AI effectiveness.
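A minimal sketch of that retrain-and-test cycle is shown below: a candidate anomaly-detection model is refitted on recent data and only promoted if it still catches a held-out set of simulated attack samples. The synthetic data, detection metric and acceptance threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def evaluate(model, attack_samples: np.ndarray) -> float:
    """Fraction of simulated attacks the model flags as anomalous."""
    return float((model.predict(attack_samples) == -1).mean())

# Synthetic stand-ins for a rolling window of traffic and red-team samples.
recent_traffic = rng.normal(loc=[5_000, 40], scale=[1_000, 10], size=(1_000, 2))
simulated_attacks = rng.normal(loc=[80_000, 900], scale=[5_000, 50], size=(50, 2))

candidate = IsolationForest(contamination=0.02, random_state=0).fit(recent_traffic)
detection_rate = evaluate(candidate, simulated_attacks)

# Promote the retrained model only if it still meets the detection bar.
MIN_DETECTION_RATE = 0.95  # illustrative acceptance threshold
if detection_rate >= MIN_DETECTION_RATE:
    print(f"promote candidate (detection rate {detection_rate:.0%})")
else:
    print(f"keep current model (detection rate {detection_rate:.0%})")
```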

In the rush to move forward, it’s also important to be aware of threats from within the AI model itself. There have already been instances of poisoned AI models being deployed in commercial settings, with in-built backdoors and vulnerabilities for attackers to take advantage of, including breaking out of the data silos used for training purposes. The AI itself then becomes a new attack surface for cyber criminals, and many businesses may benefit from expert support to build and deploy models safely.

Regulatory compliance and reporting automation

AI can help firms maintain regulatory compliance, including cyber and data-focused regulations such as the Payment Card Industry Data Security Standard (PCI-DSS) and the General Data Protection Regulation (GDPR). Machine learning algorithms can enable risk management dashboards to flag exceptions in real time, streamline reporting processes and reduce the potential for regulatory breaches.
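As a simple illustration of exception flagging, the sketch below checks data records against a retention policy of the kind PCI-DSS and GDPR encourage firms to enforce. The column names and 90-day window are illustrative assumptions, not regulatory figures.

```python
from datetime import datetime, timedelta, timezone

import pandas as pd

RETENTION_DAYS = 90  # illustrative policy window, not a regulatory figure

# Stand-in for a live data inventory feed.
records = pd.DataFrame({
    "record_id": [1, 2, 3],
    "contains_card_data": [True, True, False],
    "created": pd.to_datetime(["2024-01-10", "2025-05-01", "2023-11-02"], utc=True),
})

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

# Flag cardholder-data records held beyond the policy window.
exceptions = records[records["contains_card_data"] & (records["created"] < cutoff)]
print(exceptions[["record_id", "created"]])
```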

As the regulatory landscape evolves, NLP can also scan for new guidance and automatically adjust compliance settings. However, as with all aspects of AI and machine learning, it’s essential to make sure there is appropriate governance and oversight in place to sense-check those activities.

Getting started with AI in cyber security

As its use becomes more widespread, AI is becoming an increasingly important element of the cyber threat landscape. Firms that continue to rely on manual cyber security processes may struggle to keep pace with AI-powered attacks, increasing the potential for a security breach. Taking early action to embed AI in cyber security can make a significant difference to all financial services firms and support regulatory compliance. However, it relies on good leadership and clear alignment with wider organisational goals.

To get started, firms can create a cyber risk committee of key stakeholders – including senior executives, IT heads, data scientists and compliance leads – to define AI’s role in cyber security. This includes creating AI policies covering usage, key constraints, data privacy and alignment with regulatory standards, such as Consumer Duty, operational resilience and GDPR.

The preferred approach will need to align with the firm’s broader objectives, risk appetite and risk management frameworks. As with any other risk management approach, it’s vital to embed effective governance processes, establish metrics for success and track performance. This will support continual improvement plans and help maintain a robust cyber security environment in the long term.

For insight and guidance on the use of AI in cyber security for financial services, contact Manu Sharma or Miles Davis.