
Secure AI: National Cyber Security Centre guidelines

Artificial intelligence (AI) has the potential to transform how firms operate, but does come with real risks. Manu Sharma and Ankur Aeran look at the newly released guidelines for secure AI and explain what firms need to consider.

In an era marked by unprecedented advancements in AI, the transformative potential for firms is clear to see. That said, firms must consider how to assure secure AI, develop the technology appropriately, and manage the associated risks.

Protecting data will be one of the key concerns in AI integration. Robust encryption and stringent access controls are necessary for good oversight and allow firms to embrace the technology fully. As AI is still emerging, transparency around AI models remains limited, which could become a major risk for firms if not addressed.

Firms will need to adopt a proactive approach to their framework to ensure they fully understand their AI models and align with regulatory requirements. To protect sensitive information and remain secure in the long term, firms will need a comprehensive strategy to safeguard against AI risks. Regular audits and dialogue with regulators will be crucial as we get to grips with this innovative technology.

Considering the risks

Understanding how to integrate AI first and foremost demands an understanding of the risks it can present. Although the technology has a range of benefits that could elevate firms' ability to succeed, the content produced by AI is only as good as the data it's trained on. In its current state, the technology risks presenting incorrect statements, exhibiting bias, and being corrupted easily if its data is manipulated.

Poorly secured AI tools can leave a firm vulnerable to attacks. Attackers who gain access can tamper with the data the AI model is trained on, skewing its outputs so that they become misleading or biased. An attacker could also craft an input designed to make the model behave in unintended ways, causing it to reveal confidential information or generate offensive content. Firms must therefore understand the importance of a strong AI model, and have the correct tools and oversight in place to ensure it's up to standard and can enhance operational frameworks.
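As a concrete illustration, a first layer of defence against crafted inputs and accidental data leakage can be as simple as screening text on its way into and out of the model. The function names and patterns below are hypothetical placeholders; this is a minimal sketch of the idea, not a complete control, and pattern-matching alone is easy to evade.

```python
import re

# Hypothetical, minimal input/output screening. Real systems need
# layered controls; this only illustrates the general approach.

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

SENSITIVE_PATTERNS = [
    r"\b\d{16}\b",  # 16-digit card-like numbers
]

def screen_input(user_input: str) -> bool:
    """Return True if the input looks safe to pass to the model."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def screen_output(model_output: str) -> str:
    """Redact obviously sensitive strings before output leaves the system."""
    for pattern in SENSITIVE_PATTERNS:
        model_output = re.sub(pattern, "[REDACTED]", model_output)
    return model_output
```

In practice such filters sit alongside, not instead of, access controls and monitoring, and the pattern lists would be maintained as the threat picture evolves.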


NCSC considerations

The National Cyber Security Centre (NCSC) has recently published its guidelines for firms to consider when developing AI technology. These guidelines provide checkpoints throughout the development of AI models to ensure security sits at the centre of the implementation process. The measures are structured into four sections, each looking at a different stage of the AI system development life cycle:

  • Secure design
  • Secure development
  • Secure deployment
  • Secure operation and maintenance

These steps address how firms can securely implement AI. Following them correctly is necessary to ensure AI systems operate as intended without revealing sensitive data. To meet best practice, firms need to understand how to follow these steps in practice and develop safe, secure AI tools to enhance their services.

Secure design

Secure AI starts with a sound design. Firms need to understand the risks, carry out threat modelling, and know the specific components of their AI systems to ensure the technology meets their needs effectively and safely. Educating your team on how the AI will operate internally is important to stay knowledgeable about the threats and risks of AI systems.

Practical actions

Understand the security threats and risks to AI systems, and weigh the security benefits against the data trade-offs before embracing the technology fully.


Secure development

Once you establish the design, you need to consider how to integrate it into your current processes. You should assess and monitor the security of your AI supply chain across its life cycle, and ensure that your third-party suppliers align with the standards of your organisation.

Practical actions

Buy software components from verified sources, and minimise data risks across the supply chain and throughout the development of your AI models.
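One simple, concrete form of supply-chain verification is checking a downloaded component (for example, a model weights file) against a checksum published by the supplier before it enters your build. The sketch below assumes a SHA-256 digest is published alongside the artefact; the function names are illustrative.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Reject any component whose digest doesn't match the published value."""
    return sha256_of(path) == expected_digest
```

A mismatch should block the component from the pipeline and trigger investigation, since it may indicate tampering in transit or at the source.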


Secure deployment

Good infrastructure is the foundation of secure AI models. When deploying your AI systems, you should apply strong security principles to help mitigate cyber attacks and keep your data secure. Reliable incident management procedures and backups will reduce risks and give you the oversight needed to maintain full transparency over your AI systems.

Practical actions

Provide your team with a comprehensive guide to using AI, highlighting the potential limitations and failures of the technology. Outline the security measures in place and what users need to be aware of to mitigate the risk of data leaks.


Secure operation and maintenance

Operating your AI model successfully requires frequent oversight. Once your system is up and running, monitoring your system's inputs will become the priority. You need to log all inputs to the system and assess them to remain fully compliant. You should also collect this information to share and collaborate: as regulators and firms alike become more familiar with AI, it will be important in developing best practice.
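Input logging of this kind can be as lightweight as wrapping each model call so that every exchange is written as a structured, timestamped audit record. The wrapper below is a hypothetical sketch (the names are placeholders, and a real deployment would also handle retention, access control, and redaction of sensitive fields):

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit-logging wrapper: record every model input and
# output so the system's behaviour can be reviewed later.
logger = logging.getLogger("ai_audit")

def audited_call(model_fn, user_input: str) -> str:
    """Call the model and write a structured audit record for the exchange."""
    output = model_fn(user_input)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": user_input,
        "output": output,
    }
    logger.info(json.dumps(record))
    return output
```

Structured records like these are what make later assessment practical: they can be queried for anomalous inputs and shared, in suitably anonymised form, when collaborating on best practice.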

Practical actions

Report vulnerabilities and disclose any failures during implementation to guide other firms and create long-term benefits across the industry.


Moving forward, AI will become a common aspect of everyday operations across industry. That said, it's important to approach AI with care and understand the safeguarding measures needed to build robust model development practices.

Creating a security-aware culture around AI, and understanding the regulatory frameworks to follow, is essential to realising the benefits of AI in the long term and protecting your services. Awareness at this early stage is crucial to developing strong tools and educating your team on where this technology is going.

For more insight and guidance, get in touch with Manu Sharma and Ankur Aeran.

