Yesterday, the NCSC – National Cyber Security Centre – released new AI security guidelines. These are “for providers of any systems that use artificial intelligence… whether… created from scratch or built on top of tools and services provided by others.” In this article, we take a closer look at these guidelines and ponder AI security in general.

NCSC, aka ‘The National Cyber Security Centre’

The NCSC is part of the Government Communications Headquarters (GCHQ), and provides advice, guidance and support on cyber security. It also manages cyber security incidents for the most critical organisations in the UK, the wider public sector, industry, SMEs and the general public. When cyber incidents occur, the NCSC delivers effective incident response to minimise harm and to help with recovery.

AI, aka ‘Artificial Intelligence’

Most of us are aware of AI in 2023 and many of us interact with it on a regular basis. Definition-wise, there is a particularly good piece on artificial intelligence on the IBM website. There you will find thoughts from John McCarthy, who essentially deems AI to be intelligent computer programs that are not necessarily “biologically observable.” IBM then refers to works from Alan Turing, as well as Stuart Russell and Peter Norvig, all of whom offer some very interesting observations.

CRIBB Cyber Security: “Please give me a simple definition of AI.”

ChatGPT: “Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, language understanding, and speech recognition. AI aims to create machines that can mimic human cognitive functions, enabling them to adapt and improve their performance over time.”

NB: We simply could not resist publishing a definition of AI provided by AI.

NCSC Guidelines for secure AI system development

Released on Monday 27th November 2023, the guidelines are for providers of any systems using AI. Regardless of how those systems have been created, the guidelines will help providers ensure they function as intended. Perhaps more crucially, they will safeguard against sensitive data being revealed to unauthorised parties. The NCSC has created the guidelines in response to the huge potential AI has to benefit society, and in recognition of the need for AI to be developed and operated securely and responsibly.

The guidelines have been separated out into four key areas within the AI system development life cycle:

  • Secure design (this covers understanding risks and threat modelling, as well as specific aspects to consider in system and model design)
  • Secure development (this applies to the development stage and covers points including supply chain security and documentation)
  • Secure deployment (this includes points on protecting infrastructure and models from compromise, threat or loss, as well as responsible release)
  • Secure operation and maintenance (this provides guidelines on relevant actions to take once a system has been deployed, including logging and monitoring, and update management)

The new guidelines follow a ‘secure by default’ approach and prioritise taking ownership of security outcomes for customers. They also place high emphasis on embracing radical transparency and accountability. Ultimately, the aim is for security by design to become a top business priority. Read through the guidelines at the NCSC website.

AI Security & key takeaways

The new AI security guidelines from the NCSC will hopefully help to assuage growing fears over AI security in general. This is a critical aspect of managing the risks associated with the increasing integration of artificial intelligence into various aspects of our lives. As AI systems become more sophisticated, they also become potential targets for malicious activity. AI security therefore involves implementing measures to protect AI systems from unauthorised access, data breaches and manipulation. It encompasses the development of robust authentication mechanisms and encryption techniques, helping to prevent attacks aimed at exploiting vulnerabilities in AI algorithms. Additionally, AI security addresses ethical considerations, ensuring that AI applications adhere to privacy regulations and guidelines. As AI continues to evolve, a proactive approach to AI security becomes paramount to fostering trust in these technologies. It also mitigates potential threats to individuals and organisations alike, and these guidelines are a very good starting point.