The UK has taken a significant step forward in AI development by publishing the first global guidelines for ensuring the technology's secure advancement. Demonstrating the UK's leadership in AI safety, agencies from 17 other nations have co-sealed these pioneering guidelines, pledging their endorsement and collaboration. The guidelines represent a concerted effort to raise the cybersecurity standards surrounding artificial intelligence, emphasizing the need for secure design, development, and deployment practices.
The guidelines were crafted by the UK’s National Cyber Security Centre (NCSC), a part of GCHQ, and the US’s Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with industry experts and 21 international agencies and ministries, including representatives from G7 nations and the Global South.
The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design. –CISA Director Jen Easterly
The Guidelines for Secure AI System Development are a collaborative global initiative to strengthen the security practices that govern how artificial intelligence is built and deployed, fostering a safer and more resilient landscape for AI innovation worldwide.
The UK-led guidelines mark a global first, providing a framework for developers of any system that uses AI. They aim to equip developers, whether building systems from scratch or on top of existing tools and services, with the insight needed to make informed cybersecurity decisions at every stage of development.
NCSC CEO Lindy Cameron said:
We know that AI is developing phenomenally, and there is a need for concerted international action across governments and industries to keep up. These guidelines mark a significant step in shaping a truly global, shared understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.
Focusing on a ‘secure by design’ approach, the guidelines emphasize that cybersecurity is not an add-on but a fundamental requirement for the safety of AI systems. The goal is to integrate security from a project’s inception and keep it central throughout development.
The NCSC will host an official launch event to unveil the guidelines, bringing together influential figures from industry, government, and international partners for a panel discussion on the shared challenge of securing AI. Notable participants include representatives from Microsoft, the Alan Turing Institute, and cybersecurity agencies from the UK, the US, Canada, and Germany.