CISA, NCSC Publish Guidelines for Secure AI System Development, Call for "Radical Transparency"

The product of the Bletchley Declaration, the new guidelines set out principles for safe and secure AI development and deployment.

The US Cybersecurity and Infrastructure Security Agency (CISA), working with the UK's National Cyber Security Centre (NCSC) and 21 other agencies from around the globe, has published a set of guidelines for the development of secure artificial intelligence (AI) systems — part of the international agreement known as the Bletchley Declaration.

"“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy," claims Secretary of Homeland Security Alejandro N. Mayorkas in support of the new guidelines.

"The guidelines jointly issued today by CISA, NCSC, and our other international partners," Mayorkas continues, "provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core. By integrating 'secure by design' principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system's design and development. Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology."

The guidelines came about following the AI Safety Summit earlier this year, held in the UK's Bletchley Park — historic home of Station X and a team of codebreakers, including Alan Turing, who worked to break the German Enigma cipher during World War II. As part of the summit, attendees signed a policy paper dubbed the Bletchley Declaration which called for cooperation in addressing "frontier AI risk." Its first deliverable: a set of guidelines on the development of secure AI systems.

"The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment — by governments across the world — to ensure the development and deployment of artificial intelligence capabilities that are secure by design," says CISA director Jen Easterly. "As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices.

"The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future."

The guidelines themselves have been published on the UK's NCSC website and are split into four sections: secure design, secure development, secure deployment, and secure operation and maintenance. Borrowing from the NIST Secure Software Development Framework, they advocate a "secure by default" approach and call for "radical transparency and accountability" and "building organizational structure and leadership so 'secure by design' is a top business priority."

The guidelines are available online and as a downloadable PDF; thus far, none of the major AI companies has publicly committed to adopting the recommendations they contain.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.