Canada Joins US, UK and More for Secure AI Standards


A comprehensive set of guidelines has been issued for providers of artificial intelligence (AI) systems, whether developed independently or built upon existing tools and services. These guidelines aim to ensure that AI systems function as intended, remain available when needed, and protect sensitive data from unauthorized access.

The new guidelines were led by the U.S. and the UK, with 16 other nations, including Canada, signing on.

The document underscores the societal benefits of AI but stresses the need for secure and responsible development, deployment, and operation. It highlights the unique security vulnerabilities of AI systems, which must be addressed alongside standard cybersecurity threats. The guidelines emphasize that security should be a core requirement throughout the AI system’s life cycle, not just during development, to protect systems from hackers.

The guidelines are divided into four key areas, reflecting the stages of the AI system development life cycle. The guidance is fairly broad, without much in the way of specifics:

  1. Secure Design: This section focuses on the design stage, covering risk understanding, threat modelling, and considerations for system and model design.
  2. Secure Development: Guidelines for the development stage include supply chain security, documentation, and management of assets and technical debt.
  3. Secure Deployment: This stage involves protecting infrastructure and models, developing incident management processes, and ensuring responsible release.
  4. Secure Operation and Maintenance: Post-deployment, this section provides guidance on logging and monitoring, update management, and information sharing (see the sketch below).
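To make that last point a little more concrete, here's a rough, hypothetical Python sketch of what structured logging around a model call could look like in practice. The guidelines themselves don't prescribe any code, and the function and field names below are purely illustrative:

```python
import json
import logging
import time
import uuid

# Hypothetical example: the guidelines don't include code, but "logging and
# monitoring" in operation typically means recording enough about each model
# call to spot abuse, drift, or failures later. Names here are illustrative.

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai-inference-audit")


def fake_model(prompt: str) -> str:
    """Stand-in for a real model call, so the sketch runs on its own."""
    return prompt[::-1]


def monitored_inference(prompt: str, user_id: str) -> str:
    """Run inference and emit a structured audit record for monitoring."""
    request_id = str(uuid.uuid4())
    start = time.monotonic()
    output = fake_model(prompt)
    logger.info(json.dumps({
        "event": "inference",
        "request_id": request_id,
        "user_id": user_id,
        "prompt_chars": len(prompt),   # log sizes rather than raw content,
        "output_chars": len(output),   # to avoid leaking sensitive data
        "latency_ms": round((time.monotonic() - start) * 1000, 2),
    }))
    return output


if __name__ == "__main__":
    monitored_inference("Is this system behaving as intended?", user_id="demo-user")
```

The idea is simply to capture enough metadata per request (IDs, sizes, latency) to notice misuse or drift later, without dumping sensitive prompt content into the logs.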

The 20-page document is about making AI secure. But what about when AI takes over the world and threatens humanity (hello, Skynet)?

Back in August, the federal government quietly launched a consultation on AI tech like OpenAI’s ChatGPT. The feds want a Canadian code of practice for generative AI, and it remains to be seen how this will unfold.
