Canada’s AI Code of Conduct Approach Slammed as ‘Secretive’: Critic
Yesterday, François-Philippe Champagne, Minister of Innovation, Science and Industry, appeared before the Standing Committee on Industry and Technology to discuss Bill C-27. The federal minister’s statements and direction were criticized by University of Ottawa law professor Dr. Michael Geist for lacking transparency.
Despite a year-long delay, Champagne’s 12-minute opening statement offered little clarity. He assured the committee that amendments were planned but provided no drafted language, effectively stalling the hearings before they began.
Champagne did propose some changes, particularly on privacy and AI regulations. He pledged to recognize privacy as a fundamental right and to impose stricter rules on children’s privacy.
On the AI front, he promised to define “high impact” systems and establish distinct obligations for AI services like ChatGPT. However, the minister stated that the details would not be available for weeks.
This lack of transparency aligns with the government’s previous approach to privacy and AI reform. Champagne’s appearance came just hours before the release of a non-binding generative AI code of practice, which was negotiated in secret and made public only after an inadvertent posting on a government website, noticed by Geist himself at the time.
The committee now faces a dilemma: proceed with hearings based on an outdated bill, or wait for the government to release draft amendments. Given the significant changes Champagne has proposed, there are growing calls to suspend the hearings until the draft amendments are public.
“Canadian AI policy would be more credible if it were not drafted largely in secret, if officials weren’t scrambling to consult excluded stakeholders, and if the text of planned Bill C-27 amendments were released before witnesses appear,” added Geist on Wednesday.
Canadian AI policy would be more credible if it were not drafted largely in secret, if officials weren’t scrambling to consult excluded stakeholders, and if you released text of planned Bill C-27 amendments before witnesses appear. https://t.co/A1COTgyOIi https://t.co/jIOsJLLFr9
— Michael Geist (@mgeist) September 27, 2023
This sentiment underscores the need for greater transparency and inclusivity in the legislative process, particularly for a bill with far-reaching implications like C-27.
The criticism follows ISED’s announcement today of its Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which takes effect immediately.
According to ISED, the code is built on six core principles:
- Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.
- Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety, including addressing malicious or inappropriate uses.
- Fairness and equity: Organizations will assess and test systems for biases throughout the lifecycle.
- Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.
- Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.
- Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.
Large language models, such as the one behind OpenAI’s ChatGPT, are evolving at a furious pace, and governments can’t seem to keep up with the technology, let alone regulate it.