On June 16, 2022, the Canadian government introduced Bill C-27, which proposes three Acts meant to modernize federal privacy laws. I reviewed the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act in previous blogs. I’m now going to focus on the Artificial Intelligence and Data Act (“AIDA”), found in Part 3 of Bill C-27.
AIDA is the first attempt by the Canadian government to enact laws specific to artificial intelligence (“AI”). Much remains to be seen as to how it will regulate AI, since Bill C-27 delegates much of the substance of the legislation to regulations that have not yet been drafted (or at least have not yet been released). As we all know, “the devil is in the details,” and we don’t have them yet.
Purpose and Scope
The stated purpose of AIDA is to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements applicable across Canada for the design, development, and use of those systems, and to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.
An “artificial intelligence system” is defined as a technological system that, autonomously or partly autonomously, processes data related to human activities using a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions. AIDA defines “harm” to mean physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual.
AIDA aims to regulate the following activities carried out in the course of international and interprovincial trade and commerce (“regulated activity”):
- processing or making available for use any data relating to human activities for the purpose of designing, developing, or using an artificial intelligence system; and,
- designing, developing, or making available for use an artificial intelligence system or managing its operations.
Business Obligations
A business that carries out a regulated activity using anonymized data will have to establish measures governing how the data is anonymized and how it is used and managed. The details of those required measures will be set out in the regulations.
There are additional obligations for a business that makes a “high-impact system” available for use. What will constitute a “high-impact system” will also be defined in the regulations, so at this point it is unknown which AI systems will be caught by these additional requirements.
The additional obligations that will apply to those making available for use or managing a high-impact system include:
- adopting measures to identify, assess, mitigate and monitor risks of harm or biased output from the AI system; and,
- publishing on a website a description of the AI system that provides several explanations, including the types of content it is intended to generate and the decisions, recommendations or predictions it is intended to make.
The Stick
AIDA gives the Minister power to conduct an audit where the Minister has reasonable grounds to believe the provisions governing regulated activities have been contravened, and to order that a business responsible for a high-impact system cease using it or making it available for use. The Minister may also publish on a website information about contraventions of AIDA, as well as information about an AI system that gives rise to a serious risk of imminent harm. Those who contravene AIDA are subject to administrative monetary penalties and fines.
The Bot(tom) Line
The government has set lofty goals for AIDA. Unfortunately, many of the details needed to determine whether those goals can be met have yet to be released.