No longer something spoken of abstractly in sci-fi movies, Artificial Intelligence (AI) is now regularly used by businesses to increase internal efficiencies and improve the client and user experience of their products and services. So today, on Data Privacy Day, I want to take a closer look at the AI landscape in Canada.
Regulators have taken notice of the integral role that AI is now playing in business operations, and the importance of using it ethically. The Organisation for Economic Co-operation and Development (OECD) has identified over 700 AI policy initiatives in 60 countries.
Canada’s federal and provincial governments are moving forward with policy frameworks to ensure that AI is used responsibly by businesses. Although it failed to pass into law before the last federal election, Bill C-11, which would have amended the Personal Information Protection and Electronic Documents Act, included provisions on algorithmic transparency.
In September 2021, the Quebec government amended its privacy laws to require businesses to provide individuals with information about AI-based decisions made about them.
Earlier this month, the Ontario provincial government released “Beta principles for the ethical use of AI and data enhanced technologies in Ontario”. The principles were developed by the Trustworthy AI team within the Ontario Digital Service. The six principles are meant to “create clarity rather than barriers for innovation that is safe, responsible and beneficial”. The beta principles are:
- Transparent and explainable – a meaningful explanation should be made available when automation is used to make a decision.
- Good and fair – data enhanced technologies are to be designed and operated in a manner that respects societal values and individual rights, and the rule of law.
- Safe – appropriate safeguards should be in place to ensure data enhanced technologies operate as intended throughout their life cycle.
- Accountable and responsible – organizations that deploy AI systems should be accountable to ensure that they operate in accordance with the other principles. Systems should be peer-reviewed or audited regularly.
- Human centric – data enhanced systems should be designed “with a clearly articulated public benefit”; human centred design is encouraged.
- Sensible and appropriate – consideration should be given to the particular sector of society in which the data enhanced technology will be operated.
These principles are broad and open to expansive interpretation, and they offer little practical guidance on how businesses should apply them to their operations. Fortunately, the government has recognized that more work is needed to clarify these principles and is encouraging feedback on them.
What is clear is that regulators are looking to protect the rights of individuals when the private and public sectors use artificial intelligence. Businesses currently using artificial intelligence in their operations need to be aware that regulations are coming. They also need to ensure that their service providers that use artificial intelligence do so in a way that respects individuals’ privacy. Given the open nature of the internet and disparate views on the ethical use of personal data around the world, the use of artificial intelligence may present challenges, especially for small and medium enterprises that do not have a privacy officer in their organization.