Law Commission of Ontario AI report demands government accountability

The Law Commission of Ontario (LCO) has released a major new report on the use of artificial intelligence and automated decision-making in the Canadian justice system. Among its recommendations is that the provincial government should not escape liability for the development and deployment of AI systems, including under new legislation passed in 2019.

‘Accountable AI’ analyses the use of AI and automated decision-making (ADM) systems by government, and includes 19 recommendations addressing bias in AI systems, ‘black-box’ decision-making, due process, and the need for public engagement.

One recommendation that stands out in the comprehensive report from Ontario’s leading law reform agency is that the provincial government should not be immune from tort liability for government AI systems. The report argues that the new Crown Liability and Proceedings Act, 2019 should not be used to bar negligence claims against the government for developing, implementing, deploying and relying on AI systems. That Act expands the scope of Crown immunity beyond policy decisions to the manner in which those decisions are carried out.

Further recommendations include that the provincial government should not deploy high-risk AI or ADM systems until a ‘Trustworthy AI’ framework, established in legislation, has been adopted.

Notably, the report argues that a dedicated framework is required to address AI systems developed or used in the criminal justice system, including facial recognition, which has previously been found to exhibit racial bias.

This report is part of the LCO’s ongoing AI, ADM and the Justice System project, which brings together policymakers, legal professionals, technologists and others to discuss the impact of AI and algorithms on access to justice, human rights and due process.

The full report is available here.