Canada’s Use of AI in Immigration and Refugee Decision-Making Raises Alarms

Researchers Warn of Potential for Breach of Human Rights, Privacy

The University of Toronto’s Citizen Lab recently released a report warning of the potential hazards of the federal government’s use of artificial intelligence (AI) to screen and process immigrant files.

The report’s authors say that the use of AI and similar technologies risks discrimination, as well as serious breaches of privacy and human rights.

Pilot Programs Under Scrutiny

Earlier in 2018, the federal government launched two pilot projects using AI systems to sort through temporary resident visa applications. According to Mathieu Genest, a press secretary and spokesperson for Immigration, Refugees and Citizenship Canada, these systems are used to help “triage” online applications.

Genest added that the AI systems are being used to “process routine cases more efficiently,” and that human immigration officers always make final approval or denial decisions in these cases.

But one of the report’s authors, Petra Molnar, says that Canada has used similar tools for these purposes since at least 2014.

Molnar also cautioned that AI is not as neutral as one might think, citing the potential for user-created bias affecting the decision-making algorithm. Molnar added that making changes to this algorithm can be quite difficult.

“[It’s] clear that without appropriate safeguards and oversight mechanisms, using AI in immigration and refugee determinations is very risky because the impact on people’s lives is quite real,” the report warns.


AI Also Considered for Use in Humanitarian and Compassionate Applications

Other pilot projects have also been considered for immigration-related uses. For instance, in April 2018, the federal government considered using AI or machine learning to sort humanitarian and compassionate applications, as well as pre-removal risk assessments.

Critics, including the report’s authors, argue that because immigration law is discretionary, refugees should not be subject to “technological experiments without oversight.” As the report puts it, “It would be profoundly unfair to interfere with that exercise of independent discretion through a systematic decision-making process which will result in a fettering of discretion.”

Molnar and her co-authors have a list of seven recommendations for the federal government, chief among them the establishment of an independent oversight body. This body would review all uses of automated decision systems by the federal government.

Given that these refugee streams are widely considered a “last resort” for vulnerable groups fleeing conflict, the report argues, relying on a machine to sort and process them interferes with an appropriate, fair and reasonable decision-making process — one that is justifiable, intelligible and sensitive to all the details, context, relevant facts and jurisprudence.

While Immigration, Refugees and Citizenship Canada maintains that these technologies are meant to support, not replace, human decision makers, it remains to be seen whether AI will be used responsibly and without compromising the integrity of the decision-making process.