Deep Dive into the Use of AI for Immigration Processing
Immigration application volumes are constantly rising, and clients increasingly expect quicker and easier interactions. Therefore, IRCC has been using big data and artificial intelligence (AI) to replace some human decision-makers in its immigration system. Of course, this raises concerns about how technology could lead to errors and unfounded assumptions in immigrants’ applications.
In this article, we’ll review the automated decision systems that IRCC has been using to process applications and the guiding principles behind them.
- Overview
- In recent years
- Policy Playbook on Automated Support for Decision-making: 2021 edition
- The Automator’s Handbook
Use of AI for Immigration Processing: Overview
A 2018 report by the University of Toronto’s Citizen Lab suggested that Canada has been experimenting with AI since 2014. “The government has also quietly sought input from the private sector related to a 2018 pilot project for an ‘Artificial Intelligence Solution’ in immigration decision-making and assessments,” says the report. It also mentioned that this included Humanitarian and Compassionate applications and Pre-Removal Risk Assessments, two applications that vulnerable people use as a last resort to remain in Canada.
Earlier that year, federal officials launched a pilot project to have an AI system sort through temporary resident visa applications from China and India. The analytics program helped officers triage online visa applications so that some cases could be processed more efficiently.
According to an IRCC presentation (2020), the department did use “advanced analytics and machine learning to automate a portion of the Temporary Resident Visa (TRV) business process, focusing on online applications from China and India”.

However, according to the same presentation, they were not:
- Automating decisions in business lines such as asylum claims, Humanitarian & Compassionate cases, and Pre-Removal Risk Assessments.
- Using “black box” algorithms that make determinations in unknowable or unexplainable ways.
- Planning to displace the central role of officers in immigration decision-making.
In recent years
In 2019, the Government of Canada released the Directive on Automated Decision-Making, which outlines the responsibilities of federal departments using AI and required full compliance by April 2020. It also released the Algorithmic Impact Assessment, a questionnaire-based tool that helps institutions understand their automated decision systems and assess their impact.
Parsai Immigration Services researched IRCC’s AI pilot projects, but there is not much public information on exactly how IRCC is using AI to deliver programs and services. According to another presentation from the Government of Canada Data Conference: An Integrated Data Community for Building Back Better (February 2021), this is what IRCC did in 2021:

Therefore, our firm obtained IRCC’s policy on automated support for decision-making through an information request, to get a better understanding of the guiding principles behind these projects. Let’s explore it!
Policy Playbook on Automated Support for Decision-making: 2021 edition
The policy, which is a working draft, guides individuals involved in developing and implementing automated systems that support decision-making. It also covers some legal considerations and tips for aligning with administrative, human rights, and privacy law.
The policy targets automated systems that support, in whole or in part, administrative decisions. This includes systems that:
- classify cases according to the level of scrutiny they require;
- flag cases for human review or investigation;
- provide recommendations about whether applications should be approved; or
- render complete decisions.
According to the policy, these systems take on a new role in IRCC’s decision-making model, and the rules they apply “could be derived from sophisticated data analytics, or from interviews with experienced officers”.
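To make the idea concrete, here is a minimal sketch in Python of what explicit triage rules of this kind could look like. The fields, rules, and stream names are invented for illustration and are not IRCC’s actual criteria; the point is that such rules are readable and explainable, and that they only route or flag cases for an officer rather than deciding them.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """Minimal stand-in for an online application record (fields are invented)."""
    documents_complete: bool
    previously_approved: bool
    inconsistencies_found: int

def triage(app: Application) -> str:
    """Route a case into a processing stream using explicit, human-readable rules.

    Nothing here approves or refuses an application; every outcome still goes
    to an officer, and every rule can be stated in plain language.
    """
    # Hypothetical rule: any inconsistency in the file is flagged for closer review.
    if app.inconsistencies_found > 0:
        return "flag for detailed officer review"
    # Hypothetical rule: complete files from previously approved applicants go to a routine stream.
    if app.documents_complete and app.previously_approved:
        return "routine stream"
    # Everything else follows the standard officer workflow.
    return "standard officer review"

print(triage(Application(documents_complete=True, previously_approved=True, inconsistencies_found=0)))
```

Rules like these could come from data analysis or from interviews with experienced officers about the factors they already weigh, which is what keeps them auditable rather than a “black box”.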
“A new policy on automated support for decision-making is an opportunity to ensure that the Department’s thinking keeps pace with the speed of technological change, and that our people and practices continue to deliver a suite of programs equal to the expectations of Canadians and the world.”
12 Guiding Principles
The following set of principles outlines IRCC’s overarching goals:
- The use of AI and automation should deliver a clear public benefit. IRCC should use these tools wherever it can do so responsibly, effectively, and efficiently.
- Humans (not computer systems) are accountable for decision-making, even when decisions are carried out by automated systems.
- Because IRCC’s decisions have significant impacts on the lives of clients and Canadians, the Department should prioritize approaches that carry the least risk.
- Black box algorithms can be useful but cannot be the sole determinant of final decisions on client applications.
- IRCC must recognize the limitations of data-driven technologies and take all reasonable steps to minimize unintended bias.
- Officers should be informed, not led to conclusions.
- Humans and AI play complementary roles. IRCC should strive to sharpen the roles of each.
- IRCC should continually adopt emerging privacy-related best practices in a rapidly evolving field.
- IRCC should subject systems to ongoing oversight, to ensure they are technically sound, consistent with legal and policy authorities, fair, and functioning as intended.
- IRCC must always be able to provide a meaningful explanation of decisions made on client applications.
- IRCC must be transparent about its use of AI. It must provide meaningful access to the system while protecting the safety and security of Canadians.
- IRCC’s use of automated systems must not diminish a person’s ability to pursue recourse.
The Automator’s Handbook
The handbook guides IRCC staff through questions that should be considered at various stages of AI and automation projects, from early exploration to ongoing monitoring once a system is running. In other words, the handbook helps determine whether an automated decision system is a good solution to the problem that staff are trying to tackle.
Furthermore, the handbook analyzes the following areas:
- Automated support for decision-making as a potential solution
- General suitability: In situations where reasonable minds may differ, the handbook doesn’t recommend automation. Conversely, staff can pursue automation if analysis of past decisions has shown that virtually any officer would reach the same conclusion.
- Preliminary diagnostics and impact assessments: at this stage, staff should focus on gathering disaggregated data about clients, analyzing this data for quality and historical bias, and checking their assumptions (a simple check of this kind is sketched after this list).
- Training: this may range from courses on digital government, such as those offered by the Canada School of Public Service’s Digital Academy, to training on privacy and data literacy.
- Partner and stakeholder engagement: staff should deliberately seek views from a diverse group of stakeholders and document their perspectives, just as they would when developing a significant policy or legislative change.
- Planning for design: involves the resources that staff will use for data analytics experimentation and iterative systems development.
- User-centered approach: prior to undertaking a project, staff will need a good understanding of the general operating environment in question.
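As a rough illustration of the preliminary diagnostics mentioned above, the sketch below compares historical approval rates across disaggregated groups. The column names, sample data, and the 20% threshold are assumptions made purely for this example; in a real project, the data, groupings, and thresholds would come from IRCC’s own records and policy choices.

```python
import pandas as pd

# Invented sample of past decisions; a real diagnostic would use disaggregated
# administrative data handled under the applicable privacy requirements.
history = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "C", "C"],
    "approved": [1, 1, 0, 1, 0, 0, 1, 1],
})

# Approval rate and sample size per group, so small groups are not over-interpreted.
rates = history.groupby("group")["approved"].agg(rate="mean", count="size")
print(rates)

# Hypothetical screening rule: a large gap between groups prompts a human review
# of the data before any model is trained on it.
THRESHOLD = 0.20  # assumed value, for illustration only
gap = rates["rate"].max() - rates["rate"].min()
if gap > THRESHOLD:
    print(f"Approval-rate gap of {gap:.0%} exceeds {THRESHOLD:.0%}; review for historical bias.")
```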
- Designing the system
- Model suitability: exploring, mocking up, and testing some alternatives to confirm the initial hypothesis and strengthen the business case.
- Algorithmic Impact Assessment (AIA): completing a preliminary assessment at the design stage to help anticipate risks associated with the project (a simplified scoring sketch follows this list).
- User-centered design: thinking about both the most appropriate way to use technology and about the best way to involve humans.
- Fairness and non-discrimination: thinking carefully about how the addition of automation will change application processing and decision-making.
- Explainability and transparency: as a general rule, explanations should: (1) help clients understand a particular decision, and (2) provide grounds to contest the decision should the client wish to do so.
- Privacy: IRCC should work with a privacy expert, within the advanced analytics teams, to ensure that privacy is considered at every stage.
- Working in the open: involves the release of reports about AI and automation.
- Accountability and security: a review of compliance with existing cybersecurity policies and identified security controls.
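The Algorithmic Impact Assessment mentioned above is a questionnaire whose answers produce a score and an impact level, and higher levels trigger stricter requirements under the Directive on Automated Decision-Making (such as peer review, notice, and human intervention points). The sketch below is a heavily simplified, hypothetical version of that mechanism; the questions, weights, and level thresholds are invented and do not reproduce the official tool.

```python
# Hypothetical, simplified sketch of questionnaire-based impact scoring.
# The questions, weights, and thresholds are invented for this example only;
# the official Algorithmic Impact Assessment defines its own.
answers = {
    "decision_affects_individual_rights": True,
    "system_uses_personal_information": True,
    "decision_is_fully_automated": False,
    "outcome_is_easily_reversible": True,
}

weights = {
    "decision_affects_individual_rights": 3,
    "system_uses_personal_information": 2,
    "decision_is_fully_automated": 4,
    "outcome_is_easily_reversible": -1,  # reversibility lowers the assessed impact
}

raw_score = sum(weights[question] for question, answer in answers.items() if answer)

# Map the score to an impact level; under the real Directive, higher levels carry
# stricter requirements such as peer review and mandatory human intervention points.
if raw_score <= 2:
    impact_level = "I"
elif raw_score <= 5:
    impact_level = "II"
elif raw_score <= 8:
    impact_level = "III"
else:
    impact_level = "IV"

print(f"Raw score: {raw_score}, assessed impact level: {impact_level}")
```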

Image: Part III – An overview of legal considerations and practical tips | Policy Playbook on Automated Support for Decision-making.
- Preparing for launch
This section describes the process for obtaining final approval. It also includes an assessment of the readiness of the users, the system, and the partners.
- Once up and running
The section answers questions like: Is the system still functioning as originally intended? Do any intervening factors point to the need for a review?
The Automator’s Handbook also includes information such as legal considerations and practical tips, a checklist for the directive on automated decision-making, and baseline privacy requirements. Let’s focus on the last one.
Baseline Privacy Requirements
This section starts with a very important statement: in Canada, privacy is considered a human right. The privacy requirements laid out in the document are based on:
- the Privacy Act,
- Treasury Board Secretariat policies,
- directives,
- guidelines, and
- internal IRCC guidance.
Moreover, according to the policy, staff should consider the following when planning, developing, and monitoring any initiative involving data-driven technology:
- Legal Authority: A program must identify the parliamentary authority to collect and use personal information for the specified purposes of the program.
- Notice and informed of purpose: IRCC must notify individuals of the purpose for which their information is being collected, commonly referred to as a ‘privacy notice’.
- Transparency: IRCC must notify past applicants that their information was used to train or build models.
- Explainability: Individuals have a right to know exactly how their personal information was processed through a disruptive technology system.
- Accuracy: IRCC must take all reasonable steps to ensure that personal information used for an administrative purpose is as accurate, up-to-date and complete as possible.
These are just some of the minimum requirements that all projects must meet. We hope to see IRCC’s data scientists, program designers, and policy developers trained to prioritize these ethical considerations in the development of automated support for decision-making systems. Meanwhile, we will keep you informed about other technological advances in Canada’s immigration system.
Related Articles:
IRCC is now using advanced data analytics to process:
- TRV applications submitted from outside Canada, and
- spousal and common-law partner sponsorship applications submitted in Canada
You can also read:
- Chinook: the controversial tool used by IRCC
- Canada can verify how many days immigrants stay in the country