A legal system unprepared

If ethical concerns surrounding AI are not fully addressed, the technology may be called into question before Canada’s courts. Whether they are equipped to handle the fallout remains unclear.

With Canada’s development of AI systems and services quietly forging ahead, fears of privacy violations and discrimination are building toward an inevitable flashpoint: a spate of legal challenges to automated decision-making.

Given that AI has been or will be used to sort through immigrant and visitor applications, analyze immigration litigation data and triage Express Entry applications, any of these decisions could potentially be challenged in a court of law.

Decisions made within Canada’s immigration system have already been brought before the courts – and in some cases, climbed to the top.

In a landmark judgment in 1999, the Supreme Court of Canada overturned a Jamaican-born woman’s deportation order after determining that the immigration officer overseeing her case failed to properly consider the rights of her children. Mavis Baker – who gave birth to four children in Canada and suffered from mental illness – asked the courts to review her failed bid for permanent residency, which she made on humanitarian and compassionate grounds.

In a judgment written by Justice Claire L’Heureux-Dubé, the top court ruled that the interests of Baker’s children were not properly considered, and neither were Baker’s own, given that she “was ill and might not be able to obtain treatment in Jamaica, and would necessarily be separated from at least some of her children.”

“Statements in the immigration officer’s notes gave the impression that he may have been drawing conclusions based not on the evidence before him.”

– Justice Claire L’Heureux-Dubé

The court also found that the officer’s decision gave rise to a reasonable apprehension of bias. The decision was based on notes written by a junior officer handling the case.

“Statements in the immigration officer’s notes gave the impression that he may have been drawing conclusions based not on the evidence before him, but on the fact that the appellant was a single mother with several children and had been diagnosed with a psychiatric illness,” the judgment read.


But how might the court have ruled if it wasn’t an immigration officer who dictated Baker’s fate, but a computer determining a potential immigrant or refugee claimant’s future?

The question is now being raised by Canadian legal experts who wonder whether the federal government is prepared to handle legal challenges to decision-making services powered by artificial intelligence.

One of them is Adam Goldenberg, a Toronto-based lawyer who advises clients about AI issues including ethics and emerging legal frameworks.

Goldenberg says resolving this dilemma begins with the legal system’s definition of “reasonableness”, or expected standards of human conduct.

“What the court looks at when they look at whether a decision was reasonable is a range of factors that give some wiggle room to the decision-maker not to be held to a standard of perfection,” explained Goldenberg.

“What does it mean to be reasonable if a system is making decisions on the basis of assimilating billions of data points at a rate far quicker than any human brain could ever do?”

– Adam Goldenberg

At FWD50, an Ottawa conference on digital government held last November, Goldenberg posed a critical question about this concept to his audience of public servants.

“What does it mean to be reasonable if a system is making decisions on the basis of assimilating billions of data points at a rate far quicker than any human brain could ever do?”

The short answer? We don’t know.

But considering that AI initiatives are rolling out across a number of federal departments, we should.

Goldenberg delivers a presentation on ethical AI in government during Ottawa’s FWD50 conference in November 2018. [Photo © Raisa Patel]

According to Petra Molnar, it’s already difficult to challenge a decision made within Canada’s immigration and refugee system.

“It really does have life or death impacts in some ways,” Molnar said. “These are well-known sectors for being more opaque, more discretionary and for people being less able to exercise their procedural and substantive rights.”

In a notice of tender published last spring, IRCC indicated that Pre-Removal Risk Assessments and Humanitarian and Compassionate applications – typically last resort options used by people in particularly dire situations – would be potential targets for automated processing. Molnar said the department has since told her it is not planning to use AI for such applications, but she is not entirely convinced.

“Our major concern is moving ahead with experimenting with these technologies before having an accountability and an oversight mechanism in place,” Molnar said.

Treasury Board’s directive is the closest thing to such a mechanism, but federal departments don’t need to comply with its guidelines until April 1, 2020. And because the directive is policy rather than legislation, it carries less force.

“It’s not binding like it would be legislation. Nobody is going to jail if they don’t [follow] it,” said Ashley Casovan.

Still, there are some consequences for failing to comply. Depending on how severely the directive is disregarded, fallout for individuals ranges from additional training to losing their job, while institutions might be asked to work more collaboratively or face reorganization.

Federal departments have until 2020 to work on their AI projects before Treasury Board begins monitoring compliance, but it’s unclear whether decision-making processes already using the technology will be required to adhere to the directive.

Goldenberg is certain that decisions made by computer will be challenged on the basis of being “procedurally unfair”. He’s referring to a legal principle within administrative law, the area of law that involves government operations and decisions. It concerns a person’s right to be heard, to due process, to appeal and to have an impartial decision-maker. Respecting this concept is a key component of Treasury Board’s directive.

It was also the guiding principle behind the court’s judgment in Mavis Baker’s Supreme Court case. In the same way that judges analyzed notes written by an immigration officer involved with Baker’s file, so too will they need to understand an AI system’s algorithms and how they were created.

Tech under wraps

But while Goldenberg highlighted government transparency as one of the most necessary paths forward, Molnar identified another roadblock.

“The whole other piece to this is the role of the private sector,” she said.

“You can’t really develop all these innovative new tools without direct involvement of the private sector, but then you get into issues of intellectual property and proprietary rights. Asking for a full transparency mechanism has a lot of pushback.”

A key section of Molnar’s Citizen Lab report, warning of the dangers of AI in Canada’s immigration and refugee system, is about transparency in the private sector. [Photo © Raisa Patel]

Across our southern border, the clash between technology firms and the legal system has already begun.

In 2013, a judge sentenced a Wisconsin man named Eric Loomis to six years in prison – a term determined after considering both Loomis’ criminal record and a risk-assessment algorithm. The tool was designed to measure the likelihood that a defendant would commit another crime, and it identified Loomis as high risk. He appealed the ruling on the grounds that his right to due process was violated, because the algorithm was created by a private company and was therefore not open to scrutiny. While Loomis was white, independent analysis had also found that the algorithm falsely rated offenders of colour as likely to reoffend more often than it did white offenders.

The Wisconsin Supreme Court ruled against Loomis, finding that his sentence would have been the same even without the tool’s high-risk rating. But Justice Ann Walsh Bradley, writing for the court, warned that such assessments “must be subject to certain cautions.” Those cautions included the tool’s tendency to disproportionately flag minority offenders, as well as the fact that its assessments were based on a national sample of offenders, which might not be accurate at the state level.

Loomis appealed to the Supreme Court of the United States to overturn the decision, but the country’s top court declined to hear the case.

In Canada, the Directive on Automated Decision-Making addresses the issue, stating that proprietary technology should be made publicly available when possible. It notes that the government will have access to systems if they are required for a “specific audit, investigation, inspection, examination, enforcement action, or judicial proceeding, subject to safeguards against unauthorized disclosure.” It’s not clear, however, whether private companies will be forced to follow this policy.

As for those affected by AI-based decisions, Casovan said systems of redress will differ across departments.

“It’s really going to depend on the nature of what that decision is,” she said.

“For people being able to have some recourse associated with whether or not they got a visa to cross the border, then I would hope that that would be the case, and that’s what we’ve tried to encourage.”

In the legal system itself, the bulk of the responsibility currently lies with litigators to explain new technologies to judges. But Goldenberg said technological literacy is already being taught in some law schools and that judicial education should follow suit. The lawyer suggested that in the future, certain divisions of the court or particular judges could handle technical matters related to AI. Or, federal and provincial governments might create specialized courts to deal with AI, as Canada did with its tax court, though such cases could still wind up at the Supreme Court of Canada.

For now, Molnar continues to spark discussions on stronger AI regulation with legal experts, academics and the immigrants and refugees at the heart of the debate.

“That’s why we wanted to raise this conversation now, before five, ten years down the line when real issues are going to start coming to the table,” she said.