{"id":161,"date":"2019-05-02T16:46:42","date_gmt":"2019-05-02T16:46:42","guid":{"rendered":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/?page_id=161"},"modified":"2020-01-16T00:32:14","modified_gmt":"2020-01-16T00:32:14","slug":"governing-ethical-ai","status":"publish","type":"page","link":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/","title":{"rendered":"Governing ethical AI"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; _builder_version=&#8221;3.22.5&#8243; background_image=&#8221;https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-content\/uploads\/sites\/20\/2019\/06\/molnar-report-2.jpg&#8221; parallax=&#8221;on&#8221; min_height=&#8221;779px&#8221; custom_padding=&#8221;222px||2px|||&#8221;][et_pb_fullwidth_header title=&#8221;Governing ethical AI&#8221; subhead=&#8221;The use of artificial intelligence and digital identity might streamline the refugee experience, but what if these technologies harm more than they help?&#8221; content_max_width=&#8221;63%&#8221; _builder_version=&#8221;3.22.5&#8243; title_font=&#8221;Jura|600|||||||&#8221; title_text_color=&#8221;#ffffff&#8221; title_font_size=&#8221;66px&#8221; title_line_height=&#8221;1.3em&#8221; title_text_shadow_style=&#8221;preset2&#8243; title_text_shadow_blur_strength=&#8221;0.05em&#8221; subhead_font=&#8221;Georgia||||||||&#8221; subhead_text_align=&#8221;left&#8221; subhead_font_size=&#8221;22px&#8221; subhead_line_height=&#8221;1.4em&#8221; subhead_text_shadow_style=&#8221;preset2&#8243; subhead_text_shadow_blur_strength=&#8221;0.04em&#8221; background_color=&#8221;rgba(0,40,150,0)&#8221; min_height=&#8221;461px&#8221; custom_padding=&#8221;0px|||||&#8221; text_shadow_style=&#8221;preset2&#8243;][\/et_pb_fullwidth_header][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;3.22.5&#8243; min_height=&#8221;7832px&#8221;][et_pb_row custom_padding=&#8221;|||||&#8221; custom_margin=&#8221;|||||&#8221; _builder_version=&#8221;3.22.5&#8243; collapsed=&#8221;off&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.0.47&#8243;][et_pb_text _builder_version=&#8221;3.22.5&#8243; width=&#8221;80%&#8221; width_last_edited=&#8221;on|desktop&#8221; module_alignment=&#8221;center&#8221;]<\/p>\n<p><span style=\"font-weight: 400;\"><span class='et-dropcap'>I<\/span>mmigration lawyer Petra Molnar insists she\u2019s not a fearmonger. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In September 2018, Molnar and human rights researcher Lex Gill issued a damning <a href=\"https:\/\/citizenlab.ca\/wp-content\/uploads\/2018\/09\/IHRP-Automated-Systems-Report-Web-V2.pdf\">88-page report<\/a> warning about the use of artificial intelligence in Canada\u2019s immigration and refugee system. The report was published through the University of Toronto\u2019s human rights program and the Citizen Lab and bore a chilling title: Bots at the Gate.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cThe ramifications of using automated decision-making in the immigration and refugee space are far reaching,\u201d the report noted. 
<p>“The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others.”</p>
<p><img src="https://cusjc.ca/mrp/dataandthedisplaced/wp-content/uploads/sites/20/2019/05/molnar-pose.jpg" /></p>
<p><em>Petra Molnar in February 2019, visiting Ottawa for a symposium on ethics and AI. Molnar said the idea behind Bots at the Gate was sparked a year earlier, when she first read about new technologies being used in the immigration and refugee sector. [Photo © Raisa Patel]</em></p>
<p>But Molnar maintains that she’s not trying to instill terror over the use of AI in these systems. Her goal is to create a nationwide dialogue about the responsible use of automation in government, a trend that is no longer speculative.</p>
<p>“It’s important to have a balanced discussion,” Molnar said, “because fearmongering isn’t really going to get us anywhere.”</p>
<h3>Bots on the ground</h3>
<p>Artificial intelligence in this context refers to technology that sorts through data to mimic the human decision-making process. In Canada, using these tools for automated decision-making isn’t something the federal government is merely considering: it’s already doing so.</p>
<p>For the past several years, Ottawa has been quietly developing AI solutions to streamline a number of its administrative processes in the immigration and refugee sector. Since as early as 2014, the government has used predictive analytics to sort through immigrant and visitor applications. Then, an early 2018 <a href="https://buyandsell.gc.ca/procurement-data/tender-notice/PW-EE-017-33462">request for information</a> about AI tools for several federal departments revealed that Immigration, Refugees and Citizenship Canada (IRCC) was seeking an “artificial intelligence solution” for a number of processes.</p>
<p>In the request, IRCC – in partnership with the Justice Department – sought a “broad-based solution that can analyze large volumes of immigration litigation data to assist in the development of policy positions, program decisions and program guidance.” The notice mentioned using historical data, identifying factors to predict successful litigation outcomes and putting this information in the hands of administrative decision-makers.</p>
<p>“In other words,” Molnar’s report explained, “the proposed technology would be involved in screening cases for strategic legal purposes, as well as potentially for assisting administrative decision-makers in reaching conclusions in the longer-term.”</p>
<p>Last year, IRCC also revealed it has been using an AI tool since early 2018 to triage a large volume of temporary resident visa applications pouring in from India and China, a project slated to cost the department more than $850,000.</p>
<h3>Garbage in, garbage out</h3>
<p>In federal departments, automated technology can provide significant time and cost savings. IRCC is often beleaguered by application backlogs and sometimes faces processing times that span multiple years. As Molnar notes, however, the significance of immigration and refugee decisions means applications must be handled carefully.</p>
<p>But instead of providing a solution, artificial intelligence can introduce a serious problem: bias.</p>
<p>In an algorithmic context, bias refers to a simple premise many experts describe as “garbage in, garbage out”: if the quality of data entering an algorithm is poor, it will generate poor results. In 2015, Amazon discovered an AI system it was using to select possible job candidates was <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">unintentionally sexist</a>. That’s because it was trained on 10 years of resumes from successful applicants – who mostly turned out to be men. In another example, researchers from MIT and Stanford University <a href="http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212">found</a> that facial recognition software was much more successful at identifying white men than it was at identifying black women. A data sample that predominantly consisted of white faces was the culprit.</p>
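<p>To make the “garbage in, garbage out” premise concrete, the sketch below is a hypothetical Python example (assuming NumPy and scikit-learn; the data, features and skew are invented purely for illustration and are not drawn from any real immigration system): a screening model trained on historically skewed decisions simply learns to reproduce the skew.</p>
<pre><code class="language-python">
# Hypothetical sketch of "garbage in, garbage out": a model trained on skewed
# historical decisions learns to reproduce that skew. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (application quality, 0..1) and one protected
# attribute (group 0 or 1) that should be irrelevant to the outcome.
quality = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)

# Historical decisions: strong applications were approved, but group 1 was
# systematically approved less often. This is the "garbage" going in.
p_approve = np.clip(0.8 * quality - 0.3 * group + 0.2, 0, 1)
approved = p_approve > rng.uniform(0, 1, n)

# Train a model to imitate those past decisions.
model = LogisticRegression().fit(np.column_stack([quality, group]), approved)

# Identical application quality, different group: the model recommends
# approval far less often for group 1, because the training data did.
for g in (0, 1):
    X = np.column_stack([np.full(1000, 0.7), np.full(1000, g)])
    print(f"group {g}: predicted approval rate {model.predict(X).mean():.2f}")
</code></pre>
<p>Because the model is rewarded for imitating past decisions, it learns the skew along with everything else; simply dropping the protected attribute rarely fixes the problem, since other features can act as proxies for it.</p>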
<blockquote>
<p>“I struggle with this presupposition that technology is somehow less biased than humans, because it’s created by human beings.”</p>
<p>– Petra Molnar</p>
</blockquote>
<p>But humans are biased too, including the administrative decision-makers who make life-altering calls about who gains entry into Canada. Documented cases of <a href="https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/1717/index.do">bias</a> or potentially <a href="https://nationalpost.com/pmn/news-pmn/canada-news-pmn/letter-to-spouse-applying-for-permanent-residency-offensive-kwan-says">culturally insensitive</a> language can be found in notes from immigration officers, and this is the sort of data automated systems will rely on to sift through immigration and refugee applications.</p>
<p>“I struggle with this presupposition that technology is somehow less biased than humans, because it’s created by human beings,” Molnar said of the belief that AI might be fairer than human decision-makers.</p>
<p>[Video: https://youtu.be/NRDaZ5KAwnc]</p>
<p>It would be reasonable to assume that Canada has a strict set of regulations governing the use of such technology, particularly when an automated system’s thought process is not as transparent as a human one.
The reality, however, is not so clear.</p>
<h3>Who’s in charge?</h3>
<p>In March 2019, then-Treasury Board president Jane Philpott announced the publication of Canada’s <a href="http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592">Directive on Automated Decision-Making</a>, an extensive policy document outlining how government should properly develop and deploy decision-making AI. The release of the directive marked the country’s first step toward monitoring the use of AI in federal departments. But it was published mere hours before Philpott’s resignation from cabinet, so the directive quickly disappeared from public consciousness.</p>
<p>While it may be a relatively unknown piece of policy, it’s a critical one. The directive is a sweeping guide intended to reduce the risks of automated systems. It outlines rules for establishing transparency, like making clear how systems work and providing explanations for AI-based decisions; testing for unintended biases algorithms may generate on their own; ensuring humans can intervene in a system’s decision-making process when necessary; and listing consequences for failure to comply.</p>
<p>[Interactive timeline: “Directive on Automated Decision-Making: A timeline” – https://infogram.com/6401ce1f-fec1-4391-9a9f-2a0397a0a63a]</p>
<p>The directive also requires the use of an <a href="https://canada-ca.github.io/aia-eia-js/">Algorithmic Impact Assessment</a> (AIA). Currently in draft form, the AIA is a 57-part questionnaire that assesses the details of a proposed system, like which parts of the system will be controlled by AI, or where the data behind it came from. The system is then assigned one of four impact levels, which correspond to how much the system will affect someone’s life and how reversible its decisions are. The level then determines the additional review, testing and transparency requirements the system must meet.</p>
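<p>The mechanics can be pictured roughly as a scoring exercise: answers about a proposed system are tallied, the tally maps to one of four impact levels, and the level switches on progressively heavier oversight. The sketch below is a hypothetical Python illustration of that idea only; the questions, weights, thresholds and requirement lists are invented here and are not the actual AIA questionnaire or scoring scheme.</p>
<pre><code class="language-python">
# Hypothetical illustration of an impact-assessment workflow: questionnaire
# answers are scored, the score maps to an impact level (1-4), and the level
# determines oversight requirements. Not the real AIA scoring scheme.
from dataclasses import dataclass

@dataclass
class Answers:
    decision_is_irreversible: bool   # can the outcome be undone later?
    affects_rights_or_status: bool   # e.g. liberty, legal status in Canada
    fully_automated: bool            # no human in the loop
    uses_personal_data: bool

def impact_level(a: Answers) -> int:
    """Map questionnaire answers to an impact level from 1 (low) to 4 (very high)."""
    score = (3 * a.decision_is_irreversible
             + 3 * a.affects_rights_or_status
             + 2 * a.fully_automated
             + 1 * a.uses_personal_data)
    if score >= 7:
        return 4
    if score >= 5:
        return 3
    if score >= 2:
        return 2
    return 1

# Higher levels carry heavier review, testing and transparency duties.
REQUIREMENTS = {
    1: ["plain-language notice that an automated system is used"],
    2: ["notice", "explanation of decisions on request", "peer review"],
    3: ["notice", "explanations", "peer review", "human intervention point"],
    4: ["notice", "explanations", "external review", "final decision made by a human"],
}

# A hypothetical claim-triage tool: rights-affecting and hard to reverse.
triage_tool = Answers(decision_is_irreversible=True, affects_rights_or_status=True,
                      fully_automated=False, uses_personal_data=True)
level = impact_level(triage_tool)
print(f"impact level {level}: {REQUIREMENTS[level]}")
</code></pre>
<p>The underlying logic mirrors the directive’s: the more a system can change someone’s life and the harder its decisions are to reverse, the more review, testing and transparency it must carry.</p>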
<p>“It is, in my opinion, a step in the right direction,” said Mirka Snyder Caron, an associate at the Montreal AI Ethics Institute. “It gives a checklist approach, it provides different AI impact assessments varying on the impact on the rights of individuals.”</p>
<p>Molnar believes it’s a good step too, and has had positive discussions with Treasury Board about the guidelines.</p>
<p>“At the same time, it’s a directive, not legislation,” Molnar cautioned regarding the enforcement of the policy. “There’s only so far that it can go.”</p>
<p>Molnar was unable to say whether legislation is needed at this point, but other countries have started entering that space. In early April, the U.S. took the plunge by introducing a <a href="https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202019%20Bill%20Text.pdf?utm_campaign=the_algorithm.unpaid.engagement&amp;utm_source=hs_email&amp;utm_medium=email&amp;_hsenc=p2ANqtz-___QLmnG4HQ1A-IfP95UcTpIXuMGTCsRP6yF2OjyXHH-66cuuwpXO5teWKx1dOdk-xB0b9">bill</a> to regulate the use of automated decision-making systems. Named the Algorithmic Accountability Act, the bill involves testing a company’s systems for bias, conducting impact assessments (much like Canada’s AIA tool) and addressing any red flags arising from these assessments within a specific time period. Across the pond, the British Parliament established a <a href="https://www.parliament.uk/business/committees/committees-a-z/lords-select/ai-committee/role/">Committee on Artificial Intelligence</a> back in 2017 to better understand the economic, social and ethical implications of the technology.</p>
<p><img src="https://cusjc.ca/mrp/dataandthedisplaced/wp-content/uploads/sites/20/2019/05/snyder-caron.jpg" /></p>
<p><em>Snyder Caron recently joined the Montreal AI Ethics Institute, a civic engagement group that helps people build AI tools ethically. [Photo © Raisa Patel]</em></p>
<p>Aside from Treasury Board, two other bodies handle AI development in Canada: the Canadian Institute for Advanced Research (CIFAR) and the Chief Information Officer (CIO) Strategy Council. Neither governs AI, but CIFAR’s $125 million Pan-Canadian Artificial Intelligence Strategy researches the ethical and legal implications of these systems and is training the next generation of AI researchers at its three federally funded institutes.</p>
<p>“One of the things that we are dedicated to and focusing on is making sure that as many students as possible get training around the ethical development and application of AI,” said Elissa Strome, the strategy’s executive director. “It’s really important to us that not only do they get the technical skills but they also get some understanding of these ethical questions and the other social implications of AI.”</p>
<p>The CIO Strategy Council, on the other hand, is currently developing a set of standards to shape the private sector’s ethical development of automated decision-making systems.</p>
<p>“They’re voluntary standards, but really for a product to be deemed to be following ethical frameworks and ensuring privacy, [the private sector] is going to have to be able to demonstrate that they have followed those standards,” Strome explained.</p>
<h3>What next?</h3>
<p>As for Canada’s fledgling attempts at regulation, there are still points of weakness.</p>
<p>“There are concerns that [the directive] won’t be sufficient for the immigration process. It is a very broad directive and it’s meant to be applied across various administrative organizations,” said Snyder Caron, referring to potential human rights abuses in the immigration and refugee system.</p>
<p>“There are issues which I believe should be tackled specifically for immigration and it would be very nice to see, at least on the first step, specific islands in the context of immigration.”</p>
<p>[Video: https://youtu.be/-zHLAPAASrg]</p>
<p>The “specific islands” Snyder Caron is referring to could become a reality where one area of the directive is concerned: its impact assessments. The draft questionnaire is not highly tailored to different uses of automated systems, but Treasury Board hopes to change that.</p>
<p>Ashley Casovan, who works within the Treasury Board Secretariat, said there are possible plans to create “extensions” for AIAs. That would mean making specific assessments for tools in different sectors, like immigration or health.</p>
<p>In terms of bias, Casovan cited the directive’s requirement to further review projects that have a more significant impact on human rights.</p>
<p>“We want whoever is doing the peer review to do not an analysis of the data, but the data and the potential algorithm, the methodology, in tandem with one another, to ensure that bias would be caught,” Casovan explained.</p>
<p>References to automated decision-making have also appeared elsewhere in federal legislation: Canada’s <a href="https://www.documentcloud.org/documents/5983171-Immigration-and-Refugee-Protection-Act-2001.html#annotation/a498063">Immigration and Refugee Protection Act</a> approves the use of the technology. That’s according to a 2015 amendment buried in the legislation, which was added to permit the use of electronic systems to administer the Act.</p>
<p>“Let’s say my client comes to see me and says, ‘What are we going to do about this? How can we challenge it?
Under what grounds can we challenge it?’ Then it becomes problematic, because do you challenge it under the existing Act?” Molnar said.</p>
<blockquote>
<p><em>“For greater certainty, an electronic system, including an automated system, may be used by the Minister to make a decision or determination under this Act, or by an officer to make a decision or determination or to proceed with an examination under this Act, if the system is made available to the officer by the Minister.”</em></p>
<p><em>– Section 186.1, Immigration and Refugee Protection Act</em></p>
</blockquote>
<p>“That’s when it gets tricky. What do you bring to court? And how do you make judges understand these intricacies around decision-making when you’re now dealing with a whole other system of cognition? It gets very complicated, very fast.”</p>
<p>Molnar has identified a critical consideration in Canada’s push for AI: resolving automated decision-making disputes will fall to Canada’s legal system. But where the aftermath of AI is concerned, it’s a system that might not be ready.</p>
<p><img src="https://cusjc.ca/mrp/dataandthedisplaced/wp-content/uploads/sites/20/2019/05/molnar-report-close.jpg" /></p>
<p><em>Molnar holds a personal copy of Bots at the Gate, covered in notes. It’s been seven months since the report’s publication, and the immigration lawyer is still surprised by the ripple effect it created.
[Photo © Raisa Patel]</em></p>
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"15 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/governing-ethical-ai\\\/\",\"url\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/governing-ethical-ai\\\/\",\"name\":\"Governing ethical AI - Data and the displaced\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/#website\"},\"datePublished\":\"2019-05-02T16:46:42+00:00\",\"dateModified\":\"2020-01-16T00:32:14+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/governing-ethical-ai\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/governing-ethical-ai\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/governing-ethical-ai\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Governing ethical AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/#website\",\"url\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/\",\"name\":\"Data and the displaced\",\"description\":\"How new technology could revolutionize refugee&#039;s journey to safety\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/cusjc.ca\\\/mrp\\\/dataandthedisplaced\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Governing ethical AI - Data and the displaced","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/","og_locale":"en_US","og_type":"article","og_title":"Governing ethical AI - Data and the displaced","og_description":"mmigration lawyer Petra Molnar insists she\u2019s not a fearmonger. In September 2018, Molnar and human rights researcher Lex Gill issued a damning 88-page report warning about the use of artificial intelligence in Canada\u2019s immigration and refugee system. The report was published through the University of Toronto\u2019s human rights program and the Citizen Lab and bore [&hellip;]","og_url":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/","og_site_name":"Data and the displaced","article_modified_time":"2020-01-16T00:32:14+00:00","twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"15 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/","url":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/","name":"Governing ethical AI - Data and the displaced","isPartOf":{"@id":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/#website"},"datePublished":"2019-05-02T16:46:42+00:00","dateModified":"2020-01-16T00:32:14+00:00","breadcrumb":{"@id":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/governing-ethical-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/"},{"@type":"ListItem","position":2,"name":"Governing ethical AI"}]},{"@type":"WebSite","@id":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/#website","url":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/","name":"Data and the displaced","description":"How new technology could revolutionize refugee&#039;s journey to safety","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/pages\/161","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/users\/22"}],"replies":[{"embeddable":true,"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/comments?post=161"}],"version-history":[{"count":0,"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/pages\/161\/revisions"}],"wp:attachment":[{"href":"https:\/\/cusjc.ca\/mrp\/dataandthedisplaced\/wp-json\/wp\/v2\/media?parent=161"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}