IP-AI • JANUARY 31, 2019
Looking back to the future of AI
Maroussia Lévesque is an attorney and researcher with a background in interactive media. She attended the March 2019 Indigenous Protocol and Artificial Intelligence workshops in Hawai’i. Here she explores the future of AI.
In short, it looks like the past—unless we do something about it.
First, a definition. AI is an umbrella term that means different things to different people. My work focuses on machine and deep learning, because I think those are the technologies most conducive to a paradigm shift. I’ll spare you the platitudes about AI’s potential transformative effects, but it is worth noting that deep learning, especially when applied to large, unstructured data sets, can detect patterns in ways humans can’t. I’ll let my comp sci colleagues unpack—or debate—this assertion.
Back to my point about the past:
Machine and deep learning systems feed on existing data. Unchecked, they tend to reproduce and amplify existing bias. The most concerning examples sit in the criminal justice system, from predictive policing to bail determinations. Note that the latter uses crude statistical analysis rather than a complex deep learning system, but the argument stands: considering ‘criminality’ factors without a critical understanding of the racial and socio-economic constructs biasing the data perpetuates inequality.
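To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. The “group” and “risk” variables are invented for illustration and are not drawn from any real policing dataset: a classifier fit to labels that encode the over-policing of one group simply learns that disparity and projects it forward.

```python
# Toy illustration (hypothetical data): a model trained on skewed historical
# labels reproduces the skew at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Invented flag standing in for an over-policed group (1) versus everyone else (0).
group = rng.integers(0, 2, size=n)

# Historical "risk" labels: identical underlying behavior, but group 1 was
# recorded as high-risk far more often because it was policed more heavily.
label = rng.random(n) < np.where(group == 1, 0.40, 0.10)

# Features: group membership plus a noise feature that carries no real signal.
X = np.column_stack([group, rng.normal(size=n)])
model = LogisticRegression().fit(X, label)

# The model dutifully learns the historical disparity and carries it forward.
for g in (0, 1):
    rate = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"predicted risk for group {g}: {rate:.2f}")
```

The point is not the particular model but the mechanism: nothing in the training step asks whether the historical labels were fair in the first place.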
Computer science has a major white guy problem. It’s important to acknowledge laudable initiatives to organize POC, non-binary and other folks, but generally AI is still designed by people who are the norm. A case in point is the lower accuracy of facial recognition systems on black and brown faces, especially women’s. Similarly, might a diverse team have prevented the gorilla mishap? To note: the company simply removed ‘gorilla’ from its search labels rather than address the underlying problem. There’s an interesting tangential discussion about when (in)visibility is power, depending on whether AI is used in repressive contexts or to provide services. Spoiler alert: marginalized communities are overrepresented in law enforcement datasets due to over-policing. If we want AI to stop replaying the same scenario, it’s time to flip the script and get a diversity of people involved upstream. Caveat: I’m also conscious/wary of the limitations of positionality, i.e. demanding that the token representative of XYZ bear the burden of defending a whole community. I think it’s everyone’s job, particularly those who are more privileged: a burden of proof of sorts.
If systems are imposed top-down, marginalized/disenfranchised communities will continue to be the testbeds for oppressive practices. See Virginia Eubanks’s excellent case studies in the US context. More broadly, AI meshes with surveillance practices in a way that challenges both domestic and international protections on privacy.
Who’s Doing What
The private sector drives AI development. While some companies have called for hard regulations or international treaties, the overwhelming majority lobby for soft ethical standards. Some see corporate social responsibility as a form of ethics-washing. Possible compromises include regulatory sandboxes and technical standards. [Disclosure: I’m part of the IEEE standard on algorithmic bias.]
Governments are also grappling with this new reality. AI-facilitated election meddling was a wake-up call for many. How should nations leverage AI’s economic potential while respecting their human rights commitments? A fair criticism would be that (a) most don’t and (b) human rights are a Western construct further perpetuating oppression. At any rate, we’ve seen several nations and regional alliances lead consultations and issue AI strategies to hedge against perceived future risk and seek leadership in what some have called the new space race.
Ways Forward
What about people? I’ve already alluded to informal alliances of AI workers. Another thread is the #TechWontBuildIt phenomenon. While it is not limited to AI projects, the movement opposes the use of technology for immoral purposes, and most of the technology in question involves AI. For example, Amazon employees denounced the use of their facial recognition tech and cloud computing platform in support of state surveillance and immigration deportations. There’s a longer discussion to be had about whether Silicon Valley engineers are well placed to make these kinds of decisions, but there is at least some evidence of wider coalition building with existing forms of activism.
One thing that troubles me very much is that these conversations are largely taking place without the people primarily impacted by these technologies. I’ve had the honor of getting a glimpse of the fierce work of the Stop LAPD Spying Coalition based in Skid Row. LA is ground zero for predictive policing, and its affected communities have organized a formidable, smart response to tech-facilitated surveillance and data analytics. Coalition work is hard. It requires patience, compromise, and humility. The group must wait for everyone to be caught up and on board before it moves forward. But when it does, it speaks with a thousand voices.
I want to leave us on two more positive notes. First, art has the power to interrogate AI in ways that policy, law or computer science can’t. I particularly enjoy the work of Trevor Paglen, and I hope you will too. Second, back to the idea that AI is a social construct: while it is still largely shaped through Western concepts and values, other worldviews, from Estonian folklore to Innu grammar to Japan’s Shinto tradition, are making their way into AI discussions. I look forward to meeting you all and learning about your perspectives.
References
#TechWontBuildIt [Twitter hashtag]. (n.d.). Twitter. Retrieved November 11, 2019, from twitter.com/hashtag/TechWontBuildIt.
Algorithmic Bias Working Group. (2017). P7003 - Algorithmic Bias Considerations. IEEE Standards Association. Retrieved from standards.ieee.org/project/7003.html.
Conger, K. (2018, June 21). Amazon workers demand Jeff Bezos cancel face recognition contracts with law enforcement. Gizmodo. Retrieved from gizmodo.com/amazon-workers-demand-jeff-bezos-cancel-face-recognitio-1827037509.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
Gender Shades. (n.d.). Retrieved November 11, 2019, from gendershades.org.
Google apologizes after app mistakenly labels Black people ‘gorillas’ [online article]. (2015, July 3). CBC News. Retrieved from cbc.ca/news/trending/google-photos-black-people-gorillas-1.3135754.
Google. (2019, January). Perspectives on issues in AI governance [PDF document]. Google AI. Retrieved from ai.google/perspectives-on-issues-in-AI-governance.
Heller, N. (2017, December 11). Estonia, the digital republic. The New Yorker. Retrieved from newyorker.com/magazine/2017/12/18/estonia-the-digital-republic.
Hu, C. (2017, October 22). A MacArthur ‘genius’ unearthed the secret images that AI uses to make sense of us. Quartz. Retrieved from qz.com/1103545/macarthur-genius-trevor-paglen-reveals-what-ai-sees-in-the-human-world.
Kesserwan, K. (2018, February 16). Indigenous conceptions of what is human, of what has a spirit and what doesn’t, offer a different way of considering AI - and how we relate to each other. Policy Options. Retrieved from policyoptions.irpp.org/magazines/february-2018/how-can-indigenous-knowledge-shape-our-view-of-ai.
Kimura, T. (2017, January 30). Robotics and AI in the sociology of religion: A human in imago roboticae. Social Compass 64(1). Retrieved from doi.org/10.1177/0037768616683326.
Simonite, T. (2018, January 11). When it comes to gorillas, Google Photos remains blind. Wired. Retrieved from wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind.
Stop LAPD Spying Coalition. (n.d.). Retrieved November 11, 2019, from stoplapdspying.org.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping? The Privacy and Sustainable Computing Lab. Retrieved from privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-as-an-Escape-from-Regulation_2018_BW9.pdf.
Author Bio:
Maroussia Lévesque is an attorney and researcher with a background in interactive media. She consults for governments, the private sector, and NGOs on the legal and policy implications of emerging technologies. She was the Conceptual Lead at Obx Labs for Experimental Media during her B.F.A. in Computation Arts at Concordia University, and researched IP issues at the Center for Genomics and Policy during her B.C.L./LL.B. law degrees from McGill. Maroussia was involved in the Quebec inquiry commission on the electronic surveillance of journalists, and drafted a foreign policy pertaining to AI and human rights for the Digital Inclusion Lab at Global Affairs Canada. She is a member of the Institute of Electrical and Electronics Engineers working group on algorithmic bias and speaks about law in digital spaces in contexts ranging from informal privacy workshops to international conferences and peer-reviewed journals.
The Indigenous Protocols and Artificial Intelligence (IP-AI) workshops were founded by Old Ways, New, and the Initiative for Indigenous Futures. This work is funded by the Canadian Institute for Advanced Research (CIFAR), Old Ways, New, the Social Sciences and Humanities Research Council (SSHRC) and the Concordia University Research Chair in Computational Media and the Indigenous Future Imaginary.
How do we Indigenously Interact with AI?