IP-AI • FEBRUARY 18
What does the future look like for AI?
Caroline Running Wolf (Crow Nation), née Old Coyote, is an enrolled member of the Apsáalooke Nation (Crow) in Montana, with a Swabian (German) mother and also Pikuni, Oglala, and Ho-Chunk heritage. She attended the March 2019 Indigenous Protocol and Artificial Intelligence workshops in Hawai’i. Here she explores the future of AI.
As a preschooler I was fascinated by my friend’s parents, who had been researching and trying to develop an artificial intelligence for a large company since the 1960s. Whenever I checked in with them, every decade or so, they laughed it off and confided in me that artificial “intelligence” still had a long way to go to fill the shoes of that label.
Today we have achieved a certain level of (almost) artificial intelligence—for clearly delineated, specific tasks. Much of this is still based on computational pattern recognition through large amounts of data. Machines still can’t learn and infer context like humans can. But humans are the ones programming these machines—and it shows.
Reports regularly surface about AI-powered software with racial or gender bias. Earlier this month a Twitter user posted a screenshot of a suggested correction from Grammarly, an online grammar and contextual spell-checking platform. Grammarly flagged an “unusual word pair” and suggested combining the noun “girl” with an adjective other than “successful,” positing that synonyms like “lucky” or “happy” might be more fitting. Facial recognition software jumps from an error rate of about 1% for light-skinned men to over 35% for dark-skinned women. Despite the obvious bias in current AI systems, Joy Buolamwini, founder of the Algorithmic Justice League, concludes her February 7, 2019 Time article on a hopeful note:
“I am optimistic that there is still time to shift towards building ethical and inclusive AI systems that respect our human dignity and rights. By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion. In addition to lawmakers, technologists, and researchers, this journey will require storytellers who embrace the search for truth through art and science. Storytelling has the power to shift perspectives, galvanize change, alter damaging patterns, and reaffirm to others that their experiences matter. That’s why art can explore the emotional, societal, and historical connections of algorithmic bias in ways academic papers and statistics cannot. And as long as stories ground our aspirations, challenge harmful assumptions, and ignite change, I remain hopeful.”1
I agree with Joy Buolamwini. Despite currently manifested biases and limitations, the future for AI is still malleable. Our workshop is not a day too early!
Today’s implementations of AI are already very promising. Personally, I am excited about the possibilities of AI, especially what speech recognition, Natural Language Processing (NLP), and chatbots offer for the revitalization of endangered Indigenous languages. This is the field I am passionate about, and I am willing to recruit the help of any technology available for this goal. I realize that the amount of data needed for NLP to generate speech and interactions in Indigenous languages is a major hurdle, but just imagine the possibilities!
Some endangered Indigenous languages have only a handful of fluent speakers left. These speakers are elderly. Our time with them is limited and we have to use it wisely. We shouldn’t waste their energy and knowledge by making them teach language beginners or having them translate individual words for a dictionary. Technology can assist with these simple tasks. In the future, home assistants could be programmed to recognize and respond in Indigenous languages, allowing language learners to apply and practice their language skills. Real-time translation could translate websites and social media as well as dub TV shows and movies. We could interact with video game characters in our Indigenous language, engaging in human-like conversations. With the help of current and future AI technologies we can build language tools that expand our everyday usage of Indigenous languages.
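At its simplest, the kind of practice tool imagined above is a matter of matching a learner’s utterance against a community-built lexicon and replying in the target language. The sketch below is purely illustrative and not any existing tool; the phrases are English placeholders standing in for material that would have to come from speakers themselves:

```python
# Toy sketch of a language-practice assistant's core loop.
# The phrases are hypothetical English placeholders; a real tool would
# be built from community-approved recordings, text, and protocols.
lexicon = {
    "hello": "greeting response (in the target language)",
    "how are you": "well-being response (in the target language)",
}

def practice_reply(utterance: str) -> str:
    """Return the target-language reply for a recognised phrase."""
    key = utterance.strip().lower()
    # Unknown phrases become prompts to grow the lexicon with speakers.
    return lexicon.get(key, "phrase not yet recorded")

print(practice_reply("Hello"))         # greeting response (in the target language)
print(practice_reply("good evening"))  # phrase not yet recorded
```

Even a loop this simple makes the point: the scarce resource is not the software but the recorded knowledge of fluent speakers, which is why their time is better spent growing the lexicon than drilling beginners.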
No technology can replace humans and true human interaction, but just like other technologies that came before it, artificial intelligence can change our lives. My hope is that AI will also have a major effect on the reclamation of our Indigenous languages.
References
Buolamwini, J. (2019, February 7). Artificial intelligence has a problem with gender and racial bias. Here’s how to solve it. Time. Retrieved from time.com/5520558/artificial-intelligence-racial-gender-bias.
1. Joy Buolamwini, “Artificial intelligence has a problem with gender and racial bias. Here’s how to solve it,” Time, February 7, 2019 <time.com/5520558/artificial-intelligence-racial-gender-bias>.
Author Bio:
Caroline Running Wolf (Crow Nation), née Old Coyote, is an enrolled member of the Apsáalooke Nation (Crow) in Montana, with a Swabian (German) mother and also Pikuni, Oglala, and Ho-Chunk heritage. As the daughter of nomadic parents, she grew up between the USA, Canada, and Germany. Thanks to her genuine interest in people and their stories, she is a multilingual Cultural Acclimation Artist dedicated to supporting Indigenous language and culture vitality. After working for over 15 years as a professional nerd herder and business consultant in various fields, Running Wolf co-founded a nonprofit, Buffalo Tongue, with her husband, Michael Running Wolf. Together they create virtual and augmented reality experiences to advocate for Native American voices, languages, and cultures. Running Wolf has a Master’s degree in Native American Studies from Montana State University in Bozeman, Montana. She is currently pursuing her PhD in Anthropology at the University of British Columbia in Vancouver, Canada.
The Indigenous Protocols and Artificial Intelligence (IP-AI) workshops were founded by Old Ways, New, and the Initiative for Indigenous Futures. This work is funded by the Canadian Institute for Advanced Research (CIFAR), Old Ways, New, the Social Sciences and Humanities Research Council (SSHRC), and the Concordia University Research Chair in Computational Media and the Indigenous Future Imaginary.
ʻUmeke kāʻeo: (Re)coding AI to ʻĀina
Joel Davison is a Gadigal and Dunghutti man from Sydney, Australia. He attended the March 2019 Indigenous Protocol and Artificial Intelligence workshops in Hawai’i. Here he explores the future of AI.
AI today is bound by practicality: talented developers, cutting-edge research, specialised hardware, and top-of-the-line cyber security are all ingredients required to advance simple AI beyond current offerings. This means that the entities with the power to advance AI, those with access to pools of talent and academic connections as well as the funding for hardware and security, are those which already have far more money to invest than is required to operate as a business. These entities, be they government or private, expect a return on investment. In this way, AI advances will always be pushed in a direction that is either profitable or marketable. As a result, AI is entwined with automation in our cultural lexicon, and it is this connection that often dominates conversation.
If Artificial Intelligence is to replicate human intelligence, then the most direct way to profit from that intelligence is to exploit its labor value. In this way, conversations are often steered towards analysis of the labor value of existing occupations. For example, advances by large tech companies in self-driving cars have every in-tune truck driver eyeing other industries at this point, and we1 can’t2 stop talking3 about4 it5.
The vast majority of these industry-shaping moves are opportunities available only to the wealthiest organisations on the planet, because the benefit is only realised at huge scale, thanks to the costs outlined above: talent, research, hardware, and security. It simply isn’t feasible for small organisations, potentially social ventures, NGOs, or co-ops, to stake a claim to a portion of the market without the network and capability to take advantage of the wider market. If the benefit of Artificial Intelligence in this liberal-capitalist frame is the profit earned by extracting more labor value, by reducing the overhead of hiring humans to manually perform tasks, then by the time you have paid the up-front costs for the research, development, and specialised manufacturing to begin providing self-driving vehicles as a service, you realise that you need to roll out your service on a massive scale before the benefits appear. In this environment Artificial Intelligence becomes a winner-takes-all venture, where the only participants are those already winning.
However, we have been seeing a shift in this landscape: a move by some of the largest organisations that changes the climate entirely. Having developed their AI and taken their time to scale and implement before seeing their benefit, these large organisations have started to look for alternate revenue sources for their AI solutions. Most notable among these are the AI-as-a-service platforms, such as IBM’s Watson or Google’s TensorFlow. Suddenly, small organisations can provide the benefit of AI (or at least market that they do) without the tremendous up-front cost of research and specialised hardware. We are now seeing many small businesses and startups get into the game of exploiting the difference in labor value between human intelligence and Artificial Intelligence, this time opening up smaller scales, nooks and crannies in the marketplace to be explored.
In all of these conversations we are only exploring the capital value of simple Artificial Intelligence: it’s the capitalist equivalent of only talking about the ‘why?’ of AI (the answer to which is almost always ‘money’). We rarely explore the impact of simple Artificial Intelligence; we never really ask ‘how?’, and when we do, it’s always too late.
In November 2017, The Guardian broke the story of a secret police blacklist employed by the New South Wales Police,6 a “Suspect Targeting Management Plan” (STMP), which the NSW Police Commissioner called a “predictive style of policing”. This is low-hanging fruit, isn’t it? My intention was to share a couple of cases where organisations hadn’t stopped to ask ‘how?’, or what their impact is, but surely no one on this program even stopped to ask ‘why?’. It doesn’t take a genius to figure out how this goes terribly wrong; hell, you don’t even have to look much further than Marvel, who ran a (fantastic, by the way) crossover event titled “Civil War II”, which featured at its centre the arguments for and against ‘predictive policing’. It’s actually kind of prophetic and I love it so.
*spoiler warning*
The event comes to a boiling point when a new Spider-Man, Miles Morales7 (a young African American and Puerto Rican man), is accused of a murder he has not yet committed: a vision shows him killing Steve Rogers, Captain America, in the future. After all of the superheroes have shared their perspectives and opinions and had their brawls, the takeaway is the question, ‘is it ever okay to judge someone for something they haven’t done but could do?’, to which the answer is no, you shouldn’t, especially if the current criminal justice system isn’t suited to it and especially if you haven’t thought very carefully about it. Unfortunately the Australian criminal justice system isn’t suited to it and very clearly the NSW Police did not think very carefully about it.
*spoilers over*
‘Okay Joel, so you have some comic-book-based opinions on predictive justice, but seriously, how bad could it be?’
It gets pretty bad. According to NSW Police Commissioner Mick Fuller, “there were about 1,800 people subject to an STMP across the state. About 55% of them were Aboriginal”, the youngest of whom is nine years old. Indigenous Australians make up only 3% of the national population, so how is it that we represent such a large portion of this database? Are we really that talented at crime? Do we really commit 17 times more crime than any other Australian ethnicity? Of course not; that’s ridiculous. So how did this AI come up with this list of suspects? The truth is, we don’t know. If you ask the police, they wouldn’t know either. The company they contracted to develop the solution likely doesn’t know either, and doesn’t care how; they’ve already answered their ‘why?’ (read: money). Most likely the people developing the solution don’t understand how the AI’s learning algorithms work and didn’t think about the kind of training data the AI was trained on before it started working on production data.
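The arithmetic behind that over-representation can be checked in a few lines. This is just a sketch using the 1,800 and 55% figures from the Commissioner’s statement and the 3% population share cited above; with these rounded inputs the ratio lands near the 17-times figure:

```python
# Figures quoted above: roughly 1,800 people on the STMP, about 55% of
# them Aboriginal, while Indigenous Australians make up about 3% of the
# national population.
stmp_total = 1800
share_on_list = 0.55
share_of_population = 0.03

# Roughly how many Aboriginal people are on the list, and how far that
# exceeds their share of the population.
aboriginal_on_list = stmp_total * share_on_list
overrepresentation = share_on_list / share_of_population

print(round(aboriginal_on_list))     # 990
print(round(overrepresentation, 1))  # 18.3
```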
‘But Joel, they’d have to have thought pretty hard if they made the AI racist; it’s a machine, so it’s impartial to race and ethnicity.’ It turns out that’s not the case:8 AI more or less comes out of the box racist. This is due to how AI are configured in these projects. To perform better than humans they need to learn more than humans in the narrow field they’re being developed for, which is one of their strengths: they can take a huge set of training data and learn from it very quickly. The data is important, however, and as it happens the most easily accessible large datasets are user-generated and contain all of their respective prejudices. So it’s important to ask ‘what data set was it trained on?’ In this case, almost certainly existing data on previous arrests and criminal convictions by the Australian Federal Police. ‘Hold on, the data on previous arrests and criminal convictions by the Australian Federal Police reveals a strong recurring prejudice toward the Indigenous population of Australia?’
Imagine my shock.
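That mechanism is easy to demonstrate. Here is a minimal sketch, with invented communities and counts standing in for real arrest records: a “risk model” that learns nothing but base rates from skewed historical data will faithfully reproduce the skew.

```python
from collections import Counter

# Invented, deliberately skewed "historical arrests" data: one community
# is over-represented because past policing focused on it.
historical_arrests = ["community_a"] * 55 + ["community_b"] * 45

# A naive model that learns only base rates from past arrests.
counts = Counter(historical_arrests)
total = sum(counts.values())
risk_score = {group: n / total for group, n in counts.items()}

# The "prediction" simply echoes the prejudice baked into the data,
# regardless of anyone's actual behaviour.
print(risk_score["community_a"])  # 0.55
print(risk_score["community_b"])  # 0.45
```

Real systems are far more complex than this base-rate toy, but the failure mode is the same: whatever prejudice shaped the training data reappears, laundered, as the model’s output.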
So now the police have a racist AI that’s populating a confidential list of suspects who are majority Indigenous, who the police are now legally able to arrest before they commit a crime or do anything suspicious. Yeah, the police in 2017 criminalised being Aboriginal. That’s how bad it gets.
I’d love to say this proves the point I was making earlier about the impacts AI can have if we don’t ask ‘how?’, but it’s even worse than that. The fact of the matter is that unless we are very careful, AI-as-a-service can be used to intentionally obfuscate the ‘how?’. We don’t know how the NSW Police’s AI became racist; we can make very good educated guesses about training data and configuration, but we don’t know: the AI obfuscates the process by which it came up with its database through its sheer complexity alone. The biggest problem is that in spite of this, the results are still being used with authority. Because it is an AI, a machine that ‘just runs analysis’, all it is doing is giving authority to existing and past prejudices and perpetuating them, rather than having the ability to challenge them like a human might.
We haven’t been asking ourselves ‘how?’, and when we don’t, we don’t move forward, we don’t challenge, and we don’t change. We just become more efficient, and I don’t think that’s the vision anyone who is passionate about AI and Computer Science imagines. If we are to use AI to move our society forward, to make real change instead of just making profit, we need to ask ‘how?’.
References
Clevenger, S. (2019, February 13). Self-driving truck startups TuSimple, Ike attract more investment to fuel development. Transport Topics. Retrieved from ttnews.com/articles/self-driving-truck-startups-tusimple-ike-attract-more-investment-fuel-development.
McGowan, M. (2017, November 10). More than 50% of those on secretive NSW police blacklist are Aboriginal. The Guardian. Retrieved from theguardian.com/australia-news/2017/nov/11/more-than-50-of-those-on-secretive-nsw-police-blacklist-are-aboriginal.
Miles Morales (Earth-1610) [online wiki page]. (n.d.). Marvel Database Fandom Wiki. Retrieved from https://marvel.fandom.com/wiki/Miles_Morales_(Earth-1610).
Murphy, F. (2017, November 17). Truck drivers like me will soon be replaced by automation. You’re next. The Guardian. Retrieved from theguardian.com/commentisfree/2017/nov/17/truck-drivers-automation-tesla-elon-musk.
[Online article]. (2019, February 24). Retrieved from pressreviewer.com/2019/02/24/the-leading-companies-competing-in-the-global-mining-truck-market-industry-forecast-2018-2022/.
Orenstein, W. (2019, February 1). Automated ‘platoons’ of trucks might soon be driving on Minnesota roads. MinnPost. Retrieved from minnpost.com/good-jobs/2019/02/automated-platoons-of-trucks-might-soon-be-driving-on-minnesota-roads/.
Rowe, A. (2018, August 30). The trucking industry’s future: go high tech or go home. Tech.Co. Retrieved from tech.co/news/trucking-industry-future-autonomous-drivers-vr-2018-08.
Speer, R. (2017, July 13). How to make a racist AI without really trying [Blog post]. Retrieved from blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai-without-really-trying/.
Welch, D., Coppola, G., & Dawson, C. (2019, February 24). Young CEO of electric vehicle startup Rivian has Amazon riding shotgun. Seattle Times. Retrieved from seattletimes.com/business/young-ceo-of-electric-vehicle-startup-rivian-has-amazon-riding-shotgun/.
1. Walker Orenstein, “Automated ‘platoons’ of trucks might soon be driving on Minnesota roads,” MinnPost, February 1, 2019 <minnpost.com/good-jobs/2019/02/automated-platoons-of-trucks-might-soon-be-driving-on-minnesota-roads/>.
2. Seth Clevenger, “Self-driving truck startups TuSimple, Ike attract more investment to fuel development,” Transport Topics, February 13, 2019 <ttnews.com/articles/self-driving-truck-startups-tusimple-ike-attract-more-investment-fuel-development>.
3. Adam Rowe, “The trucking industry’s future: go high tech or go home,” Tech.Co, August 30, 2018 <tech.co/news/trucking-industry-future-autonomous-drivers-vr-2018-08>.
4. David Welch, Gabrielle Coppola, & Chester Dawson, “Young CEO of electric vehicle startup Rivian has Amazon riding shotgun,” Seattle Times, February 24, 2019 <seattletimes.com/business/young-ceo-of-electric-vehicle-startup-rivian-has-amazon-riding-shotgun/>.
5. Finn Murphy, “Truck drivers like me will soon be replaced by automation. You’re next,” The Guardian, November 17, 2017 <theguardian.com/commentisfree/2017/nov/17/truck-drivers-automation-tesla-elon-musk>.
6. Michael McGowan, “More than 50% of those on secretive NSW police blacklist are Aboriginal,” The Guardian, November 10, 2017 <theguardian.com/australia-news/2017/nov/11/more-than-50-of-those-on-secretive-nsw-police-blacklist-are-aboriginal>.
7. “Miles Morales (Earth-1610),” <marvel.fandom.com/wiki/Miles_Morales_(Earth-1610)>.
8. Robyn Speer, “How to make a racist AI without really trying,” July 13, 2017 <blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai-without-really-trying>.