IP-AI • FEBRUARY 18, 2020
What does the future look like for AI?
Merry Coleman received her BA (Hons) in English literature from the University of Winchester. She attended the March 2019 Indigenous Protocol and Artificial Intelligence workshops in Hawai’i. Here she explores the future of AI.
Artificial intelligence already surrounds much of what we do in our day-to-day lives. Self-service scanners in supermarkets were dismissed as ‘creepy futuristic machines’ when they were first introduced in the mid-noughties, yet they are now a much-appreciated convenience for shoppers, and asking Siri or Alexa rather than typing a question into Google has become second nature to many. Shaving a few seconds from one’s day has, in many cases, become preferable to maintaining our privacy: we willingly give our precise location and other personal details to companies such as Google, Facebook and Uber in the name of convenience.
We are already living in the future, in some respects, as much of our technological progress is now focussed on refining what we have already created—although perhaps this is a naïve view from someone who can’t picture how different the future may really look. Today’s world looks vastly different from the world of the 1990s, yet we still use much of the same underlying technology. Might it be the case that twenty years from now, artificial intelligence and technology are aesthetically very different while their function remains similar? Might we be using the same basic technology for brain surgery that we’ve used for years, while its accuracy and success rates dramatically improve?
In her TED talk on our emotional connection to robots,¹ Kate Darling asks why we, as humans, seem to feel emotion for certain technology as though it were alive. This matters when considering where the future of AI will take us, particularly as Darling raises the question of what happens when humans are unable to disconnect from technology emotionally. It may be that the more specialised and capable our technologies become, the less able we are to separate ourselves from them emotionally. Darling spoke specifically about robots used to clear minefields, some of which were even given funerals by soldiers when they were “killed”. In light of this week’s news that the Mars rover Opportunity, affectionately known as “Oppy”, has ‘died’, that point has hit home: we have seen first-hand the impact that an emotional connection with robots and technology can have.
But perhaps this is a good thing. Does it not show that humans are not so desensitised to violence and destruction after all, if we will mourn something that is not even alive? Darling’s talk highlights for me how in touch with our emotions we remain; in our capacity to care, we seem a long way off becoming robotic ourselves.

One of the great worries for coming generations is that an increasing demand for artificial intelligence will leave humans less reliant on human company, as the need to communicate with one another is stripped away by technology. Darling’s research suggests this is not the case, at least not yet: our capacity to empathise still outstrips the capabilities of the technology we have created. While the technology that exists today is capable of doing terrible things, most people seem to see improving on technology as largely positive progress. Yes, artificial intelligence is reducing our need to learn certain skills (think of having food delivered through our phones rather than learning to cook for ourselves), yet these same technologies can help us learn skills we might not otherwise have the opportunity to explore: devices such as Alexa and the Google Home hub can use the internet to create walkthrough instructions, letting people learn as they go.

I mentioned earlier that people are increasingly fond of convenience, and the progression of technology and artificial intelligence seems most appreciated when it adds a level of convenience to the user’s life, rather than taking that life over. Technology has practical uses for the disabled community in particular, from screen readers for accessing social media to the specialised treatment of disease.
Being able to harness new technologies to aid specific groups opens doors for creating a more accessible society for all.
Overall, the future of AI seems incredibly bright, with new technologies being produced on a near-constant basis. While popular culture increasingly prophesies that artificial intelligence will bring about the downfall of civilisation (dramatic, but perhaps not too hyperbolic), with the likes of Elon Musk cast as the comedy villains of our real-life superhero movie, we seem far from being taken over by a robot race. Artificial intelligence will inevitably become a much larger part of everyday life in the coming years, but it does not need to be “the escalator from hell” that Jack Clark, head of policy at OpenAI, worries their latest AI technology could become if released to the public. These concerns surrounding AI are not without reason, with privacy and data breaches front and centre of many recent news stories; for now, however, much of the technology seems to be used for public good, even if vast quantities of personal data are being stored by corporations.

It is difficult to say whether AI will ultimately have a wholly positive or negative impact on society, since so much of the technology is being created and worked with before it is fully understood. We are at a point in history where science is progressing at an incredibly fast pace, with new concepts being realised constantly, much as Gordon Moore predicted in the 1960s. Working with such technology means that, fundamentally, we are not fully equipped to deal with the full extent of its capabilities. The coming years are likely to bring massive change in how we interact with the world around us, and with one another, and may exact immense social change around the globe.
It is impossible to say whether Jack Clark’s concerns or Kate Darling’s optimism about the future of AI and technology will become reality, but given the rate of progress it seems sensible to accept that either is a distinct possibility for our future.
1. Kate Darling, “Why we have an emotional connection to robots,” TED, September 2018, <ted.com/talks/kate_darling_why_we_have_an_emotional_connection_to_robots>.
2. Alex Hern, “New AI fake text generator may be too dangerous to release, say creators,” The Guardian, February 14, 2019, <theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction>.
Merry Coleman received her BA (Hons) in English literature from the University of Winchester. She is an aspiring writer with a deep-rooted interest in anthropology and sociology, but a lesser grasp of AI and technology studies. Coleman hopes that being involved in this project will help her gain insight into a different area of academia - one she has observed from a young age through her family upbringing and its overlaps with her degree subjects.
The Indigenous Protocols and Artificial Intelligence (IP-AI) workshops were founded by Old Ways, New, and the Initiative for Indigenous Futures. This work is funded by the Canadian Institute for Advanced Research (CIFAR), Old Ways, New, the Social Sciences and Humanities Research Council (SSHRC) and the Concordia University Research Chair in Computational Media and the Indigenous Future Imaginary.