Arriving at a Cultural Model of Artificial Intelligence
Mark Jazayeri
As computer programs’ ability to process natural language improves and the technical hurdles of language processing are solved, we are seeing more examples of machines that can communicate through language. This expansion can be seen across our culture, from placing phone calls by speaking to a car, to making choices in automated phone systems, to performing a web search by speaking to our phones. Products marketed as “socially intelligent agents” (SIAs) are becoming increasingly ubiquitous as large companies such as Apple and Google battle for the mass consumer market. After proposing a common definition of intelligence as a starting point, I build on ideas from Luck, d’Inverno, and Dautenhahn to describe what an SIA is. I then discuss the current state and ideology of artificial intelligence (AI) in the mass media to establish what is expected of an SIA in terms of domain of expertise and discourse. These expectations drove the creation of a cultural model of artificial intelligence, one key element of which is the issue of “believability”: an SIA does not have to actually possess human-like intelligence; it only needs to provide enough social cues to act believably intelligent. This model was then validated through discourse analysis of conversations with Apple’s agent, Siri. When the discourse fit well within this cultural model of AI, the human user categorized Siri as a believable SIA; when the conversation deviated from the model, Siri was categorized as merely an object. Ultimately, an AI is expected to be intelligent in its domain, to communicate clearly, and to function with minimal supervision. It is not expected to be perfect, but it should be graceful and polite in failure, and it is expected to interpret our needs and act on them as our agent.
Keywords: artificial intelligence, social agents, autonomous agents, discourse analysis, cultural models
2 Comments »
This sounds very interesting! Are you exploring the uncanny valley hypothesis (where, once the lifelike threshold is crossed, usually in robots or dolls, they stop being endearing and become frightening to humans) as it relates to social agents? Is there such a thing as too lifelike for agents like Siri?
These products are amazing and make our lives easier. I am just curious whether there are any side effects or repercussions in the long run from depending so heavily on these technologies, such as memory problems or slower cognition. Also, I don’t know whether any study has been conducted to examine the relation between the widespread adoption of SIA products that can understand human language and the rate of unemployment.