Language and Societies

ANT/LIN 5320 at Wayne State University

Arriving at a Cultural Model of Artificial Intelligence

Mark Jazayeri

As computer programs’ ability to process natural language improves and the technical hurdles of language processing are solved, we are seeing more examples of machines that can communicate with language. This expansion can be noted in all areas of our culture, from making phone calls by speaking to a car, to making choices in automated phone systems, to performing a web search by speaking to our phones. Products marketed as “socially intelligent agents” (SIAs) are becoming increasingly ubiquitous as large companies such as Apple and Google battle for the mass consumer market. After proposing a common definition of intelligence to start from, I build on ideas from Luck, d’Inverno, and Dautenhahn to describe what an SIA is. I then discuss the current state and ideology of artificial intelligence (AI) in the mass media to help arrive at what would be expected from an SIA in terms of domain of expertise and discourse. This in turn drove the creation of a cultural model of artificial intelligence, one key element of which is the issue of “believability”: an SIA does not have to be human-like in intelligence; it just needs to provide enough social cues and act believably intelligent. This model was then validated through discourse analysis of conversations with Apple’s particular offering of an agent, Siri. It was found that when the discourse fit well within this cultural model of AI, the human user categorized Siri as a believable SIA; when the conversation deviated from the model, Siri was categorized as just an object. Ultimately, it was found that an AI is expected to be intelligent in its domain, to communicate clearly, and to function with minimal supervision. It is not expected to be perfect, but it should be graceful and polite in failure. It is also expected to try to interpret our needs and react to them as our agent.

Keywords: artificial intelligence, social agents, autonomous agents, discourse analysis, cultural models

April 6, 2015 - Posted by | abstract

2 Comments

  1. This sounds very interesting! Are you exploring the uncanny valley hypothesis (where the lifelike threshold is crossed, usually in robots or dolls, and they stop being endearing and become frightening to humans) as it relates to social agents? Is there such a thing as too lifelike for agents like Siri?

    Comment by Jaroslava Pallas | April 22, 2015

  2. These products are amazing and make our lives easier. I am just curious whether there are any side effects or repercussions in the long run from depending so heavily on these technologies, such as memory problems, slower cognition, etc. Also, I don’t know if there is any study examining the relation between the widespread adoption of SIA products that can understand human language and the rate of unemployment.

    Comment by Suha | April 26, 2015

