"It's not her fault!" Trusting Alexa, Not Amazon

Date Published: 06.12.2022

Separating Alexa from its creator company through anthropomorphism.

Voice assistants: they’re small, affordable, and ready to answer at your beck and call. The most famous of them, Alexa, comes with Amazon’s Echo and Dot devices. Alexa will play music and fetch trivia from the web, all in response to voice commands. Unsure what to wear? Ask “Alexa, what’s the weather?” and you’ll know whether it’s a jumper day, without a single screen tap or swipe.

While Alexa isn’t the only voice assistant on the market, Prof Ekaterina Hertog and Ms Elizabeth Fetterolf picked Amazon’s product for their scientific scrutiny. “They’ve been really active and pushy”, they said, describing Amazon’s promotion of Alexa. Being pushy has yielded results: 200 million Alexa devices have been sold worldwide.¹ But it’s not Alexa herself that Elizabeth and Ekaterina are studying; it’s our relationship with her (or with it).

"They've been really active and pushy."

Large technology companies like Amazon are widely distrusted. Lawsuits centring on Alexa “listening”, reports of employees secretly transcribing recordings, and discrepancies between Amazon’s stated policies and its actual practices all suggest the distrust is well-founded. But if distrust of Amazon is growing, why are more and more people bringing Alexa – a device that gathers data as it’s being used – into their homes?

This contrast between distrust of companies and a willingness to give them personal data has been called “the privacy paradox”. Ekaterina and Elizabeth want to understand how Alexa users cope with this contrast. How do they juggle their embrace of Alexa with their distrust of Amazon? To find out, they interviewed 16 Alexa users in depth, then scoured their responses for patterns. What they found offers new explanations of the privacy paradox.

The Privacy Paradox

There are three main theories about the privacy paradox. The first is that users of this tech just aren’t concerned about data privacy, or don’t understand enough about it to appreciate the risks they face. But Ekaterina and Elizabeth’s participants weren’t nearly so blithe. “Many participants spoke with absolute certainty of Amazon 'listening' through Alexa,” they shared. Participants displayed a “general pessimism” around the safety of their personal data.

The second is that users appreciate the risks but balance them against the good things they gain from the tech, deciding the good outweighs the bad. For example: so what if Alexa gathers my data, so long as I get hands-free weather updates? But there was very little evidence from the interviews that participants were thinking like this.

"It's owned, bought, sold, traded, bid for."

The third is that users feel “digital resignation” – a kind of hopelessness in which privacy concerns are acknowledged but the risks feel unavoidable. This was certainly present in the interviews, even if it wasn’t the whole story. All participants but one disapproved of Amazon collecting data but saw no other option. “It’s everywhere,” one participant said of data collection, “it’s owned, bought, sold, traded, bid for. That seems very obvious.” Many participants felt their use of smartphones already put them at risk.

New Findings

But some findings went beyond theories of ignorance, trade-offs, or resignation. One psychological mechanism at work seems to be the separation of Alexa from its creator company through anthropomorphism. “All participants anthropomorphized Alexa to some extent”, the team reported. This included using gendered pronouns and attributing human traits, actions, and intentions to Alexa. As one participant put it, “she’s snooty.” Not only was Alexa humanised; participants seemed to have a “relationship” with her. “I secretly love her,” one said, “she’ll say, ‘have a good day’… sometimes I will say, ‘you too, Alexa’.” Of course, Amazon’s own marketing of Alexa’s human, relatable, and mostly feminine persona reinforces this.

"...we hate Jeff Bezos, but we separate Alexa from Jeff Bezos."

Participants also used mechanical, impersonal language for Alexa, but there was a split: anthropomorphisms were mostly invoked when speaking positively of Alexa, while mechanical language was used when expressing concerns over privacy. In the latter case, Alexa was often framed as the unwitting tool of the true nefarious agent, Amazon. Rather than Alexa listening in, “they” – Amazon – were doing the listening through her. “… [I]n this household we hate Jeff Bezos, but we separate Alexa from Jeff Bezos”, one participant explained. “It’s not her fault.”

So what explains the privacy paradox? Ekaterina and Elizabeth’s research suggests that, in the case of Alexa, users can create just enough psychological distance between “her” and Amazon to trust one and not the other. By separating the two, users avoid the psychological tension of trusting and distrusting the same thing: Alexa and Amazon are no longer seen as one and the same. Of course, digital resignation plays a role too. And in some circumstances, users are willing to put boundaries on Alexa – some, for example, don’t want her listening in the bathroom.

What none of these strategies does, however, is pressure Amazon to actually become trustworthy. As Ekaterina and Elizabeth put it, “they will not provide meaningful answers to the problems of surveillance capitalism; rather, they will simply continue to facilitate Amazon’s rapid and undeterred expansion into intimate life.” In the duo’s view, solutions will likely come not at the individual level but at the level of the data ecosystem itself.

Endnotes

¹ Rubin BF (2020) Amazon sees Alexa devices more than double in just one year. CNET, 6 January. Available at: https://www.cnet.com/home/smart-home/amazon-sees-alexa-devices-more-than-double-in-just-one-year/ (accessed 13 November 2022).

More information

Prof Ekaterina Hertog is Associate Professor in AI and Society at the Oxford Internet Institute and the Institute for Ethics in AI, and a Fellow at Wadham College.

Elizabeth Fetterolf holds an MSc in Sociology from the University of Oxford and currently serves as the Research Assistant for the Social Media Collective at Microsoft Research.

This article is based on their research paper, available in full at the link below:
