Microsoft Cortana Research: Could Negative Perceptions of AI Harm Its Development?
Artificial Intelligence will either transform our lives for the better or turn against us, take all our jobs and leave the world a wasteland. It depends on who you believe.
It may sound trivial, and those working in the technology may laugh at some of the unrealistic, paranoid fantasies portrayed in cinema, but public perception of technology matters. A positive impression is vital to driving investment and consumer demand, without which an emerging technology will likely shrivel and die.
Binary District Journal spoke to Mouni Reddy, a founding member of the Microsoft Cortana team. Since his work developing one of the most popular AI consumer products to date, Reddy has gone on to develop AI infrastructure for autonomous car manufacturer Faraday Future.
We spoke to him about the current public perception of AI, whether it was justified, and how AI developers could influence it for the better.
Mouni Reddy. Source: Microsoft
The Voice-Activated Onset of AI
AI is often portrayed through voice in sci-fi films – for example, ‘Mother’ from ‘Alien’, and ‘HAL’ from ‘2001: A Space Odyssey’. Both are sentient digital helpers, but, in the case of HAL, also a malignant force.
Voice assistants such as Alexa, Cortana, Siri and Google Home are also arguably the most recognisable examples of AI in our day-to-day life. High adoption of these suggests that portrayals of voice AI as cold and calculating haven’t soured popular opinion.
Reddy agrees, noting that their ubiquity is actually helping to improve the public’s perception of AI. “I think public perception of AI is positive, but it is definitely heavily inclined towards AI being Siri, Alexa or Google Home.”
“Even my peers who don’t work closely with AI, most of them think of AI as the voice system – there are so many other things you can do – self-driving cars, and so on”
“People are buying them by the six-pack – and they actually sell them that way now,” Reddy continues. “That has influenced in a positive way [the understanding of] what AI is and what it can do.”
Crucially, it’s not just those buying smart speakers that are reducing AI down to ‘voice assistant’ in the mind of the public. “Even my peers who are product managers who don’t work closely with AI, most of them think of AI as the voice system that we’re building,” Reddy says.
“But if you actually see what our larger team builds, the voice assistant is just 10% of it. There are so many other things you can do – self-driving cars, and so on. And the search results on your phone or PC – those algorithms are moving more and more towards AI, but people don't think about them that way.”
Natural Suspicion of the Unknown
Reddy identifies two camps when it comes to AI: “The general perception is either Skynet, or ‘Oh, Alexa’s very useful for me.’ But I do believe there’s a certain level of discomfort when it works too well – a fear of what it will turn into.”
Funnily enough, fears about AI’s future are influenced by concerns over security and privacy that existed long before the technology found its way into our homes. They focus on the human agencies that often draw suspicion from the general public.
“More people are afraid of the National Security Agency looking through their webcams... than AI taking over – people are blocking their webcams, but not unplugging their microphones”
“Currently, I doubt anybody is blocking or even unplugging their Alexa or Google Home just because they’ve read an article saying, ‘Alexa accidentally uploaded your voice data without consent’,” Reddy says. “If anything, more people are afraid of the National Security Agency looking through their webcams. I think that’s a bigger fear than AI taking over – people are blocking their webcams, but not unplugging their microphones.”
AI Taking Jobs
Potential job losses are one of the primary causes of public suspicion towards AI development.
Reddy acknowledges that there is palpable apprehension about AI superseding employees in a number of industries, refusing to say that AI will never be in a position to replace human workers. Rather, he recalled an AI Summit in San Francisco in September, at which Benedict Evans, a partner at Andreessen Horowitz, showed the audience a painting that depicted now-obsolete industry practices as a way of demonstrating the history of technological obsolescence.
“AI is a new technology, but it’s not the only technology that has replaced jobs. That’s progress”
“There are certain things you can’t stop,” Reddy said. “You have to learn to adapt – all those jobs were gone, but those people probably picked up new skills and created new kinds of jobs. We’re not still crying over all the jobs that were lost in the Industrial Revolution, when automation and machines came in.
“People have to realise that AI is a new technology, but it’s not the only technology that has replaced jobs. That’s progress – when you’re automating things that don’t require a lot of skill, you’re automatically forcing people to upgrade. There are certain things I think would make society better, like taking away certain mundane tasks.”
Will There Be a Backlash?
While the removal of mundane tasks for the general progression of society is a logical rationale, convincing those who are made redundant in the immediate aftermath may be a more difficult sell.
“I wouldn’t say there will be backlash,” Reddy said, “but in the end, some CFO is going to make a call on, ‘Should I have 5,000 people enter invoices manually or a machine that can quickly read receipts and enter them in half the time and with a lot more accuracy?’”
“One thing is very unique to AI is it can do this thing called ‘confidence-based escalation’”
Company executives will inevitably be sold on the benefits of AI automation. The execution of this infrastructural overhaul, though, needn’t end with employees coming into work and finding their desk replaced by an unfeeling server. In reality, the implementation of AI will be more of a gradual process that requires the oversight and even direct input of human workers, in a symbiotic relationship.
“One thing is very unique to AI is it can do this thing called ‘confidence-based escalation’,” Reddy explains. “Take a task like reading a receipt or undertaking a visual inspection – it looks at it, and if it’s not confident, if it knows ‘I need a higher order of thinking for this’, it can escalate it to the human, but then it can also learn that scenario.
“Overnight, they’re not going to replace us, but the number of times this escalation happens will reduce and, in the process, people will have to find opportunities where they can add value.”
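The escalation loop Reddy describes can be sketched in a few lines. This is a minimal, illustrative example, not code from any real product: the classifier, the confidence threshold, and the `ask_human` helper are all assumptions chosen to show the pattern of acting autonomously when confident, deferring to a human when not, and logging the human’s answer so the model can learn the scenario.

```python
# Sketch of "confidence-based escalation": the model handles inputs it is
# confident about and routes uncertain cases to a human reviewer, keeping
# the reviewer's answer as new training data. All names and the 0.9
# threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; tuned per task in practice

def classify_receipt(text):
    """Stand-in model: returns a (label, confidence) pair."""
    if "total" in text.lower():
        return "receipt", 0.95
    return "unknown", 0.40

def ask_human(text):
    """Stand-in for routing the item to a human reviewer."""
    return "receipt"  # the reviewer's decision

escalation_log = []  # (input, human_label) pairs kept for later retraining

def process(text):
    label, confidence = classify_receipt(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # model is confident enough to act alone
    # Escalate: a human decides, and the example is recorded so the
    # model can learn this scenario and escalate less often over time.
    human_label = ask_human(text)
    escalation_log.append((text, human_label))
    return human_label

print(process("Grocery store. Total: $42.10"))  # handled by the model
print(process("Blurry scan, no clear fields"))  # escalated to a human
```

The key property is the one Reddy highlights: as the log of escalated examples feeds back into training, the escalation rate falls, so the human’s role shifts gradually rather than overnight.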
The Role of Developers in Facilitating Change
The potential for job creation as AI technology is adopted in different industries may be there, but that doesn’t necessarily mean trust in AI will come easily. In many cases, forecasts for specific AI-integration scenarios are just that – forecasts. Aside from this, AI is such a diverse technology that the issue of public trust goes far beyond the fear of job obsolescence.
Smart speakers and virtual assistants may have captured a large market share, but their rapid incursion into homes around the world is fertile ground for consumer uncertainty regarding the extent to which they have to share personal information.
There have been calls for the developers of these AI platforms to consider the ways in which they communicate their work to the market – from the press to the general customer. Establishing a level of transparency and fostering open channels of communication, to assuage privacy concerns from the top down, may be key to maintaining public trust in AI.
“Cortana didn’t even take the email out of the device. When we explained to the press how we were doing that, we could see they felt more comfortable”
“This is a fair ask,” Reddy says. “I remember when we were at Microsoft and we came out with Cortana, Cortana would read your emails and give you suggestions about when your flight was leaving. We had to handle that very carefully, not only with our customers, but also with the press, and also give them some insight into how much we cared about the data.”
Unfortunately, there’s often assumed similarity when it comes to AI-related consumer products. Big developers such as Google and Microsoft may offer near-identical services in many instances, but the levels of user data they utilise non-locally may be entirely different.
“Cortana, for a very long time, didn’t even take the email out of the device, so that meant if you got an email in Outlook, the model was completely local and only ran on your inbox,” Reddy explains. “When we explained to the press how we were doing that, we could see they felt more comfortable. Google was the only one doing that at the time, the email parsing – everything was on the server, and then there were certain things where they did cross the line.”
Users Aren’t Perfect
It’s important to remember that the general public is not currently demanding transparency and accountability for data. At a base level, terms-and-conditions checkboxes and consent go ignored by the vast majority of people simply because users prize speed and convenience over privacy. This means there is a limit to how big an impact being transparent can have.
“For them, it was like, ‘Whatever – take my data’, even though there had been an attempt to explain”
Reddy tells us that his days working on Cortana involved trying to introduce easier access points for users to obtain further information on how their data was being used.
“Most people either didn’t read about what was going on – for them, it was like, ‘Whatever – take my data’, even though there had been an attempt to explain. I thought the press had heard, but not the general public, so, still today, I keep thinking back to that point and how we can do that better,” he says.
So, Can Further Regulation Help?
For many, legislative oversight will be one of the most important tools for instilling trust in AI. With his current work for Faraday Future, Reddy is in an ideal position to see the onset of smart, autonomous electric vehicles, and the legislation being brought in to govern their use. “I think the US government is actually taking a bold step – I’m really positive about how they’re dealing with this,” says Reddy.
Autonomous cars are one avenue of emerging tech being effectively legislated by the government, but that doesn’t translate to other areas, including aspects of AI. “I think they’re still trying to figure it out, in many cases,” he adds.
“In terms of regulation, it's required, particularly when it comes to who owns the customer’s data. Right now, it’s completely left to the companies”
“In terms of regulation, I do believe it's required, particularly when it comes to who owns the customer’s data. Right now, it’s completely left to the companies and some are taking a very strong ethical stance and even going beyond the regulation, but not all.
“I don’t know if it aligns with that American spirit of ‘Oh, don’t block innovation by really regulating’ – that’s a big thing for a lot of people, that’s fundamentally American, letting the free market decide what’s right. In my opinion, however, there should be some level of regulation.”
Reddy explains that he conducted an experiment on his own personal use of the Google ecosystem.
“I turned off all my data access to the Google ecosystem for a little bit, just to see how Google plays with me,” he says. “I pay for YouTube Red and I was a little tired of all the targeted advertisements – I was like ‘No, I don't want a new toothbrush!’ So, I thought ‘Let me get off the grid.’ If you open Google, Chrome, Google Home, every time you’re prompted to turn on all these tracking settings, like your personal data, and I thought, ‘OK, I’ll try to live without this.’
“Then I realised Google Home had completely stopped working – I couldn’t play music, I could ask some questions, but I couldn’t even play music if I didn’t give web-activity tracking. And it wasn’t for only one device – I had to turn it back on for all Google services. I wrote to them with some feedback, saying ‘I almost feel violated now – I pay you guys for other services, actually, so I feel like a paying customer.’”
These are the sorts of instances that will turn the public against AI. Google’s ecosystem is comprehensive – if users feel violated by it, that will affect their perspective on all AI projects. Google is already woven deeply into our everyday digital lives, and it’s easy to see how people might consider its services an overreach and direct that suspicion towards other developers’ products and services.
“I don’t think people have something to hide – it’s not about that,” Reddy says. “I don’t care if Sundar Pichai is reading all my emails, but I think the principle here is this: if I don’t wanna be targeted by advertisers, I don’t wanna be targeted by advertisers. The option offered is very unfair – it almost feels like a non-option.”
And the Solution?
As Reddy points out, the Cortana team sharing the details of the platform’s data use didn’t necessarily reach consumers, but the press did take notice. In reality, addressing the public at large and delivering a unified message that will permeate all markets around the world is fanciful. It’s only made more difficult when you consider the lack of interest many users have when it comes to developing more of an understanding of data use and AI.
Creating transparent dialogue between developers and the press may be a more tenable solution to the problem of maintaining public trust. Using the media as a conduit for the dissemination of insight will mean these announcements are delivered by mouthpieces with which the public is familiar, and may even trust.
There have already been problems caused by the fact that home assistants are always listening, but they are less dark than you might imagine. In January 2017, a six-year-old in the US was able to order a dollhouse worth $160 just by asking Alexa to do so. When the dollhouse and four pounds of sugar cookies arrived at the home, neither parent could explain why they had been ordered.
Days later, when the story was picked up by the news, one anchor on Californian television channel CW-6 said, “I love the little girl saying, ‘Alexa, order me a dollhouse’” when reporting the story. This allegedly alerted devices in homes across the country, with Alexa responding to the TV and users rushing to cancel orders before they were confirmed.
Amazon responded by saying that the devices may have been awoken, but an actual order is almost impossible without human confirmation. Even so, the story served as a reminder of the devices’ always-on eagerness.
Illustrations by Kseniya Forbender
To contact the editor responsible for this story:
Margarita Khartanovich at [email protected]