Artificial intelligence (AI) is steadily embedding itself into our lives. Whether it’s the dawn of autonomous vehicles, Alexa in our homes, facial recognition on our handheld devices, or simply the way our email accounts filter out spam, AI systems that act without human intervention are something we will have to get used to.
Understandably, though, the whole idea of AI is still difficult to fully grasp for many. You only have to watch the science fiction movie I, Robot – which sees increasingly conscious robots rising up to take power from humans in 2035 – to get an idea of the human apprehension towards handing over control to intelligent machines.
“The robots are going to take over!” people warn. The public’s lack of understanding of what AI actually is could also be a factor in how people feel toward it, which is why building the technology with transparency in mind is important.
“The public’s lack of understanding of what AI actually is could also be a factor in how people feel toward it”
Binary District Journal spoke with Spiros Margaris, senior advisor and founder of Margaris Ventures, and Isaac Bang, project lead at Mind AI, an artificial intelligence engine.
With AI capabilities advancing rapidly, there is an increasing need for people to understand how a system works and why it reaches the decisions it does, so that they can trust it. This is why it’s vital that AI is transparent.
Transparency in AI is critical for humanity, says Bang. For him, AI, specifically artificial general intelligence, will be the most powerful invention of our time. “Therefore,” he says, “transparency in AI is necessary in both the development and utilisation of AI tools that will be used to impact human lives, directly or indirectly.”
Margaris agrees, adding that there is an “absolute need” for all those involved to understand how and why a model reaches a certain decision.
“The general process of developing a ‘black box’ model stops when the model repeatedly has a high level of accuracy, but this doesn’t mean it’ll continue to be accurate forever,” he says.
“Very often, the bias in an AI system starts to show after a system has been put into production, and when the system isn’t transparent it’s impossible to understand what caused the bias and thus very difficult to correct it.”
It is these neural network-based AI systems – loosely modelled on the biological neural networks that make up the human brain – that are known as black boxes. In layman’s terms, this means that a neural network will provide an answer or approximation in response to an input, but studying its structure won’t provide any clear insight into why the system behaved the way it did.
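To see why such systems are called black boxes, here is a minimal sketch in Python. Everything in it – the network, its weights, the applicant features – is invented for illustration: a toy model scores a loan applicant, yet nothing in its parameters reads as a reason for the score.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A tiny two-layer network. In a real system these weights would come
# from training on historical data; here they are random stand-ins.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def approval_score(x):
    """Score an applicant's feature vector between 0 and 1."""
    hidden = np.tanh(W1 @ x + b1)  # intermediate activations
    return (1 / (1 + np.exp(-(W2 @ hidden + b2)))).item()  # sigmoid output

applicant = np.array([0.4, -1.2, 0.7, 0.1])  # hypothetical income, age, etc.
print(approval_score(applicant))  # a single number -- but *why* that number?

# Inspecting the weights yields no human-readable rule such as
# "reject if income below X"; the decision is smeared across every number.
print(W1)
```

An interpretable alternative, such as a short decision tree, would expose exactly that kind of rule – the catch being that the opaque model is often the more accurate one.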
“It’s scary to imagine the adoption of flawed AI systems being implemented in various aspects of our lives, such as law enforcement, healthcare, or financial services”
“Currently, when a black box AI system goes bad, the data scientists feed it more data in the hope of correcting it,” Margaris adds. “A developer who knows what their algorithms do, and the key data that helps an AI model produce an output, can spot and correct the problem at its source, rather than blindly and randomly pouring in more data.”
Bang agrees, claiming that most of the AI systems we see today are only as good as the data being fed to them. “We’ve seen time and again how human bias, skewed training data, and algorithmic flaws negatively impact the end result of various AI applications.
“It’s scary to imagine the adoption of flawed AI systems being implemented in various aspects of our lives, such as law enforcement, healthcare, or financial services. People might be unfairly classified as potential criminals or be given a higher insurance premium because of a flawed facial recognition system or a biased prediction algorithm.” One example of this bias was the AI recruitment tool employed by Amazon, which favoured male candidates over female ones, according to a 2018 report from Reuters.
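A small illustration of how such bias can hide behind a healthy headline accuracy: the sketch below, using entirely made-up post-deployment logs, compares positive-outcome rates across two demographic groups – the kind of monitoring that flags a problem, though only a transparent model lets you trace its cause.

```python
from collections import defaultdict

# Hypothetical production log: (group, model_decision) pairs.
# In practice these would be streamed from the live system.
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

# A large gap in positive-outcome rates between groups is a red flag
# worth investigating, even when overall accuracy looks fine.
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
```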
Can AI Developers Be Transparent?
Of course, that’s not to say that steps aren’t being taken to improve transparency in AI.
For instance, a report from VentureBeat last month found that Amazon was teaming up with the National Science Foundation (NSF) to reduce bias and tackle the issues of transparency and accountability in AI. To this end, the pair had committed up to $10 million in research grants over the next three years.
But is it possible for AI developers to be transparent? “Yes and no,” says Bang. While developers can be transparent about how their algorithms are written and what training data they use, he says, it’s unlikely to happen, because most AI software is proprietary business technology.
Margaris believes it to be less a matter of capability and more a matter of will. “Is there a will to make systems transparent, given that this means more development time and more research?” he asks. “The Silicon Valley dogma seems to say no, but I believe the evolution and penetration of AI in society and technology will make it a necessity.”
This raises a question: if businesses don’t know how their AI works, or how and why it reaches a certain decision, can they be justified in selling products that depend on its results?
In the 21st century, you’d think that companies would have moved on from selling magic potions to the public – that they would know full well how their systems work.
However, it appears that this is not always the case. According to a report from MMC Ventures, The State of AI: Divergence 2019, only 40% of businesses that claim to use AI really do. Equally surprisingly, of that 40%, a majority don’t know how their systems work.
“40% of businesses that claim to use AI really do… of that 40%, a majority don’t know how their systems work”
Consider a bank account that uses AI to track fraudulent activity, or a recommendation engine that selects TV programmes someone is likely to enjoy based on their preferences. In such cases, it probably doesn’t matter to people what’s happening behind the scenes, as long as the AI performs accurately and doesn’t harm anyone in the process.
In other cases, though, such as when human welfare is on the line, it becomes a bit more complicated. “Hypothetically, let’s say that there is an AI system that can take personalised health data of individuals and create a personalised medication and treatment plan for patients with a specific disease,” says Bang. “This AI is so effective that 95% of the time the patients are fully healed, but 5% of the time inaccuracies lead to patient death.”
In this example, if businesses aren’t able to understand what’s happening inside the black box, then one could argue that they have no justification for selling the system, since it can lead to the loss of human life.
“However, others might argue that even if we have no idea what’s happening under the hood, a 95% accuracy is worth the risk,” Bang continues. For him, there is no right or wrong answer – it simply becomes a question of ethics.
What Does Regulation Say About It?
At present, it is thought that the only regulation that directly affects transparency in AI is the EU’s General Data Protection Regulation (GDPR).
According to the legislation, if an individual’s data is used in automated profiling or decision making – for example, to accept or reject a loan application – that individual may request information about the input data relating to them, along with the parameters of the algorithm used to make the decision.
GDPR does not, however, mandate full transparency around the actual algorithm or source code of an AI system. As both Bang and Margaris point out, leaders from around the world are coming together to discuss the future of the technology, brainstorming guidelines, possible regulations and other measures to ensure that AI is used ethically.
“One prominent example is the UN ITU’s AI for Good Global Summit, a United Nations platform for global and inclusive dialogue on AI,” Bang says. “In addition, the European Commission has just published a set of guidelines for ethical AI development that includes transparency as one of its key points.”
Improving Dialogue Between Developers and People
Although AI has been around in some form or another for decades, not many people realise that they use it on a daily basis. As a result, the average person may still think of AI as that clichéd robot plotting to take over the world. It also doesn’t help, Bang notes, that every other week people read a new article about AI and robots replacing human workers in the near future.
“While it is important for developers to create a better dialogue with people regarding AI and its benefits, it’s even more important to show people real-world applications that impact their everyday lives,” Bang says. “As AI tools are integrated into everyday life, people will begin to rely on the technology and appreciate it, rather than fear it.”
Not All AI Is Equal
In Bang’s opinion, the question of what AI actually is goes much deeper than most people realise. Decades ago, when the term was first coined, even compiling a program was considered an AI problem.
Fast-forward to today and there are systems that can drive cars, paint artworks (one of which sold for half a million dollars), and even debate humans on complex topics. But even though these systems are considered artificial intelligence, are they actually intelligent?
“The systems do not understand what they are doing. A self-driving car system doesn’t actually understand that it is driving a car”
“The ‘narrow’ field of AI, which refers to systems that can perform extremely well at one job, gives examples of superhuman abilities,” he says. “We’ve seen these narrow AI systems beat grandmasters at Go, and outperform human doctors in detecting tumours from medical scans.”
At the same time, however, these same systems can fail in surprising ways – misidentifying objects in images, for example, after a few small adjustments that wouldn’t fool the human eye.
“The reason for the failure is because the systems do not understand what they are doing,” Bang concludes.
“A self-driving car system doesn’t actually understand that it is driving a car. But in tackling understanding, we must first answer the question: what is understanding? And perhaps this is partially where transparency matters. Only when a system can be explained might we understand understanding itself.”
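The kind of failure Bang describes can be reproduced in a few lines of code. Below is a minimal sketch of the fast gradient sign method (FGSM), one standard way of generating such adversarial inputs – the untrained stand-in model, random ‘image’ and label are placeholders, so the prediction flip here is illustrative rather than guaranteed.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# A stand-in "image classifier": one linear layer over a flattened image.
# Real systems are far larger, but the attack works the same way.
model = torch.nn.Linear(28 * 28, 10)

image = torch.rand(1, 28 * 28)  # a placeholder input image
label = torch.tensor([3])       # its (assumed) true class

# FGSM: nudge each pixel slightly in the direction that most
# increases the model's loss on the true label.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.1  # a change small enough to be near-invisible to a person
adversarial = image + epsilon * image.grad.sign()

print(model(image).argmax().item())        # prediction on the original
print(model(adversarial).argmax().item())  # often flips to another class
```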
The important thing for anyone worried about a seemingly impending AI apocalypse to remember is that we are still some distance away from truly intelligent machines. Yes, AI capabilities are becoming powerful and impressive – in driving cars, solving equations, playing board games and drawing conclusions from data, artificial intelligence is already arguably superior to humans.
This does not mean these systems are ready to rise up and displace us, though. A machine being able to beat the world’s best player at Go does not mean it has the capability to think for itself or achieve anything close to sentience. Driving cars essentially comes down to a series of sensors and deep programming, not an innate understanding of what it means to drive and be safe while doing so.
Computers go wrong when fed bad data – as in the case of Microsoft’s failed Tay bot – but that does not make them ‘bad’ in themselves. They are simply reflecting the data fed to them. Ability is not intelligence, and anyone concerned that the singularity is just around the corner is getting ahead of themselves.
Illustrations by Kseniya Forbender
To contact the editor responsible for this story: Margarita Khartanovich at Binary District Journal.