By Guru Banavar
The early days of artificial intelligence (AI) have been met with public hand-wringing. Well-respected technologists and business leaders have voiced their concerns over whether AI will be developed responsibly. And Hollywood’s appetite for dystopian AI narratives appears to be bottomless.
We’ve never known a technology with more potential to benefit society than artificial intelligence. We now have AI systems that learn from vast amounts of complex information and turn it all into actionable insight. It’s not unreasonable to expect that within this growing body of digital data—2.5 exabytes every day—lie the secrets to defeating cancer, reversing climate change or managing the global economy.
However, if we are ever to reap the full spectrum of societal and industrial benefits from artificial intelligence, we first need to trust it.
We trust things that behave as we expect them to. Familiarity alone, however, will not solve the problem of trusting artificial intelligence. AI systems must be built to operate in trust-based partnerships with people.
Our most urgent work is to recognize and minimize bias, which can be introduced into an AI system through its training data. The curated data used to train AI systems can carry inherent biases, for example, toward a specific demographic, either because the data itself is skewed or because the human curators displayed bias in their choices.
Managing bias is an element of the larger issue of algorithmic accountability. That is, AI systems must be able to explain how and why they arrived at a particular conclusion so that a human can evaluate the system’s rationale.
In addition, AI systems can and should have mechanisms to insert a variety of ethical values appropriate to the context. This isn’t as difficult as it sounds. Ethical systems are built around rules, just like computer algorithms. These rules can be inserted during development, deployment or use.
It’s incumbent upon the developers of AI systems to address these concerns in a way that satisfies both the industry and the public. This is already well understood throughout the technology industry, which is why IBM is working with some of its fiercest competitors—including Google, Microsoft and Facebook—on the “Partnership on AI,” an open collaboration designed to guide the ethical development of artificial intelligence.
Guru Banavar is the chief science officer of cognitive computing at IBM.