IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making.
The launch follows IBM’s release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models.
IBM is sharing its latest toolkit to increase trust in and verification of artificial intelligence, and to help businesses that must comply with regulations use AI, IBM Research fellow and responsible AI lead Saska Mojsilovic told VentureBeat in a phone interview.
“That’s fundamentally important, because we know people in organizations will not use or deploy AI technologies unless they really trust their decisions. And because we create infrastructure for a good part of this world, it is fundamentally important for us — not because of our own internal deployments of AI or products that we might have in this space, but it’s fundamentally important to create these capabilities because our clients and the world will leverage them,” she said.
The toolkit is also being shared, Mojsilovic said, because industry progress on the creation of trustworthy AI has been “painfully slow.”
AI Explainability 360 draws on algorithms and papers from IBM Research group members. Source materials include “TED: Teaching AI to Explain Its Decisions,” a paper accepted for publication at the AAAI/ACM Conference on AI, Ethics, and Society, as well as the often-cited “Towards Robust Interpretability with Self-Explaining Neural Networks,” accepted for publication at NeurIPS 2018.
The toolkit offers a number of different ways to explain outcomes, including contrastive explanations, a method that clarifies a model’s decision by pointing to important information that is missing from the input.
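The idea is easiest to see in miniature. The sketch below is a toy, brute-force illustration of a contrastive explanation, not AI Explainability 360’s actual implementation: it searches for a “pertinent negative,” a single absent feature whose presence would flip a classifier’s decision. The model, synthetic data, and feature names are all hypothetical stand-ins.

```python
# Toy sketch of a contrastive explanation (finding a "pertinent negative").
# Not the AIX360 implementation; the classifier and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary features, e.g. symptoms observed in a patient.
FEATURES = ["fever", "cough", "fatigue", "rash"]

# Train a toy classifier on synthetic data: the label depends on
# fever AND rash both being present.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))
y = (X[:, 0] & X[:, 3]).astype(int)
model = LogisticRegression().fit(X, y)

def pertinent_negative(x):
    """Return an absent feature whose addition would flip the prediction."""
    base = model.predict([x])[0]
    for i, present in enumerate(x):
        if present:
            continue  # only consider features missing from the input
        x_alt = x.copy()
        x_alt[i] = 1  # hypothetically add the missing feature
        if model.predict([x_alt])[0] != base:
            return FEATURES[i]
    return None

x = np.array([1, 1, 0, 0])  # fever and cough present; no fatigue or rash
print(pertinent_negative(x))  # prints "rash": its absence explains the negative call
```

The explanation here is contrastive in the sense that it answers “why this outcome rather than another?” by naming what would have to change, which is often more useful to a decision-maker than a list of feature weights.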