
IBM Research launches explainable AI toolkit

IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making.

The launch follows IBM’s release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models.

IBM is sharing its latest toolkit in order to increase trust and verification of artificial intelligence and help businesses that must comply with regulations to use AI, IBM Research fellow and responsible AI lead Saska Mojsilovic told VentureBeat in a phone interview.

“That’s fundamentally important, because we know people in organizations will not use or deploy AI technologies unless they really trust their decisions. And because we create infrastructure for a good part of this world, it is fundamentally important for us — not because of our own internal deployments of AI or products that we might have in this space, but it’s fundamentally important to create these capabilities because our clients and the world will leverage them,” she said.

The toolkit is also being shared, Mojsilovic said, because industry progress on the creation of trustworthy AI has been “painfully slow.”

AI Explainability 360 draws on algorithms and papers from IBM Research group members. Source materials include “TED: Teaching AI to Explain Its Decisions,” a paper accepted for publication at the AAAI/ACM Conference on AI, Ethics, and Society, as well as the often-cited “Towards Robust Interpretability with Self-Explaining Neural Networks,” accepted for publication at NeurIPS 2018.

The toolkit draws on a number of different approaches to explaining outcomes. One is contrastive explanation, a method that explains a model's decision by identifying important information that is missing from the input.
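To illustrate the idea behind contrastive explanation, here is a minimal, self-contained sketch: given a toy rule-based classifier, it searches for a "pertinent negative," a feature absent from the input whose addition would flip the prediction. The feature names and the rule-based model are hypothetical illustrations, not the AI Explainability 360 API.

```python
def classify(features):
    """Toy loan model (illustrative only): approve when both
    high income and a credit history are present."""
    return "approved" if {"high_income", "credit_history"} <= features else "denied"

def pertinent_negative(features, all_features):
    """Return an absent feature whose addition changes the prediction,
    or None if no single missing feature flips the decision."""
    baseline = classify(features)
    for f in sorted(all_features - features):
        if classify(features | {f}) != baseline:
            return f
    return None

applicant = {"high_income"}
universe = {"high_income", "credit_history", "has_dependents"}
print(classify(applicant))                      # denied
print(pertinent_negative(applicant, universe))  # credit_history
```

The answer to "what is missing?" (here, a credit history) is often more actionable for the person affected than a list of feature weights, which is what motivates this style of explanation.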

