Machine behaviour is old wine in new bottles


CORRESPONDENCE

Accenture, San Francisco, California, USA.

Accenture, San Francisco, California, USA.

Smart Information Flow Technologies, Boston, Massachusetts, USA.

University of Oxford, UK.

Google, Mountain View, California, USA.

The call of Iyad Rahwan and colleagues for a science of “machine behaviour” that empirically studies artificial intelligence (AI) “in the wild” (Nature 568, 477–486; 2019) is an example of ‘columbusing’. That is, what they claim to have discovered is, in fact, an existing field of study that has been producing vibrant, engaged research for decades. Cybernetics, the science of communications and automatic control systems in machines and living things, has been flourishing since the 1940s.

In our view, this prior art exposes serious ethical and scientific problems with the authors’ proposal. Studying AI agents as if they are animate moves responsibility for the behaviour of machines away from their designers, thereby undermining efforts to establish professional ethics codes for AI practitioners.

The authors’ idea that those who create machine-learning systems and study their behaviour cannot anticipate their “downstream societal effects” is false. Sociologists and anthropologists have long contributed to research on AI. For example, social scientists have described how AI can embed human intentions in material infrastructures (W. E. Bijker et al. (eds) The Social Construction of Technological Systems; 2012). Most such researchers would foresee the societal outcomes of AI agents.

Columbusing fails to give due credit. It rides roughshod over long-fought struggles to put the ethical implications of science and technology, including crucial issues such as inclusivity and diversity, at the centre of these disciplines. All too often, those struggles have been fought by women and people of colour, who laid much of the intellectual foundation that is now being overlooked.

Nature 574, 176 (2019)

doi: 10.1038/d41586-019-03002-8
