The bots are not as fair-minded as they seem


Artificial intelligence (AI) technologies are designed to replicate human capabilities, and in some cases improve upon them. Lifelike robots are physical examples of AI technology, but it is digital AI systems that already have a ubiquitous influence on our daily lives – from facial recognition software to decision-making tools used by banks, recruiters and the police. Too often, these systems reflect pre-existing social inequalities.

In this episode of the Physics World Stories podcast, Andrew Glester investigates the ethical issues that can plague AI and machine-learning technologies. He finds out about the concepts of deep learning and neural networks, why these systems can amplify problems in society, and who is adversely affected by these flaws.

It turns out that the physics community is part of the problem and potentially part of the solution. Directly and indirectly, physicists are involved in developing AI technology, so they are ideally placed to raise awareness of the issues. Featured in the episode:

  • Alan Winfield, a robot ethics researcher at the University of the West of England
  • Julianna Photopoulos, a science writer based in Bristol, UK
  • Savannah Thais, an experimental particle physicist at Princeton University, US

To find out more about the issue of bias in AI systems, take a look at this feature article by Photopoulos, which is also summarised in an accompanying video.
