Monday, April 19
News Aggregation Center | Social Listening Post
Kritik

AI is hurting people of color and the poor. Experts want to fix that

Krit@@dm1N kritik
3 years ago
318 views

And so it is with artificial intelligence, which could fundamentally change the world while contributing to greater racial bias and exclusion.

Much of the focus on any downsides of artificial intelligence has been on things like crashing self-driving cars and the rise of machines that kill. Or, as CNN commentator Van Jones put it at a discussion on the topic last week, “What about Terminator?”

But many of the researchers behind this technology say it could pose a greater threat to society by adversely impacting the poor, the disenfranchised, and people of color.

“Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict,” said Fei-Fei Li, the director of the Stanford Artificial Intelligence Lab. “AI is no exception.”

These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products. And it’s the reason self-driving cars can drive themselves.

One part of AI is machine learning, in which a system analyzes massive amounts of data to make decisions and recognize patterns on its own. And that data must be carefully considered so that it doesn’t reflect or contribute to existing biases.
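
The “garbage in, garbage out” dynamic described here can be sketched with a toy example (the groups, numbers, and loan-approval setting below are invented for illustration, not from the article): a minimal “model” that learns the most common outcome per group simply reproduces whatever skew the historical data contains.

```python
from collections import defaultdict

# Hypothetical historical loan decisions for otherwise identical applicants:
# group A was approved 90% of the time, group B only 30%.
training_data = (
    [("A", 1)] * 9 + [("A", 0)] * 1 +
    [("B", 1)] * 3 + [("B", 0)] * 7
)

def train_majority(data):
    """Learn the most common label per group -- a deliberately minimal 'model'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [count of 0s, count of 1s]
    for group, label in data:
        counts[group][label] += 1
    return {g: (1 if c[1] > c[0] else 0) for g, c in counts.items()}

model = train_majority(training_data)
# The learned rule mirrors the historical skew: approve A, deny B.
print(model)
```

Real machine-learning models are far more complex, but the failure mode is the same: nothing in the training procedure distinguishes a legitimate pattern from an inherited bias.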

“In AI development, we say garbage in, garbage out,” Li said. “If our data we’re starting with is biased, our decision coming out of it is biased.”

We’ve already seen examples of this. A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans.
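
Findings like Buolamwini’s come from disaggregated evaluation: measuring accuracy separately per subgroup rather than only overall. A short sketch (the subgroup names echo her study, but the numbers below are hypothetical) shows how a healthy-looking aggregate can hide a badly served subgroup:

```python
def accuracy(pairs):
    """Fraction of (predicted, true) pairs that match."""
    return sum(p == t for p, t in pairs) / len(pairs)

# Hypothetical (predicted, true) pairs per subgroup for some classifier.
results = {
    "lighter-skinned men":  [(1, 1)] * 99 + [(0, 1)] * 1,   # 99% correct
    "darker-skinned women": [(1, 1)] * 65 + [(0, 1)] * 35,  # 65% correct
}

overall = accuracy([pair for group in results.values() for pair in group])
per_group = {name: accuracy(pairs) for name, pairs in results.items()}

print(overall)    # looks acceptable in aggregate
print(per_group)  # reveals the disparity
```

Reporting only the aggregate number is exactly how such gaps go unnoticed.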

Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring.

Many of those who gathered at last week’s discussion, “AI Summit – Designing a Future for All,” said new industry standards, a code of conduct, greater diversity among the engineers and computer scientists developing AI, and even regulation would go a long way toward minimizing these biases.

Technical approaches can help, too. The Fairness Tool, developed by Accenture, scours data sets to find any biases and correct problematic models.

“One naive way people were thinking about removing bias in algorithms is just, ‘Oh, I don’t include gender in my models, it’s fine. I don’t include age. I don’t include race,'” said Rumman Chowdhury, who helped develop the tool. But biases aren’t created solely by feeding a facial recognition algorithm a diet of white faces.

“Every social scientist knows that variables are interrelated,” she said. “In the US for example, zip code [is] highly related to income, highly related to race. Profession [is] highly related to gender. Whether or not that’s the world you want to be in, that is the world we are in.”
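
Chowdhury’s point about proxy variables can be demonstrated concretely (the zip codes and group labels below are invented for illustration): even after the protected attribute is dropped from a model, a correlated feature left in the data can reconstruct it almost exactly.

```python
from collections import Counter

# Hypothetical records of (zip_code, group): in this toy town,
# each zip code is 90% one group.
records = (
    [("90001", "X")] * 9 + [("90001", "Y")] * 1 +
    [("90210", "Y")] * 9 + [("90210", "X")] * 1
)

def predict_group_from_zip(records):
    """Guess each record's group as the majority group of its zip code."""
    by_zip = {}
    for zip_code, group in records:
        by_zip.setdefault(zip_code, Counter())[group] += 1
    return {z: c.most_common(1)[0][0] for z, c in by_zip.items()}

guess = predict_group_from_zip(records)
recovered = sum(guess[z] == g for z, g in records) / len(records)
# Zip code alone recovers the "excluded" attribute for 90% of records.
print(recovered)
```

This is why simply deleting the sensitive column does not produce a fair model: the model can still learn the same distinctions through the proxy.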

Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking will go a long way toward addressing the problem, too. This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used.

“We need technologists who understand history, who understand economics, who are in conversations with philosophers,” said Marina Gorbis, executive director of the Institute for the Future. “We need to have this conversation because our technologists are no longer just developing apps, they’re developing political and economic systems.”

Those conversations, she said, are essential to ensuring AI does more good than harm.

Source: https://money.cnn.com/2018/07/23/technology/ai-bias-future/index.html

Categories: Science and Technology, Social Issues
    Kritik © 2017-2019 ZOHL Web Services | All Rights Reserved.
    A Division of ZOHL Industries Sdn Bhd (351827-A)