Microsoft exec: AI can be risky due to biased, inaccurate data

➤ Artificial intelligence tools can be risky depending on the accuracy of the data they are fed.

➤ Businesses using AI will need to be prepared to explain inconsistencies to regulators.

➤ Because of their decision-making abilities, humans will always be needed when AI tools are used.

Microsoft Research Asia, the research arm of Microsoft Corp. in the Asia Pacific region, was founded in 1998 in Beijing. It employs more than 200 scientists, hosts some 300 visiting scholars and students, and has made artificial intelligence one of its main focus areas.

However, applying AI can be risky, Jason Tsao, Greater China AI and Area Transformation Lead at Microsoft, tells S&P Global Market Intelligence. Companies need to make sure their AI technology can be explained to regulators in the event of an inconsistency, he adds.

The following is an edited conversation with Jason Tsao.

S&P Global Market Intelligence: What is Microsoft Research working on in the AI field?


Jason Tsao, Greater China AI and Area Transformation Lead, Microsoft

Jason Tsao: Microsoft Research, where many leading researchers at Chinese AI companies, including SenseTime Group Ltd. and Alibaba Group Holding Ltd., came from, has been working on simulating human capabilities in language processing and image recognition. However, there are still risks related to what type of data is used and how accurate it is. For example, if a listed company used AI to produce a financial statement, U.S. regulators could challenge the company if anything went wrong. The company would need to explain how the AI system produced the wrong numbers. This is a technological challenge for us.

When is data biased?

The data used to train machines can be biased. The New York Times reported earlier that some AI systems recognize Caucasian males better than African American females. This is mainly because researchers tend to use more Caucasian and more male data to train the machines. Another problem is that researchers may not be completely honest with users when deploying these technologies.

When is data inaccurate?

It can be risky when data is gathered in the background by facial recognition technology, as in airports, and the technology mistakenly assigns someone the wrong identity. That person will not know. Microsoft Research is working on this issue.

How does Microsoft Research solve problems like this?

We have created an internal committee to help us address AI-related ethical problems such as privacy and security.

Do you think AI will replace humans in the future?

AI cannot replace us in the foreseeable future. We have AI tools that produce summaries of research papers for developers, for example. However, it is people who pick the papers they think are most important, and it is humans who analyze and make decisions.