In programming new artificial intelligence software, diversity among developers translates into better business outcomes, according to AI experts speaking at this year's CES, the Consumer Technology Association's annual technology and media tradeshow.
That is because without diverse views in development, the technology only learns how to think from a more limited point of view. This can have serious — and sometimes expensive — repercussions as unexpected problems are discovered after the software rolls out, the experts noted.
"The most important thing we all need to remember is that inclusive inputs lead to inclusive outputs," said Annie Jean-Baptiste, head of production inclusion at Google LLC. "It's really important to have perspectives that have been historically underrepresented or at the margins of the creation of AI and make sure we have inputs that actually reflect the diversity of our world today."
Kimberly Sterling, a senior director at medical device and digital health company ResMed, pointed to lessons learned during the pandemic about challenges for diverse populations in healthcare as one example of why more diversity is needed in a number of fields. This is especially true for software developers who are trying to make accurate AI models, she said.
"We are on the precipice of this amazing technology in AI, but if we don't have diverse data sets based on diverse populations really reflective on the heterogeneity in the world, we end up with bad prediction models," Sterling said.
Participants speaking during the panel session on gender and racial bias in AI at CES 2021.
Taniya Mishra, founder and CEO of AI company SureStart, noted that when she first worked on voice recognition technology in the early 2000s, the most commonly available data sets were samples of "newsreader speech," comprising Caucasians speaking in standard American accents. Because so few other accents were represented in the data, the AI struggled to decipher anyone who did not speak in that very specific way.
"Although data sets in voice recognition systems have made significant strides in the last decade, the issues of not being understood to persist," Mishra said, citing voices of children and the elderly as two common examples.
When asked by the moderator to highlight examples of ways to integrate inclusivity within AI, Jean-Baptiste spoke about the effort that Google put into making sure its AI-powered Google Assistant did not respond to user queries with harmful or biased language.
"We knew there were potential groups that weren't reflected on the product design team that needed to have a seat at the table — thinking about race, gender, ability, age, socio-economic status and sexual orientation," Jean-Baptiste said.
To remedy this, Google brought in employees from underrepresented backgrounds to run the AI through adversarial testing, essentially trying to break the product before it launched, Jean-Baptiste explained. In doing this, the team not only uncovered negative things they did not want the AI to say, but also proactively added positive cultural references. As a result, the final product received very few inquiries that the company had to act on, Jean-Baptiste said.
"This was a testament to building for everyone and with everyone," she said. "It's important to have as many voices as possible because inclusion really fuels innovation, and if we want to see the best of the best in terms of technology we need to make sure that voices that have been at the margins are really included."
Other coverage from CES 2021:
CES 2021: TV makers turn to 8K, super-sized panels
CES 2021: Samsung unveils new AI-enabled smart home products
CES 2021: Survey predicts consumer tech spending will spike to $461B in 2021
CES 2021: GM says auto sector at 'inflection point' toward zero-emission future
CES 2021: Tech execs see state, global initiatives driving US privacy law
CES 2021: Consumers at the wheel for future of streaming
CES 2021: Best Buy stores will play 'massive role' in fulfillment, CEO says