What will it take to remove sexism from AI?

Siri, Alexa, Amelia, Amy, and Cortana all have something in common. They may have been created by different companies, but they all share an identity: the female chatbot, or artificial intelligence.

These AI systems may have had an innocent origin—aimed at helping users answer questions or monitor their lives—but these chatbots have been increasingly targeted by X-rated inquiries. Despite the fact that they are artificial beings, their female voices and subservient nature have led many to believe that the AI industry suffers from a classic case of sexism.

But it’s not just in the use of these applications that sexism rears its ugly head. Like any other technology industry, the artificial intelligence industry is full of unnoticed bias introduced by its mostly male creators. And as the industry continues to grow, so do the problems.

How can sexism in the industry be countered and avoided? First by acknowledging the problems, then by changing the viewpoints that are brought to the table.

As much as the current AI workforce may try, its efforts to create a less problematic system will not succeed until women and others are given a voice at the table.

Let’s dive into the inherent sexism of AI and how it can adapt and improve moving forward.

The Many Shades of Sexism

Although AI chatbots are made out to be hyper-intelligent applications, their attitudes and inherent need to serve raise many questions among female developers and critics. Do they need to be gendered? Do they need to be portrayed by smiling, blonde, female faces?

Not all the sexism in AI is obvious. There are many ways that the AI industry has flourished through traditional gender roles and female submissiveness. Having an intelligent female assistant at your every beck and call brings to mind the image of female secretaries in Mad Men-esque environments.

“The stereotypically ladylike, deferential [sic] responses of so many virtual assistants reinforce society’s subconscious link between women and servitude,” said Kerry Davis, a writer for Engadget.

It is also worth mentioning the number of sexually charged messages these chatbots and AIs receive due to their apparent female identity. Davis included a list of the most common responses from these AIs when asked risqué questions.

“Ask it to marry you and Alexa will say, ‘Sorry, I’m not the marrying type’ or ‘let’s just be friends’ to date requests. If you ask Siri ‘Who’s your daddy?’ it will answer ‘You are…’ before asking to get back to work. Microsoft’s Cortana sassily replies, ‘Of all the questions you could have asked,’ to come-ons, something feminists will tell you makes the bot complacent in its harassment. Kai [a non-gendered bank AI], on the other hand, will tell users via text to stop bothering it or say it’s time to get back to banking.”

Yet the sexism comes from a field that is predominantly occupied by men. Women are rarely promoted to leadership positions in Silicon Valley, and the wide gender pay gap leaves many women struggling to leave a mark. On top of this, education is often skewed to benefit traditional male teaching styles.

“If those teaching computers to act like humans are only men, there is a strong likelihood that the resulting products will be gender biased,” said Dr. Ileana Stiglani, assistant professor of design and innovation at London’s Imperial College Business School.

A general lack of diversity behind the creation of these AIs has led to our modern problem.

Linear Teaching Leads to Linear Thought
The focus of AI research tends to lean heavily on algorithms and solving problems while overlooking the bigger picture. As is reflected in many classrooms, where STEM learning is crucial but often biased, there is a lack of focus on accessibility when creating AI systems or robots.

STEM and diversity often go hand in hand, as diverse voices help lead to greater innovative leaps. But when STEM teaching is angled only to benefit mathematically minded students, those who don’t share that mindset are often left out.

Instead, students who have brilliant ideas about real-world applications are turned off by the increased emphasis on programming.

Without a broader look at universal access and the benefit it could provide a community, the programs that teach and develop AI often push many women—who might focus on the community aspect of AI—to apply their talents in other fields.

Despite developers and educators now emphasizing women’s involvement in technology, there has been a dramatic drop-off in female technology students since 2000.

“In general, many women are driven by the desire to do work that benefits their communities […]” wrote Sarah Todd of Quartz. Men, she noted, tend to be more interested in questions about algorithms and mathematical properties.

Since men have come to dominate AI, Marie desJardins, a professor of computer science at the University of Maryland, Baltimore County, said “research has become very narrowly focused on solving technical problems and not on the big questions.”

When developers are forced to look outside their own experiences, they may be surprised at the impact their creations can have, whether positive or negative.

One example is the large, clunky AI-powered robots that are created without consideration for disabled people, children, or smaller women. The “human aspect” of interacting with these AI creations goes ignored outside of the (predominantly white) male perspective.

Diversity in AI
Yet there are plenty of real-world problems that AI could help alleviate. Not just in a personal, one-on-one scenario, but also on a broader, more cultural level.

Already, AI is being used in disaster response and management by analyzing tweets and other online social media channels to help determine where assistance is needed most in emergency situations.

After the 2015 Nepal earthquake, for example, researchers were able to see just how helpful their new AIDR (Artificial Intelligence for Disaster Response) platform really was after it successfully directed volunteers to areas that were tagged on social media as needing priority assistance.

However, despite the predominance of men in the industry, it is not completely devoid of female voices. There are women who have worked to create gentler factory-line robots that can communicate and learn from their human coworkers, and women who have created AI interfaces that focus on humor and fostering relationships.

But women’s voices aren’t the only ones that are scarce in the industry. Diversity in general—across race, physical ability, and more—is needed to help meet the needs of everyone.

If AI is truly to be successful, it needs to be considerate of everyone’s needs, and a diverse board of developers can make that happen. The only way to achieve that is through diverse collaboration, a change of emphasis in education, and room for innovation from different mindsets.

Will the image of AI change moving forward? Developers are beginning to notice the backlash against creating female-gendered bots, and determined female developers are beginning to step into the industry.

Non-gendered personalities and more real-world applications could be the future of AI, and luckily people of all backgrounds are willing to embrace the challenge.