Top 5 Challenges of Artificial Intelligence


This article has been adapted from a chapter in the upcoming book, “Digital Strategy and Organizational Transformation” (World Scientific Publishing).

The artificial intelligence (AI) revolution is well underway, and it is transforming industry sectors and the global economy. But if you ask Alexa, Google, or Siri, they won't have answers to the ethical questions our society faces as a result of the rapid expansion of AI technologies.

Students in the Master’s in Strategic Digital Transformation program at Georgetown University study inclusiveness, fairness, privacy and security, transparency, reliability, and accountability in our required “Ethics of Digital Innovation” course. They graduate knowing that job losses, the redistribution of wealth, human-machine interaction, biased AI decisions, and false dilemmas are some of the biggest challenges our governments and social institutions must confront. Let’s explore each of these artificial intelligence challenges a little more closely.

1. Job Losses

Artificial intelligence excels at repetitive actions, transcription, precision tasks, and complex problem management, which means job losses. According to IndustryWeek, up to 20 million manufacturing jobs worldwide will be lost to robots by 2030. In the U.S., displacement by automation will affect 512 counties, home to 20.3 million people, where more than 25 percent of workers could be looking for new jobs.

Low-skilled workers will be the most affected by the expansion of AI: an estimated 40 million to 160 million women worldwide may need to change occupations by 2030. Nor will the development of AI systems affect only low-skilled jobs; AI programs already exist that can write software code.

Nonetheless, according to the World Economic Forum, AI-driven automation could also create 97 million new jobs by 2025. The major issues with this significant restructuring of the job market are twofold:

  1. automation will affect a more economically vulnerable and less-educated workforce, and
  2. it is unlikely that upskilling and reskilling programs will allow workers displaced by AI to move directly into high-paying technical jobs such as programmer, data scientist, data architect, or machine learning engineer.

AI job displacement and the economic inequality it will provoke represent a pressing ethical challenge that needs to be addressed.

2. Redistribution of Wealth

Another important ethical dilemma related to the economic impact of artificial intelligence is the redistribution of the wealth created by automation and the replacement of low-skilled workers with machines.

The argument here is fairly straightforward: assuming that the cost of the workforce (hourly wages) is the primary expense for most organizations, cutting human capital will significantly reduce expenses and therefore increase profits.

This accumulation of money by AI-driven companies will exacerbate wealth inequality in our society, and the profit surplus won't be redistributed into communities the way it would be if human workers were spending their wages on housing, products, and services. This situation could also jeopardize the economic vitality of disadvantaged communities and of industries that serve a low-skilled workforce.

3. Human-Machine Interaction

Another growing ethical challenge is the blurring of the line between human and machine intelligence. As smart assistants are deployed with voices that sound more and more human-like, the commercialization of emotional AI through “artificial friends” is growing in popularity.

Consider Replika, which offers a virtual friendship service through an AI-powered bot. Its website claims Replika is “[t]he AI companion who cares. Always here to listen and talk. Always on your side.” One can see the usefulness for older adults, who often live isolated and lonely lives.

However, the broad commercialization of such services raises the issue of biased interaction with a fake friend who agrees with you all the time, altering individual perceptions of what real human interaction is about: agreement and disagreement, competition and collaboration, likes and dislikes.

These two-sided dynamics are essential to balancing our relationships with one another. Similar applications simulate romantic relationships (iGirl and Anima). In a less virtual vein, Sophia is a social humanoid robot powered by AI machine learning models and developed by Hong Kong-based Hanson Robotics.

Activated in 2016, Sophia became the first robot to receive full citizenship from a country (Saudi Arabia). Offering citizenship rights reserved for human beings to a machine alters the definition of humanity and raises questions about the roles and responsibilities of citizenship.

4. Biased AI Decisions

In recent years, news media have reported on several AI biases that affected people directly. For example, in 2018 it was reported that Amazon had developed an AI tool to assist with its hiring process, improve the gender diversity of its workforce, and make recruiting more objective.

However, instead of improving the gender diversity of the recruiting pool, the AI algorithm learned to prefer men’s resumes over women’s, mostly by negatively weighting any mention of the word “women,” as in “women’s clubs” or “women’s association,” and positively weighting words more common in men’s resumes, such as “captured” and “executed.”

In other words, through the learning process the AI system construed that the male and female resumes in the training data contained two different forms of language, and the algorithm learned to discriminate against women, as the sketch below illustrates. Once the issue was detected, Amazon abandoned the project.
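The mechanism can be illustrated with a minimal sketch (not Amazon’s actual system): a toy text classifier, trained on invented, historically biased hiring outcomes, ends up assigning a negative weight to a gendered token. The resumes, labels, and use of scikit-learn here are assumptions for illustration only.

```python
# Minimal sketch of a classifier absorbing historical hiring bias (toy data, not Amazon's system).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented "historical" resumes and outcomes: resumes mentioning "women's" were mostly rejected.
resumes = [
    "captain of chess club, executed trading strategy",
    "captured market share, led engineering team",
    "women's chess club captain, led engineering team",
    "women's coding association, executed research project",
]
hired = [1, 1, 0, 0]  # biased past decisions, not true ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model has turned
# the historical bias into a predictive feature.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

The point of the sketch is that no one explicitly programmed a rule to penalize women; the correlation was already present in the data, and the learning process simply picked it up.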

A more high-profile example centered on the face recognition feature of Apple’s iPhone X failing to tell some Asian users apart. Specifically, the algorithm had difficulty distinguishing between two people of Asian descent and would unlock the phone for the wrong person.

At the time, Apple claimed that “[t]he probability that a random person in the population could look at your iPhone X and unlock it using Face ID is approximately 1 in 1,000,000 (versus 1 in 50,000 for Touch ID).” The issue was primarily caused by an insufficient number of images of Asian faces in the data set used to train the face recognition algorithm.

These examples illustrate the two main categories of AI algorithmic bias.

  1. First, the cognitive biases that we attribute to human beings can also show up in AI technology. Psychologists have identified more than 100 different cognitive biases, many of which can be replicated by artificial intelligence through its design and programming.

    AI systems are developed by humans, who can unknowingly introduce biases into the algorithm’s code and train the machine on biased data sets. A machine can also learn a discriminating feature already present in the data without the programmers noticing it, as illustrated by the Amazon hiring algorithm.
  2. The second category emanates from skewed analysis caused by a lack of representation in the data set used to train AI algorithms. An AI doesn’t know what it doesn’t know, and incomplete data can blind AI applications to variations that exist within a specific population, as seen in the iPhone X facial recognition flaw and in the sketch below.
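Representation bias can also be shown with a small sketch. The groups, feature distributions, and sample sizes below are entirely synthetic assumptions (using NumPy and scikit-learn), not Apple’s system: a model trained on data dominated by one group tends to be noticeably less accurate on the underrepresented group.

```python
# Minimal sketch of representation bias with synthetic data (hypothetical groups).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples for a group whose features cluster around `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # same labeling rule, relative to the group's center
    return X, y

# Group A is well represented in training; group B is barely present.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Accuracy is typically much lower for the underrepresented group B.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
```

The model isn’t malicious; it simply never saw enough of group B to learn where that group’s decision boundary sits, which is the same failure mode as training a face recognition model on too few Asian faces.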

5. False Dilemmas

Finally, ethical debates have emerged regarding the moral responsibility of an AI in situations where human lives are at stake. The most frequently cited example asks what the AI of an autonomous vehicle should do if an accident with human casualties is unavoidable: crash and kill the passengers, or hit and kill the pedestrians crossing the street?

This is a false dilemma for two reasons. First, this constraining scenario puts an AI into a dilemma that most human beings will never encounter. Second, human beings make all sorts of split-second decisions in stressful, tense situations that rarely follow logical or ethical reasoning.

It’s important to understand that AI machines are the imperfect product of the interaction between (1) artificial neural networks that we assume think the way humans do and (2) a perceived reality whose interpretation is largely subjective and composed of many perspectives.

It is therefore questionable whether we should be discussing ethical AI or the ethical use of AI solutions. After all, when Amazon “fired” the gender-biased AI program, it did not fire the HR department that provided the data set used to train the algorithm, which was the source of the discrimination to begin with.
