It happens often—maybe at a meeting or a dinner party. When Linda Berkeley, founder of the management consulting firm LEB Enterprises, tells people she teaches a graduate course called Ethical Design, she gets some puzzled looks.
“It’s not unusual for people to ask, ‘What does ethics have to do with design?’” says Berkeley, an adjunct professor in Georgetown University’s graduate program in Design Management & Communications.
Isn’t design about form, function, and aesthetics? Well, yes, says Berkeley, a former executive at National Geographic, Universal Studios, and the Walt Disney Company. But whether you’re building cars or websites, creating computer programs or TV commercials, designs that people use have an impact on their lives and on society as well.
“Designers have a real responsibility to educate themselves and others about the power of design to solve problems or create them,” Berkeley says. “You can only do that if you accept individual and social responsibility for what you design and bring into the world.”
Exploring Ethical Questions
What does it mean to accept responsibility? For the students in Berkeley’s class—which includes professionals working in computer science, retail, transportation, graphic design, UX design, biomedical design, nonprofits, and library science—there is no simple answer. To provide a context for exploring ethical design questions, the class studies the theories of Aristotle, John Stuart Mill, and other moral philosophers. Then they use these frameworks to analyze some design issues of today.
“First do no harm”—a reference to the theme, if not the exact words, of the Hippocratic Oath—would seem like a good place to start. But in practice, even a directive as basic as this can be hard to follow.
Berkeley cites the example of Juul, the wildly popular nicotine device that has amassed 70 percent of the e-cigarette market. Its creators, both former Stanford University design students, set out to make a better e-cigarette than those then on the market, one that could help smokers like themselves curb their habit and eventually quit.
However, when their company rolled out Juul in 2015, its advertising campaign featured young, attractive models and leaned heavily on social media, where middle and high school students congregate. After critics charged that this marketing targeted youth, Juul switched to ads featuring older people who had used Juul to stop smoking.
But it was too late: “Juuling” had become highly attractive to adolescents, and by September 2018, FDA Commissioner Scott Gottlieb was calling youth e-cigarette use “an epidemic.” Two months later, the tobacco giant Altria, which owns Philip Morris, paid nearly $13 billion for a 35 percent stake in Juul Labs, making each of the two founders worth more than $1 billion.
Two designers set out to do good and ended up … well, that’s open to interpretation. The company maintains its sole purpose is to improve “the lives of the world’s 1 billion adult smokers by eliminating cigarettes,” and it has pledged $30 million to support youth smoking prevention efforts.
The Impact of Poor Design
The Juul story speaks to the founders’ motivation and whether that changed over time. But in other cases that Berkeley presents, ethical transgressions tend to emanate from bad, or thoughtless, design.
She cites two widely studied cases: that of the Ford Pinto, the automaker’s first subcompact car, which experienced several deadly fires in the 1970s because of the vulnerability of its gas tank in rear-end collisions; and that of Therac-25, a radiation therapy machine developed in the early 1980s, whose design flaws killed four patients by exposing them to massive doses of radiation.
One facet of the Therac-25 story that recurs often in poor design is the lack of attention given to the user. For example, when Therac-25 began delivering potentially deadly levels of radiation, the words “Malfunction 54” would appear on its screen, without explanation. The operator was left to figure out what “Malfunction 54” actually meant, completely unaware that it could be signaling a life-and-death situation.
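The gap between a raw status code and a message written for the user can be sketched with a small, entirely hypothetical example. The lookup table and wording below are invented for illustration; Therac-25’s actual software looked nothing like this. The point is only that an error message should say what happened, how serious it is, and what to do next.

```python
# Hypothetical sketch of the UX lesson from Therac-25.
# The code number and message text are invented for illustration.

def cryptic(code):
    """What the operator actually saw: a bare code, no explanation."""
    return f"Malfunction {code}"

def user_centered(code):
    """What a user-centered design would say instead."""
    messages = {
        54: ("Dose delivery error: the beam may have exceeded the "
             "prescribed dose. STOP treatment and contact service."),
    }
    detail = messages.get(code, "Unknown error. Stop and contact service.")
    return f"Error {code}: {detail}"

print(cryptic(54))        # a bare code the operator must decode alone
print(user_centered(54))  # names the problem, the severity, and the next step
```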
Today, we talk a lot about user interface and user experience—how the customer interacts with a product or invention and responds to that interaction. But there was a time not so long ago when such ideas would have sounded foreign.
However, Berkeley tells the class about one iconoclastic figure, the 20th-century Austrian designer and educator Victor Papanek. Papanek called for greater awareness and social responsibility within the design profession and spoke of the imperative to serve the greater good and protect the environment before such ideas were fashionable. Not surprisingly, he was initially ostracized by his peers.
“There are professions more harmful than industrial design,” Papanek wrote at the start of his groundbreaking work, “Design for the Real World,” “but only a very few of them.”
The Ubiquity of Bias
Harm can occur not just by neglecting the needs of the user, but also by failing to consider all users and building products tailored to the needs of the majority—or of those who hold the majority of the power. As an example, Berkeley points to a poorly designed soap dispenser that wouldn’t dispense soap for African Americans: their skin wasn’t light enough to be detected by the device’s sensor, apparently because the dispenser had been tested only on white people.
An errant soap dispenser is one thing; the stakes are immeasurably higher when deep-seated biases creep into other designs. Take, for example, COMPAS, a court sentencing tool that used an algorithm to assess the probability that a defendant would commit more crimes. According to an investigation by ProPublica, the algorithm predicted recidivism with about the same overall accuracy for black and white defendants. But when it was wrong, it significantly overestimated the risk that black defendants would re-offend and underestimated the risk for white defendants.
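The pattern ProPublica described, equal overall accuracy masking unequal kinds of error, can be illustrated with a toy calculation. All of the numbers below are invented for illustration; they are not COMPAS data.

```python
# Toy illustration: two groups, a classifier with the same overall
# accuracy for each, but opposite skews in its *types* of error.
# All counts are invented; this is not COMPAS data.

def error_rates(tp, fp, tn, fn):
    """Return (accuracy, false_positive_rate, false_negative_rate)."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn)  # non-reoffenders wrongly flagged high risk
    fnr = fn / (fn + tp)  # reoffenders wrongly flagged low risk
    return accuracy, fpr, fnr

# Group A: errors skew toward false positives (wrongly flagged high risk)
acc_a, fpr_a, fnr_a = error_rates(tp=40, fp=30, tn=20, fn=10)
# Group B: errors skew toward false negatives (wrongly flagged low risk)
acc_b, fpr_b, fnr_b = error_rates(tp=20, fp=10, tn=40, fn=30)

print(acc_a, acc_b)  # identical accuracy: the system looks "fair"
print(fpr_a, fpr_b)  # Group A is wrongly flagged far more often
print(fnr_a, fnr_b)  # Group B is wrongly excused far more often
```

The design lesson: a single accuracy figure hides which kind of mistake a system makes, and whom each kind of mistake falls on.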
We may assume that an algorithm is neutral, says author Sara Wachter-Boettcher, who offered this example in an online workshop called “Design for Real Life.”
But “our work is never neutral …,” she says. “Machines learn from us. We choose what to teach.”