Photo by Serkan Turk on Unsplash

Designing down the rabbit hole

Why design shouldn’t be detached from ethics, and how designers can advocate for a more genuinely human-centered design

Jana Voykova
Published in The Startup
Dec 16, 2019


Ethics is a tricky word. As soon as we mention it, the conversation suddenly becomes awkward. Probably because nowadays, ethics amounts to reading incomprehensible texts on moral and political philosophy. Who needs ethics anyway? Every time I say that we should be careful with disruptive technologies like autonomous transportation, people look at me as if I’ve told them we should consider mass sterilisation. Ethics is a controversial topic because we seem to attribute to it qualities it traditionally does not possess (I’ll expand on this later). It’s always frustrating to hear people say that X is the future, as if the future is some fixed, predetermined state. It’s not.

In the late 1960s, Victor Papanek wrote a book about the importance of the design profession, in which he identified the enormous social responsibility designers bear. Papanek was himself a designer. His famous quote from Design for the Real World:

“There are professions more harmful than (industrial) design but only a few of them”,

seems all the more relevant in today’s technology-dominated societies. I hope we can all agree on a few things:

  • we cannot separate design from technology because technology has become an intrinsic part of our lives, permeating the very purpose of our existence, and so designers inevitably end up designing for technology;
  • by no means do I blame technology for all of humanity’s problems or aim to cast it as the cause of all evil in the modern world. After all, rejecting technology means rejecting progress, and that is, evidently, hopefully, naturally, not what we’d want from our future;
  • most importantly, technology itself isn’t doing anything bad, simply because we are the ones who decide how to shape it, develop it, and eventually use it.

Who are those “we” that I am referring to? If you consider yourself a designer by either profession or vocation, you have probably heard that one way of looking at design is believing that everything is design and everybody is, in a way, a designer. Before you get overwhelmed with rage that I am downgrading the design profession by saying that everybody is a designer, let me explain. Design (when thought of as a verb) and create are synonyms. If somebody creates something, from a physical object to a process to an entire organisation, we can say that they design it. Civil engineers design cities, teachers design school curricula, hotels design experiences. Undoubtedly, the design process of a designer is very different from the design process of an engineer. Designers bring artistic perspectives to the table and are much more skilful at bridging the divide between functionality and aesthetics, but ultimately, both designers and engineers create artefacts that are going to be used by people. This is one of the reasons why human-centered design has recently become so essential to the work of not only design agencies, but also technology companies and even public-sector institutions. Or has it really?

What is human-centered design?

Human-centered design, as Donald Norman defined it in the 1980s, is based on the notion that design should not be disconnected from human cognition. It should be easy to understand, function in a predictable way and cater to the needs of those for whom it is intended. Sounds logical, doesn’t it?

Human-centered design, or user-centered design as it’s also known, is a great concept indeed. Designing with the user/human in mind is an amazing, necessary and self-evident approach to how designers should work. Unfortunately, we have overused, overstated and misrepresented human-centered design to the point that it has almost entirely lost its meaning. Individuals and corporations feel compelled to define human-centered design as part of their work style, regardless of whether they actually apply it. Also, we are all very empathetic. I personally can no longer tell when a company or a designer genuinely works in a human-centered fashion, because everybody says they do. It is even more difficult to believe that humans are the central reference point for designers and technologists when the social space is constantly being flooded with addictive devices and dodgy software, which undermine the core elements of what it means to be human. Where did the humans go?

As Douglas Rushkoff maintains in his book Team Human, people have become the ground and technology has become the figure. Technology is at the center while humans are in the background, backing it up. We are so focused on developing technology that we have, as Rushkoff eloquently puts it, become “oblivious to major changes in the background”. He makes a great analogy with how education works these days. Whereas before, competitive advantage was a benefit of education, now it is its primary purpose. Before, industry adjusted itself to education; now it is the opposite: companies dictate what skills are valued and what qualities people must possess if they want to have lucrative careers. Sadly, this is not an honest mistake. In Team Human, Rushkoff talks about something called behavioural design theory. Essentially, this is a design philosophy taught at some universities, whose focus is persuasive manipulation. Today we (are taught to) design so that people change their attitudes to match their behaviour, not the other way around. And instead of challenging this nonsensical way of treating people, we obediently follow, reaffirm and applaud. Unsurprisingly, we are on a fast track to designing our own obsolescence.

In 1974, the American philosopher Robert Nozick came up with a thought experiment called “the experience machine”. He argued that, faced with the dilemma of whether to have real-life experiences or be hooked up to a machine that simulates reality, most people would choose the former. Looking at today, when being online is a permanent state and we’ve become an extension of our digital devices at all times, maybe Nozick was wrong after all.

Contrary to the general perception, economic growth is not going to save us from global poverty and injustice. It is going to intensify them.

Rushkoff holds that corporations and capital growth can scale infinitely, while human capacity has limits. For better or worse (rather the latter), biotechnology and AI can overcome those limits. Being human has become shameful to an extent. Our imperfect human nature is tirelessly trying to mimic and catch up with the perfect nature of machines. We have to be constantly faster, stronger, more productive. Don’t get me wrong, I’m not saying that being efficient is a liability; quite the opposite. I just think we are using the wrong criteria to measure productivity. In the end, being imperfect is what makes us human. This can hardly be classified as a liability, no matter how advanced the technologies companies create in order to satisfy their proclivities for dominance and financial growth.

How can we untangle the technological mess we’ve designed?

A good start would be to make sure that everybody, from designers to subject-matter experts to leaders, realises that there is a HUGE difference between what we can design and what we should design. The technological capacity to create something does not automatically make it morally permissible. Let’s take a step back and think about whether we should create it, how this new thing is going to impact people, what the consequences would be, and how we can regulate it. Moral behaviour is not something that concerns only a part of the population. Ethics is not exclusive to the philosophy departments in academia; it should also be taught to engineers, designers, economists, and practically everyone else. Instead of institutionalising the numeric superiority of machines over humans, we should remember the primary reason for advancing technology: to serve us humans and make us better, not urge us to desperately compete with it. Very few things in life should be done at any cost.

How does ethics fit in the context of technology?

Doing the right thing is not easy. Is it about weighing preferences and consequences (utilitarianism), giving each what they rightfully deserve (libertarianism), or falling back on our duties and obligations (deontology)? The complexity of this question is the sole reason why philosophers still haven’t found the ultimate answer to it. However, it is a collective concern; nobody gets to opt out of the conversation just because someone decided that business and ethics are two conflicting camps. There’s more to our existence than making a transaction out of our humanity and handing over our cognitive abilities and privacy to machines which are, by definition, unable to differentiate between nefarious and virtuous intent.

The Ethics Centre, a non-profit organisation in Australia, argue in their paper Ethical by Design: Principles for Good Technology that ethicists alone cannot resolve the dilemma between humanity and technology. Shared responsibility and collaboration are needed across industries and domains, both academic and entrepreneurial. For one reason or another, we mistakenly take as a given what The Ethics Centre identify as four techno-logical myths*:

  1. Technology is value-neutral
  2. We should blame the artefacts if things go wrong
  3. We can’t halt the tide of technology
  4. We can hold off on the ethical questions

*Techno-logic — the logic of control: a way of thinking about technology as something that we can control, measure, store and use (Ethical by Design: Principles for Good Technology, The Ethics Centre)

SoftBank Robotics “designs and manufactures interactive and friendly humanoid robots”. Pepper looks cute indeed, but I find it disturbing that we need to make a distinction between friendly and (unfriendly?) robots, regardless of the fact that machines will allegedly take up the “boring” jobs and leave the more exciting occupations to humans. Photo by Alex Knight on Unsplash.

Technology, design and ethics are not mutually exclusive. Caring about ethics does not mean rejecting technological evolution or giving up design expertise. Our pursuit of technological excellence is completely compatible with reinforcing our moral codes because technology is the means, not the end. The Ethics Centre outline an ethical framework in their paper, which describes the principles of good technology. Nonetheless, this framework is not a quick solution to all our technological bewilderment because, as stated earlier, these are questions humanity has been trying to figure out since ancient times. What this framework can do is give designers and technologists ethical restraint and help us create genuinely human-centered technology:

  1. Ought before can — before rushing off to create something, we should ask ourselves why we are creating it;
  2. Non-instrumentalism — technology should not bring people into submission or reduce them to mere tools for the machine to work;
  3. Self-determination — technology should provide us with freedom, not restrain it;
  4. Responsibility — we should think about the foreseeable unwanted uses of a technology;
  5. Net benefit — technology should bring positive contributions to the world; we should design technology in a way that enables it to self-correct and reduce the negative impacts it might have;
  6. Fairness — if there’s any difference in how our designs treat different groups of people, we should be able to defensibly explain why that is the case;
  7. Accessibility — we should design with the most vulnerable person in mind; treating people as edge cases marginalises and excludes them;
  8. Purpose — good design addresses a genuine problem with honesty and clarity and serves an ethical purpose.

Sounds great! But ethics is an expensive endeavour!

I’ve witnessed designers say that ethics is a luxury they can’t afford. They wash their hands of it by claiming that ethical behaviour is for wealthy idealists who don’t have rent to pay, a boss who commands them to do X, or a certain standard of living to maintain.

As Mike Monteiro holds in his book Ruined by Design, the third group doesn’t deserve even a fraction of our attention, so let’s hope those people aren’t given enough freedom to exercise their profession the way they perceive it. The second group should realise that authority, and the intimidation that often comes with it, are lousy advisors; the next time someone tells them to do something unethical or else they’ll lose their job, they should either put this person in their place or quit. Preferably both. As for the first group, we all have constraints in our lives, be they financial, emotional, or whatever. The way I see it, though, ethics is not part of the trade-off scheme. We cannot put a price tag on our conscience. It’s a tough call sometimes, sure, but in the end, it all comes down to how we arrange our priorities. Besides, “everybody does it” is the slippery slope that got us here.

In case I haven’t convinced you of the seriousness of your profession yet, don’t take my word for it; take Mike Monteiro’s. His Designer’s Code of Ethics says it all concisely and without bells or whistles.

Working at a leading tech company is prestigious

I get it. Saying that one works for an established brand and saying that one works at the local library are not the same thing. Still, I agree with Alain de Botton that our work status should not determine whether other people find us interesting to talk to or hang around with. Nor does it define the kind of people we are.

A side note, but quite relevant to the topic: de Botton’s views on meritocracy are particularly interesting and worth reflecting on, especially if you are working in the birthplace of meritocracy, Silicon Valley. According to de Botton, meritocracy is a lovely ideology in purely theoretical terms, but it also teaches us that our position in life and our successes are solely dependent on our merits. Meritocracy ignores factors like luck or circumstance, which is, in my opinion, a dangerous thing to teach people. It virtually tells us that if we fail, that’s because of our character and qualities, or the lack thereof, and there’s no point in trying again because we’ll never succeed.

I am obviously not predicting a dystopian future where machines enslave humans and rob us of our humanity, simply because we are, more or less, already living its initial stages. The good news is that this process is still reversible. We can use, and have used, technology for good in so many ways. But we need to know when to pull the plug. There’s no greatness in being responsible for the design of artefacts that dehumanise, embarrass, harm, and monetise other people, no matter how big the paycheck is or how prestigious the title sounds. Those may be side effects of the designs and technologies we create, but that doesn’t make them any less degrading. Perhaps we should become better at humility, as a species. And perhaps we should all follow Mike Monteiro’s advice to be the person who asks “why” and says “no”, even if, and especially when, nobody else does.

SOURCES:

Design for the Real World, Victor Papanek

Team Human, Douglas Rushkoff

Ruined by Design, Mike Monteiro

Ethical by Design: Principles for Good Technology, The Ethics Centre

Alain de Botton: A kinder, gentler philosophy of success, TEDGlobal 2009
