This Is Why We Can’t Have Nice Things…or Can We?

What human behavior can show us about using human-centered design in the service of better applied data.

Raise your hand if you’ve recently heard or said the following: We can automate that. We will just get an algorithm to do that. Machine learning will solve it.

As a designer and data expert, I love this acceptance of and enthusiasm for algorithms, because all of those statements are probably true. The joy of my job is to remind (or educate) people that algorithms are designed. They don’t just “autoMAGICALLY” materialize, meaning we still need some humans (that’s us) to design them.

Designing data-driven products without humans at the center leads to a pernicious cycle of exploiting tech and losing the trust of consumers. This is bad for people and it’s bad for business. Data privacy, the opacity of proprietary black-box systems, and lack of regulation are all issues exacerbated by the problematic data product and service design we’ve seen in recent headlines. And when businesses or platforms misuse customer data, it creates an environment of mistrust, which is harmful to all our progress. This lack of trust can make clients unwilling to share data externally, choosing instead to keep work in-house and contributing to an echo chamber of biased datasets.

When it comes to building intelligent systems, we must remember: Be human. Be vulnerable.

Here’s what you need to know about human behavior in order to design ethical, intelligent systems:

  1. Humans are terrible at predicting the future, but we are excellent at building it.
  2. Without fail, we always go too far.
  3. But once we do, we find clarity, try to course-correct and move forward.

The Paradox of Trust

At the crux of the data issue is the “paradox of trust,” which essentially posits: you can’t trust something until you understand it or know it, but you can’t get to know something without first trusting it.

We are currently moving away from the physical machines people know and trust (computers or devices we can see and turn on/off), to intelligent, digital systems that are harder to “see” and understand, and therefore harder to trust. As the general public has begun dipping their toes into the data-driven products and services offered today, there has been a slew of bad actors whose breaches of customers’ trust could set us back years. In order to root out the bad actors and put the power back in customers’ hands, we must prioritize designing with humans in mind.

The golden utterance of voice-driven AI has presented a beautiful and juicy challenge to designers, and we are all navigating these uncharted waters together. Should we design voice syntax to be so perfect that I’m unaware it’s not a human? If an algorithm has so perfectly profiled my preferences that experiences have effectively been designed only for me, does it remove the possibility of choice? It’s the wild west, for now, as we all work to build this new language of interface and expectation.

Here are a few tools we have as we design intelligent, trustworthy “machines”:

  • Reveal what’s human vs. what’s machine: If humans don’t know they’re talking to a machine, how can they trust it?
  • Design all intelligence to fail: Intelligent systems cannot be impenetrable. They need to be designed with fail-safes (like the on/off buttons of yesterday’s machines) or small intentional errors. (See the sketch after this list.)
  • Design intelligent processes, not just personas: Move beyond “artificial intelligence” as a single destination and consider that AI can be augmenting intelligence or even ambient intelligence.
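
To make the first two of these principles concrete, here is a minimal sketch in Python of an assistant that discloses it is a machine and fails gracefully when its confidence is low. All of the names here (IntentResult, handle_utterance, CONFIDENCE_FLOOR) are hypothetical illustrations, not any real assistant’s API.

    from dataclasses import dataclass

    CONFIDENCE_FLOOR = 0.75  # below this threshold, the assistant admits uncertainty

    @dataclass
    class IntentResult:
        intent: str
        confidence: float  # 0.0-1.0 score from some upstream classifier

    def handle_utterance(result: IntentResult) -> str:
        # Principle 1: reveal what's human vs. what's machine.
        # The agent identifies itself rather than passing as human.
        preamble = "You're talking to an automated assistant. "

        # Principle 2: design all intelligence to fail.
        # Instead of bluffing on a low-confidence guess, the system
        # exposes its fail-safe: it declines and routes to a human.
        if result.confidence < CONFIDENCE_FLOOR:
            return preamble + "I'm not sure I understood. Let me connect you with a person."
        return preamble + f"Sure, I can help with '{result.intent}'."

    # A shaky prediction triggers the human hand-off.
    print(handle_utterance(IntentResult(intent="book_flight", confidence=0.42)))

The point is not the particular threshold value but the posture: the system is designed to reveal its own limits instead of hiding them.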

It’s not magic — it’s hard work. But when done right, the results can be magical.

Beyond designing with humans in mind, we must also confront the larger issue around “artificial intelligence”: the algorithms are in fact human-designed and thus flawed, which means we must design with these flaws in mind.

Bias Begets Bias
Data on its own is not a great storyteller. While data points may seem objective, the truth is they often lack context, are full of organizational bias, or are otherwise limited in scope. Biased data will only lead to biased outcomes. Ultimately, algorithms are no substitute for a conversation, no matter how much we want them to be. While there is a temptation to offload knowledge to mathematical logic, this is not useful long-term. Bias begets more bias, and only further contributes to the cycle of violating consumer trust.
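
One small, concrete habit that follows from this: audit a dataset’s representation before training anything on it. Below is a minimal sketch in Python, assuming a toy hiring dataset and a hypothetical 20% threshold; representation_report is an illustrative helper, not a standard tool.

    # Python 3.9+ (built-in generic type hints)
    from collections import Counter

    def representation_report(records: list[dict], group_key: str) -> dict[str, float]:
        """Share of each group in the dataset; a stand-in for a fuller bias audit."""
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    # Toy dataset: hiring records heavily skewed toward one office.
    records = [{"office": "HQ"}] * 90 + [{"office": "regional"}] * 10
    shares = representation_report(records, "office")
    print(shares)  # {'HQ': 0.9, 'regional': 0.1}

    # Any model trained on this data mostly learns what "HQ" looks like;
    # the smaller group is underrepresented before a single parameter is fit.
    print("Underrepresented:", [g for g, s in shares.items() if s < 0.2])

A check like this doesn’t remove bias, but it surfaces the skew early enough to have the conversation the data can’t have on its own.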

Data is a Matter of Time
Time is an important component of designing data products and services. It takes time to gain access to data, capture new data, cleanse it and make sense of it. When using data to design intelligent systems, it takes time to train models on that data so they perform as intended. The problem, however, is that time is a luxury in business environments. Rigid budget cycles, staffing changes and the general stamina needed to see a project through all stand in the way of allowing data to reveal its value over time. For leaders to go beyond using data for optimization and start using data for transformation, they need to take a long view of success. It may take time to begin seeing a return on investment from applied data science, but ultimately it’s worth it.

Practitioners Make Perfect
Progress isn’t always the result of having a powerful visionary at the helm. At frog, we believe it’s the practitioners who are able to bring their real, day-to-day experiences to the design research process and unlock new potential applications for technology. No one, no matter how seasoned or creative, can design everything at once. It takes use cases from people on the ground to understand and act on a technology’s full potential. In design, being “in the trenches” is a celebrated research technique. Yet it takes a lot of work to build something new, so it’s worth getting input from the people who will actually be using what you’re building. Otherwise, you may be digging really great trenches in entirely the wrong spot.

As practitioners of human-centered design, we are working with clients every day to build better intelligent, data-based systems, products and services. This means designing fail-safes into systems that don’t necessarily have an “on/off” switch. It also means recognizing the limits of the form in its current state. If the challenge of the 20th century was to create new interaction design to bridge the gap between human and machine, today’s challenge is to understand how humans interact with the new intelligent environment. And we know that putting humans first is the only place to start.

Author
Karin Giefer
Design Strategist, frog

Karin Giefer is a seasoned design strategist, specializing in the human, business and infrastructure systems of both our physical and digital world. Part designer, part data-driven business strategist, part implementation specialist, Giefer helps organizations determine what to make and do, why to do it and how to innovate contextually, both immediately and over the long term.

Illustrations by Doug Chayka