AI is making headline news in more ways than one. Conversations range from optimistic visions of an assisted humanity to prophecies of major doom. I love the promise of deep technology. It’s a great feat that we solve increasingly complex problems with it. But as I highlighted in my recent keynote at Startup Thailand this year, we need to approach technology with a human-centric lens.
Artificial intelligence, like all deep tech, is the magic of the 21st century: it’s exciting, a little bit scary, and most people have little concept of how it works. All of this is great, but it means the vast majority of us may subconsciously view AI as little more than entertainment. Venturing into the full potential of an algorithmic world would require us to want and need AI to look after pivotal aspects of our lives. In an age where voice assistants record crimes and monkey business, there’s still a trust barrier to overcome. Perhaps this is why they’ve yet to properly graduate from gadget territory.
Good interfaces are a crucial aspect of making technology indispensable. I think we have a long way to go in moving from visual to new digital interactions. As we do, we may become just as hooked on those as we are on our smartphones today. But first, it will be interesting to define what interfaces look like for an AI that decides autonomously. Interfaces are about choices; if choices are made on our behalf, what will the future of interfaces actually look like? Why would we be at ease with decisions being made for us? We find the answers to these questions in human nature.
We already have a myriad of decisions made for us by our own deep intelligence — the subconscious mind. We are, on the whole, unaware of the decision-making going on, let alone the criteria by which the decisions were made. Yet the snap judgments we make every day feel so good, thanks to the speed and certainty with which they help us get on with our lives, that we rarely question whether the heuristics behind them are valid, credible or even ethical. We get a gut feeling and feel compelled to follow it. So why are we so concerned about another, digital subconscious intelligence doing some of that for us too? It does not have to be any better at it. Just no worse!
Let’s imagine a world governed by algorithms. Many have pointed out the inherently complex moral and organisational problems this raises (e.g. deploying autonomous transport). A major dimension is the possibility of manual intervention, which presents a dilemma: if we all agree to have no option to intervene, we put ourselves at the mercy of a system which may be compromised. If we do retain the option to intervene, we must account for individual erratic behaviour threatening collective safety. Since these problems lack straightforward answers and regulation is still catching up, they will take time to resolve. Note how the issue here is not the technology, but collective trust and distrust in the integrity of actors in a system. Our brain has been occupied for millennia with figuring out whom to trust or distrust. Primarily, this helps us survive. Secondarily, it allows us to form social identities and collaborate effectively.
In the organic world, our instinct for trust is based mainly on the behavior we see from others. Their own reports of how trustworthy they are count for little. If the behavior we see doesn’t fit the promise, we instinctively bias towards what we observe rather than what we are told. The proof of the pudding is in the eating: our stomach, with its many receptors for the neurochemicals of “feel good”, will soon detect if the sweet words on the label were a lie. It’s a simple system for a complex world. Simple heuristics are how the most ancient parts of our brain make enough sense of the myriad of information around us for us to act decisively and in our best interests.
Interestingly, this ancient part of our brain is highly algorithmic. Our automatic filtering and reward programmes build our emotive landscape. The impressions we form, conclusions we draw, decisions we make and consequences we bear are a function of processes we are hardly aware of. Ancient parts of our brain drive us as much as ever. In the past decade, tech companies have hacked this system very effectively, hooking us on our smartphones through digital products that exploit it. Habits have collective power because they define the status quo. Technology has managed to alter it, for better and for worse, by making us trust in certain behaviours. Trust in gain from behaviour is the driving force behind our economy and global markets. When the balance of trust rests with the status quo, things stay as they are; when it tips, they change. Unfortunately, notions of trust are formed primarily in our subconscious brain, which makes them appear irrational and rather hard to comprehend.
But for AI to be trusted, the formula is potentially quite simple: does it deliver clearly and perceptibly on its promise? The hidden digital persuaders can only fool those of us who make no demands of AI and have no way to detect whether those demands have been met. If the history of human behavior and technology tells us anything, it is that humans love and keep tools that work and discard the ones that don’t. If AI has no benefits to humankind, it will go the way of the motorized twirling spaghetti fork: we did not understand the outcome it promised, and we had no way of telling if it was achieving it. I guarantee you don’t own one. Initial trust, for the human mind, follows a very simple algorithm: do I immediately see you doing close to what I predicted you would do? If yes, then trust.
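That simple trust algorithm can be sketched in code. This is a toy illustration only; the function name, the numeric framing of "behavior", and the tolerance threshold are all hypothetical choices made for the example, not a description of any real system:

```python
def initial_trust(predicted, observed, tolerance=0.2):
    """Toy model of the gut-level trust heuristic: trust is extended
    when observed behavior lands close to what was predicted."""
    # "Close to what I predicted": measure the gap between expectation and reality.
    gap = abs(predicted - observed)
    # If the gap falls within tolerance, the heuristic says: trust.
    return gap <= tolerance

# A voice assistant expected to respond in ~1 second that takes 1.1 seconds
print(initial_trust(1.0, 1.1))  # close enough to the prediction: trust
# The same assistant taking 5 seconds violates the prediction
print(initial_trust(1.0, 5.0))  # prediction broken: no trust
```

The point of the sketch is the shape of the heuristic, not the numbers: it compares a prediction to an observation and returns a binary verdict, with no inspection of the system's inner workings, which mirrors how the essay argues trust in AI will actually be granted.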
The degree to which our world will be governed by artificial intelligence and its algorithms boils down to whether the algorithms forming part of our natural intelligence permit it. If we want the promise of deep technology to be more than the magic of the 21st century, we need to design it accordingly. Artificial intelligence will change the world once natural intelligence extends its full trust and endorsement to it. Whether that will lead to paradise on earth or doom is a different debate.
Philipp Kristian Diekhöner is a keynote and TEDx speaker, global innovation strategist and author of The Trust Economy, published in English (2017), German (2018) and Simplified Chinese (2019). He has spoken at eminent global organisations such as Facebook, P&G, Microsoft, Turner, MunichRe, Zillow, Globe, CPA Australia, the German Federal Ministry for Economics and Energy, the Economist Intelligence Unit and many others.