The incredible achievements of AI and the need for ethical boundaries
Artificial Intelligence has already changed - and is continuing to change - many aspects of our lives. We now have conversational assistants that enable us to do all sorts of things with the power of our voice. We unlock our phones using facial recognition, and AI’s deep learning capabilities are being used in health care to identify medical conditions more efficiently. And these are just a few examples.
But as is the case with just about everything in life, it is possible to take a good thing too far. Just because we can do something, does that mean we should? As technology evolves, so do its potential applications - which can be beneficial or not. And while there’s no clear-cut way to get there, we need to define and uphold ethical boundaries as the technology and its uses keep growing.
Like what? A few examples come to mind:
AI beyond the final frontier
In 2013, Joshua Barbeau’s wife, Jessica Courtney Pereira, passed away. After eight years of trying to get his grief-stricken life back together, Barbeau discovered Project December: a sophisticated web-based platform where one can interact with - and even create - chatbots. But these weren’t just your run-of-the-mill chatbots that can provide weather forecasts. Here’s how the San Francisco Chronicle puts it:
“Designed by a Bay Area programmer, Project December was powered by one of the world’s most capable artificial intelligence systems, a piece of software known as GPT-3. It knows how to manipulate human language, generating fluent English text in response to a prompt. While digital assistants like Apple’s Siri and Amazon’s Alexa also appear to grasp and reproduce English on some level, GPT-3 is far more advanced, able to mimic pretty much any writing style at the flick of a switch.”
Then, one night in September, by entering some information about her and feeding the AI samples of her text messages, Barbeau created a chatbot version of his late wife - and went on to hold strikingly convincing text conversations with it for months.
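Mechanically, persona chatbots of this kind work by conditioning a language model on a description of the person plus samples of their writing, then appending the live conversation for the model to continue. Here is a minimal sketch of that prompt-assembly step - the format, field names, and sample traits below are illustrative assumptions, not Project December’s actual design:

```python
def build_persona_prompt(name, description, sample_messages, conversation):
    """Assemble a text prompt that conditions a language model on a persona.

    NOTE: illustrative only -- Project December's real prompt format is not
    public; this just shows the general idea of persona conditioning.
    """
    lines = [
        f"The following is a conversation with {name}.",
        f"{name} is {description}.",
        "",
        "Examples of how they write:",
    ]
    lines += [f"{name}: {msg}" for msg in sample_messages]
    lines.append("")
    # Replay the conversation so far, ending with the persona's turn
    # so the model completes the next line in their voice.
    for speaker, text in conversation:
        lines.append(f"{speaker}: {text}")
    lines.append(f"{name}:")
    return "\n".join(lines)

# Hypothetical example data, purely for illustration:
prompt = build_persona_prompt(
    name="Jessica",
    description="warm, playful, and curious",
    sample_messages=["miss you already!", "look at the moon tonight"],
    conversation=[("Joshua", "Hey, how are you?")],
)
```

The resulting string would then be sent to a text-completion model such as GPT-3, which continues the text after the final `Jessica:` line - no understanding of the person required, only pattern imitation.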
The point here is not to judge Barbeau or any other person who would be tempted to do the same thing. Any one of us could go down that path, given the right circumstances. That’s not what this is about.
This is about the incredible sophistication of GPT-3, which can answer complex questions and sense and adapt to the emotions of those with whom it interacts. It can make spontaneous statements and provide different answers to the same questions based on many factors. The point is that the experience of interacting with a GPT-3-based chatbot can be indistinguishable from chatting with an actual human being. Where does that leave us as a society?
In Barbeau’s case, he knew “what” he was interacting with. But with such a sophisticated ability to emulate real people, are we opening ourselves up to all sorts of manipulation? Are we creating a new golden age of fraud and identity theft?
There’s another piece to this puzzle…
Did they actually say that?
On July 16, 2021, Roadrunner, a documentary about the life of the late Anthony Bourdain, was released. The documentary uses past recordings of Bourdain as narration in parts of the film. But that’s not all. Filmmaker Morgan Neville used AI technology to digitally recreate - in other words, to deepfake - Bourdain’s voice and synthesize three quotes the late chef had never spoken (or at least never recorded). And the results are more than convincing. You can watch the official trailer here. Can you distinguish the synthesized speech from the real thing? Unlikely.
Of course, all this sophisticated tech is awe-inspiring and has enormous benefits: better, more natural, and more efficient AI interactions.
In turn, this benefits AI users and almost every industry out there, from the voice tech industry itself to education, to health care, entertainment, marketing, and the list goes on.
But we still need to put this in a cost/benefit perspective. Perhaps there are certain parts of this technological evolution that we’d rather not experience. Chatting with fake people is concerning enough and already opens up a can of worms. But now we could be having telephone conversations with emulated people - possibly our own friends and relatives. Again, the potential for manipulation, fraud, and theft is staggering. As the technology evolves and provides more convenience and economic growth, its potential for collateral damage also grows.
We’ve been at similar crossroads before in our history, and on the whole, we tend to fix the biggest bumps on the road. And I think this is likely to happen here too. While I don’t know precisely the best way forward, keeping this conversation going (no pun intended) will be crucial. So I look forward to engaging with all of you on these complex questions as we all strive to make voice tech an integral part of a better world for all.