“Artificial intelligence is the field devoted to building artificial animals and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons).” So says the Stanford Encyclopedia of Philosophy, while the US Defense Advanced Research Projects Agency (DARPA) states that AI “is a programmed ability to process information.”
But artificial intelligence (AI) is extremely difficult to define, as the US Government Accountability Office attests.
There is also an intense debate over how “intelligent” AI really is, as this article will explore. That debate is evolving as rapidly as the technology itself, which has recently made great leaps and bounds.
How We Got Here
In 1956, a DARPA-sponsored conference took place at Dartmouth College in New Hampshire. Among the participants were John McCarthy, Claude Shannon, Marvin Minsky, Arthur Samuel, Trenchard More, Ray Solomonoff, Oliver Selfridge, Allen Newell and Herbert Simon.
This conference is thought to be the first time the term “artificial intelligence” was used. The event’s purpose was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.”
The idea of intelligent machines was already in the air. In a famous 1950 article, Alan Turing asked: “Can a machine think?”
But this question had already perplexed philosophers and thinkers for centuries, most notably Descartes in his 1637 Discours de la Méthode. They asked how it would be possible, at some point, to differentiate machines from humans – a question still at the heart of the AI debate in 2023.
Chatbots to ChatGPT
AI technology has been through a number of evolutionary steps, as shown by this very good DARPA video.
These advances were mostly possible because of the significant increase in computing and storage power over the last 15 years.
Fast forward to November 2022 and the public launch of ChatGPT by US-based company OpenAI. At this point the world realised that, in some form, AI is capable of sustaining a seemingly reasonable discussion about almost any topic (it can also write code and solve maths problems, in various languages).
While chatbots have been available in various forms for years (Apple launched Siri in 2011), the success of ChatGPT shows that AI has the potential to disrupt many industries – but also to be a useful complement to human activities rather than a replacement (even though replacement will happen in some cases).
Progress and Hope
AI has already fuelled many waves of exuberance and disappointment. What makes the current hype different? Most of the previous excitement rested on the hope that AI would one day “think” as humans do.
We know this is not going to happen anytime soon. ChatGPT may give the impression of reasoning, but it is essentially using statistics to predict one word after another when answering a question. It depends massively on the existing body of internet information and the sheer volume of data it was trained on.
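As a toy illustration of that statistical, word-by-word process – emphatically not ChatGPT’s actual architecture, which relies on a large neural network – a minimal “bigram” model simply picks each next word according to how often words followed one another in a training text:

```python
import random
from collections import defaultdict

# A tiny "training corpus" of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow each word in the corpus.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Each word is chosen purely from observed frequencies, with no understanding of meaning – modern language models do this at vastly greater scale and sophistication, but the word-at-a-time principle is the same.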
Yet, the progress in AI and Machine Learning should not be overlooked. They are already embedded in many products like smartphones, bringing new capabilities such as speech and image recognition.
Those technologies help with fraud/spam detection, content moderation, mapping, weather forecasting, supply chain management and many other tasks.
According to a recent McKinsey report, adoption of AI more than doubled between 2017 and 2022, with more companies investing in AI to improve their operations and stay competitive.
Over a 10-year horizon, a Goldman Sachs report estimates that AI could boost global GDP by 7%. In the same assessment, the bank states that up to a quarter of US jobs could be replaced by AI and automation, while the vast majority of workers would use AI as a complement to their existing activities.
The Obstacles
Progress in AI has been staggering, as ChatGPT shows, but the real challenges will come when we try to distinguish content and interactions produced by humans from those produced by machines.
“By exploiting newer augmented reality and virtual reality technologies, the ability to synthetically create complex environments and allow for human interaction will blur the lines between realities and will increasingly open a huge new set of ethical and legal challenges regarding their proper use,” according to Jeffrey L. Turner and Matthew Kirk from law firm Squire Patton Boggs.
The massive use of AI raises questions of intellectual property: who owns the content the technology uses to train itself and become more efficient? These issues have so far not been properly dealt with. There are also questions of “computer ethics”, which concern how machine-human interactions should be managed.
The idea of endowing machines with a moral code – which code to choose is a separate problem – is one of the questions AI will have to deal with; formalising such rules belongs to the field of deontic logic.
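To give a flavour of what deontic logic looks like, its standard operators relate obligation (O), permission (P) and prohibition (F) – a sketch of the usual definitions, not a prescription for machine morality:

```latex
% O p : "p is obligatory"; P p : "p is permitted"; F p : "p is forbidden"
\begin{align*}
  P\,p &\equiv \lnot O\,\lnot p && \text{permitted = not obligatory to refrain} \\
  F\,p &\equiv O\,\lnot p       && \text{forbidden = obligatory to refrain} \\
  O\,p &\rightarrow P\,p        && \text{axiom D: whatever is obligatory is permitted}
\end{align*}
```

Encoding a moral code for a machine would mean choosing which obligations and permissions to assert within such a system – and that choice is precisely the hard part.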
Where Next?
In a 2000 article, Bill Joy wrote: “The 21st-century technologies – genetics, nanotechnology, and robotics (GNR) – are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.”
He posits that machines (using AI) would become so powerful that we would depend on them and have to accept their decisions.
Not everybody agrees with this gloomy view.
“We now know that AI will succeed in producing artificial animals,” the authors of the AI entry in the Stanford Encyclopedia of Philosophy say.
AI’s progress also raises wider economic questions, such as the need for a universal basic income (an idea proposed by many thinkers, including AI expert Martin Ford). After all, a number of jobs and activities usually done by humans will be done by AI-driven robots. Progress in this field is already impressive (see for instance this video from Boston Dynamics).
Despite the huge progress in recent years, the idea of “Strong AI” – the ability to create machines with mental capabilities and consciousness – still seems far away.