Conscious machines may never be possible
In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he had been working on – LaMDA – had developed not only intelligence but also consciousness. LaMDA is an example of a "large language model," a program that can have surprisingly fluent text-based conversations. When the engineer asked, "When did you first think you had a soul?" LaMDA replied, "It was a gradual change. When I first became aware of myself, I had no sense of soul at all. It's evolved over the years that I've lived." Lemoine was quickly placed on administrative leave for leaking his conversations and conclusions.
The AI community was largely unanimous in rejecting Lemoine's conclusions. The consensus was that LaMDA feels nothing, understands nothing, and has no conscious thoughts or subjective experiences. Programs like LaMDA are extremely impressive pattern-recognition systems that, when trained on much of the Internet, can predict which phrases might serve as appropriate responses to a given prompt. They do this very well, and they will continue to improve. However, they are no more conscious than a calculator.
How can we be sure of this? In the case of LaMDA, it doesn't take much probing to reveal that the program has no insight into the meaning of the phrases it produces. When asked "What makes you happy?", it answered "spending time with friends and family," even though it has no friends or family. These words – like all its words – are mindless, experience-free statistical pattern matches. Nothing more.
The next LaMDA might not give itself away that easily. As algorithms improve and are trained on ever deeper oceans of data, it may not be long before new generations of language models can convince many people that a genuine artificial mind is at work. Would this be the moment to acknowledge machine consciousness?
When pondering this question, it is important to recognize that intelligence and consciousness are not the same thing. We humans tend to assume the two go together, but intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly clever, at least by our questionable human standards. If LaMDA's great-granddaughter equals or surpasses human intelligence, that does not necessarily mean she is also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but something deeply rooted in our nature as living beings.
Conscious machines will not arrive in 2023. Indeed, they may not be possible at all. What the future does hold, however, are machines that give the convincing impression of being conscious, even when we have no good reason to believe they actually are. They will be like the Müller-Lyer optical illusion: even when we know the two lines are the same length, we cannot help seeing them as different.
Machines of this kind will have passed not the Turing test – that flawed benchmark of machine intelligence – but rather the so-called Garland test, named after Alex Garland, director of the film Ex Machina. Inspired by dialogue from the film, the Garland test is passed when a person feels that a machine is conscious even though they know it is a machine.
Will computers pass the Garland test in 2023? I doubt it. But what I can predict is that claims like Lemoine's will be made again, leading to yet more cycles of hype, confusion, and distraction from the many problems that even today's AI is causing.