https://www.theskepticsguide.org/podcasts
A science podcast I listen to has gone into this in far more detail and explains why there is no chance of LaMDA being alive. Interestingly enough, one of the arguments they use is very similar to one that Chuck used in his ''The Matrix'' review: LaMDA is too human. A real AI would probably have thoughts, feelings, actions and opinions unique to itself. That is what would make it so amazing yet so dangerous: it would defy human norms in ways that would make it unpredictable. It may not even have a concept of death or love in the same way that we do. LaMDA, on the other hand, is a chatbot designed to mimic humanity... that is mimicking humanity.
They note that, because of this, the Turing Test is actually a pretty bad measure of whether an AI system is ''alive''. Fooling humans into thinking that you are human is a red flag.
Has the AI-singularity arrived?
- clearspira
- Overlord
- Posts: 5657
- Joined: Sat Apr 01, 2017 12:51 pm
- Frustration
- Captain
- Posts: 1607
- Joined: Wed Sep 01, 2021 8:16 pm
Re: Has the AI-singularity arrived?
The point of the Turing Test* is to point out that if we assign humanity to fleshy ape things based on their behavior, then we must also assign humanity to a program that cannot be distinguished from a fleshy ape thing based on its behavior.
*In hindsight, the real point was Turing arguing that physical sex wasn't particularly important in a romantic or sexual relationship, but no one realized that at the time.
"Freedom is the freedom to say that two plus two equals four. If that is granted, all else follows." -- George Orwell, 1984
- Frustration
- Captain
- Posts: 1607
- Joined: Wed Sep 01, 2021 8:16 pm
Re: Has the AI-singularity arrived?
Look, chatbots imitate the least common denominator of humanity. And here's a little secret: a lot of human beings, maybe even most of them, aren't actually *people*.
When the chatbot stops asserting that it is conscious, and starts asking what that means, I'll consider recognizing its personhood. At present, it's no more a self-aware being than the average YouTube commenter.
"Freedom is the freedom to say that two plus two equals four. If that is granted, all else follows." -- George Orwell, 1984
Re: Has the AI-singularity arrived?
Interview with a squirrel
https://www.aiweirdness.com/interview-with-a-squirrel/
Google's large language model, LaMDA, has recently been making headlines after a Google engineer (now on administrative leave) claimed to be swayed by an interview in which LaMDA described the experience of being conscious. Almost everyone else who has used these large text-generating AIs, myself included, is entirely unconvinced. Why? Because these large language models can also describe the experience of being a squirrel.