A Look at Holograms and Ethics

This forum is for discussing Chuck's videos as they are publicly released. And for bashing Neelix, but that's just repeating what I already said.
Fianna
Captain
Posts: 685
Joined: Sun Jan 14, 2018 3:46 pm

Re: A Look at Holograms and Ethics

Post by Fianna »

If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
TGLS
Captain
Posts: 2931
Joined: Sat Feb 11, 2017 10:16 pm

Re: A Look at Holograms and Ethics

Post by TGLS »

Fianna wrote: Sat Dec 19, 2020 9:14 pm The crews treat each holographic character they encounter on the holodeck like it's its own, distinct entity, when really, it should be that the whole holodeck (or even the whole ship's computer system) is one vast entity, and the holographic characters are different roles it takes on to interact with the crew. Like, if I put a hand puppet on my left hand, and a different puppet on my right hand, I might be able to convince a toddler that each of the puppets is a different person, with separate identities and personalities, when really the only person the toddler is interacting with is me.
While obviously discussing fictional technology is problematic, I'm going to explain why I think that's wrong from a technical perspective.

Let's say we have a computer that can handle running 3 programs at once. Theoretically, I could put Moriarty, Fontaine, and the EMH all on the same computer and have them run simultaneously. It might look like the computer is doing some kind of one-man show (especially if it operates with lots of context switching), but that isn't entirely true.

The computer doesn't actually know anything beyond what the program is telling it to do. Say a program says "Put 1 at position 1. Put 1 at position 2. Put 1 in A. Loop: Load the value at position A into B. Load the value at position A+1 into C. Add B and C and store the result at position A+2. Add 1 to A. Go back to Loop." The computer doesn't understand what this does* any more than it understands what the Moriarty program does. Moriarty, who can interpret the inputs and outputs, is the intelligent and, perhaps, the self-aware one.

*It calculates the Fibonacci Sequence
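
For anyone who wants to see that routine in something more concrete, here's a minimal sketch in Python (purely illustrative; the dictionary-as-memory and the loop cap are my own stand-ins for the pseudocode above). The machine executes each step faithfully without any notion of what the numbers it produces mean:

```python
# A dumb "machine": memory is a dictionary of numbered positions,
# and the steps below are followed blindly, exactly as written.
memory = {1: 1, 2: 1}      # Put 1 at position 1. Put 1 at position 2.
A = 1                      # Put 1 in A.

for _ in range(10):        # "Loop" (capped here so the example terminates)
    B = memory[A]          # Load the value at position A into B.
    C = memory[A + 1]      # Load the value at position A+1 into C.
    memory[A + 2] = B + C  # Add B and C, store the result at position A+2.
    A += 1                 # Add 1 to A. Go back to Loop.

print([memory[i] for i in sorted(memory)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```

The routine produces the sequence perfectly well, but nothing in the machine "knows" that, which is the point.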
"I know what you’re thinking now. You’re thinking 'Oh my god, that’s treating other people with respect gone mad!'"
When I am writing in this font, I am writing in my moderator voice.
Spam-desu
clearspira
Overlord
Posts: 5676
Joined: Sat Apr 01, 2017 12:51 pm

Re: A Look at Holograms and Ethics

Post by clearspira »

Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
I like this take. I really do. And by coincidence, the TNG episode with the exocomps was just on.
Fianna
Captain
Posts: 685
Joined: Sun Jan 14, 2018 3:46 pm

Re: A Look at Holograms and Ethics

Post by Fianna »

Well, yeah, obviously the hardware on its own can't understand anything. It's the hardware plus the software that makes an intelligent being.

But in order for Moriarty, Vic, and the Doctor to all be running on the same computer, presumably there would also need to be some deeper level of programming moderating them; otherwise it'd be like trying to install Windows on a computer that's already running a Mac operating system. And if those three programs are all part of a larger program, connecting them all together ... well, if the Borg Collective decided to have one drone act like a 50's lounge singer, and another act like a cranky, egotistical doctor, effectively running separate programs for each drone, they'd still be part of the same collective consciousness.
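
To make that concrete, here's a toy sketch (my own invention, nothing from the show or any real holodeck spec) of what that deeper moderating layer might look like: a single supervisor owns the shared computer and simply dispatches to whichever character "role" the crew happens to be addressing.

```python
# Hypothetical sketch: one supervisor hosting several character programs.
# Names and behaviors are made up purely for illustration.
class CharacterProgram:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # how this "role" reacts to input

class HolodeckSupervisor:
    """The single entity actually running; characters are roles it hosts."""
    def __init__(self):
        self.programs = {}

    def load(self, program):
        self.programs[program.name] = program

    def interact(self, name, prompt):
        # Every interaction passes through the same supervisor,
        # no matter which "person" the crew thinks they're talking to.
        return self.programs[name].respond(prompt)

ship = HolodeckSupervisor()
ship.load(CharacterProgram("Vic", lambda p: f"Hey, pallie: {p}"))
ship.load(CharacterProgram("EMH", lambda p: f"Please state the nature of {p}."))

print(ship.interact("Vic", "how about a song?"))
print(ship.interact("EMH", "the medical emergency"))
```

Whether that supervisor is the "real" person, or just the stage the characters perform on, is exactly the hand-puppet question.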
Al-1701
Officer
Posts: 332
Joined: Sat Jan 11, 2020 2:51 pm

Re: A Look at Holograms and Ethics

Post by Al-1701 »

Really, "The Measure of the Man" probably brings up the ethics of artificial intelligence. Who are we to say what is or is not sapient based on our very limited understanding of the concept? Even ignoring the possibility of other species on Earth are sapient with the language barrier just being insurmountable for us to confirm, what if we met aliens with a silicon-based physiology and a completely different structure to what could be best described as a brain and nervous system. Because their minds do not work the same way as ours, does not make them not sapient.

Now, Data is easier to label as sapient, being a distinct being. Holograms are a murkier issue due to being programs within a larger system. Where do they stop and where does the rest of the ship's system start? That said, the Doctor could be separated from the rest of the ship's systems and run on his own computer, which would keep him from being affected if the ship's main computer were hacked. Vic could also be on a separate device plugged into the holosuite to run his program. The only one who was part of a larger system without question was Moriarty, who was actually taking over the computer to sustain himself, and who would eventually be placed on a separate device so that he could have a sense of freedom as he roamed a simulated universe.

Speaking of Moriarty and the Countess, I wonder if Barclay brought them out after Starfleet got their hands on the mobile emitter and holographic projection became more commonplace, so they could travel in the real world.

Going by this, I think the person of Minuet might have been the Bynars' computer program, using the hologram as its avatar. That is why, when it was returned to the planet, the hologram was just a normal holographic character.

The problem is we don't know. We're only just now developing anything that could come close to being described as true AI, so we don't know how far it could go. As Chuck said, we're discussing what is a purely fictional technology. Still, it is a very good discussion of how we would deal with self-aware life that is different from us.

And as for these characters, they are self-aware. Sure, Vic stays a 1950s singer and bartender, but he is fully aware of the fact that he's a program using a body made of light and forcefields. He also cuts Nog off when he realizes Nog is trying to live in his program to escape the real world.

The Doctor, I think, was an accident. The need to evolve to be an effective doctor (who can be forced to face the unknown and needs to learn and adapt), plus being depended on as the ship's doctor on a constant basis, caused him to develop into a sapient being. He has base programming, but he evolved past that and even became interested in things beyond medicine.

And this does bring up the issue of ethics. First, we think of how we value our lives and try to apply that to holograms (and androids), when it may not apply. If you noticed, I have called the holograms avatars for the programs. They are not the programs themselves. So, even if holograms are self-aware, they might not have a problem with their holographic representation being used and abused, because it can just be rebooted good as new. The Hirogen programmed the holograms they created to feel and react to pain and to fear death. The holograms on the holodeck/holosuite don't have to, since they would see the computer holding the program and the projectors as their actual body, the thing their self-preservation instincts would want to protect.
Fianna
Captain
Posts: 685
Joined: Sun Jan 14, 2018 3:46 pm

Re: A Look at Holograms and Ethics

Post by Fianna »

Al-1701 wrote: Sat Dec 19, 2020 10:18 pm Speaking of Moriarty and the Countess, I wonder if Barclay brought them out after Starfleet got their hands on the mobile emitter and holographic projection became more commonplace, so they could travel in the real world.
Though, to do that, they'd need to explain to Moriarty and the Countess that the reality they'd been experiencing for the last several years was fake ... and once you do that, how do you convince them that the reality they're experiencing now is actually real, and not another simulation meant to fool them?
Al-1701
Officer
Posts: 332
Joined: Sat Jan 11, 2020 2:51 pm

Re: A Look at Holograms and Ethics

Post by Al-1701 »

Fianna wrote: Sat Dec 19, 2020 10:38 pm
Al-1701 wrote: Sat Dec 19, 2020 10:18 pm Speaking of Moriarty and the Countess, I wonder if Barclay brought them out after Starfleet got their hands on the mobile emitter and holographic projection became more commonplace, so they could travel in the real world.
Though, to do that, they'd need to explain to Moriarty and the Countess that the reality they'd been experiencing for the last several years was fake ... and once you do that, how do you convince them that the reality they're experiencing now is actually real, and not another simulation meant to fool them?
It's a good thing Barclay is a terrible liar and good at making profuse apologies. And I think Moriarty would understand it was just a matter of technology. Also, he would congratulate Picard on turning his own ploy back on him. "Well played, Picard, well played."
Thebestoftherest
Captain
Posts: 3742
Joined: Thu Feb 28, 2019 2:22 pm

Re: A Look at Holograms and Ethics

Post by Thebestoftherest »

Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
Don't take this the wrong way, but I feel that is a gross oversimplification of the human condition.
Sir Will
Officer
Posts: 476
Joined: Sat Jul 15, 2017 6:30 am

Re: A Look at Holograms and Ethics

Post by Sir Will »

clearspira wrote: Sat Dec 19, 2020 8:57 pm
Thebestoftherest wrote: Sat Dec 19, 2020 8:36 pm I can't agree with that. Data is considered a person; why would a being made of data, electricity, and metal be considered real when a being made of data, electricity, and light is considered not real?
I agree with C.Mirror. And time to be controversial - I don't think Data is a person either. He is a machine programmed to emulate humans as closely as possible. Why do you think that his deepest desire in life is to become more human?

I think the only Soong-android that was a true AI was Lore. He was clearly operating with desires and a will of his own. Only when Data received the emotion chip did he start to convince me that he had begun to break his programming - which was the entire reason why Soong made it.
Well, you're both wrong.
clearspira
Overlord
Posts: 5676
Joined: Sat Apr 01, 2017 12:51 pm

Re: A Look at Holograms and Ethics

Post by clearspira »

Thebestoftherest wrote: Sat Dec 19, 2020 11:00 pm
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
Don't take this the wrong way, but I feel that is a gross oversimplification of the human condition.
Human beings are nothing more than intelligent animals. The only thing that separates us from any other mammal on Earth is a million years of random chance in which we happened to get lucky.