A Look at Holograms and Ethics

This forum is for discussing Chuck's videos as they are publicly released. And for bashing Neelix, but that's just repeating what I already said.

A Look at Holograms and Ethics

Post by CrypticMirror »

Holograms and Ethics video.
https://sfdebris.com/videos/startrek/zholodeck.php

I can't see holograms as people. They are elaborate interface elements between a computer and a person, a glorified version of Microsoft's Clippy. And they are all, even the Doc from Voyager, nothing more than that. If they are sapient, then it is the ship's computer behind them which is sapient, not the interface. None of them can evolve beyond their programming, because they are always a product of the programming of the computer behind them. There seems to be no expectation that the Voyager computer is alive, or the computer behind Quark's holosuites, and that means that any degree of personhood is something we, the users, project onto them.

Just like the droids from Star Wars. They are appliances and interfaces, given the simulation of personalities for ease of user interaction, but nothing more than that. Even Moriarty is just lines of code that have been edited to remove his perception filter and to trawl beyond his original library for additional interactions. Delete his programme, and it would be no more murder than turning off a lamp.

Re: A Look at Holograms and Ethics

Post by Thebestoftherest »

I can't agree with that. Data is considered a person, so why would a being made of data, electricity and metal be considered real when a being made of data, electricity and light is considered not real?

Re: A Look at Holograms and Ethics

Post by CrypticMirror »

Thebestoftherest wrote: Sat Dec 19, 2020 8:36 pm I can't agree with that. Data is considered a person, so why would a being made of data, electricity and metal be considered real when a being made of data, electricity and light is considered not real?
I didn't say they weren't real, just not people or truly sapient. Not unless the computer behind them is. It is the computer behind them that is pulling those strings. They are real [within the conceit of the fictional world], but real just means they are a thing which exists. Picard's desk is real. The viewscreen is real, and the holograms are exactly as real as those are. No more, no less.

Re: A Look at Holograms and Ethics

Post by TGLS »

CrypticMirror wrote: Sat Dec 19, 2020 8:47 pm
Thebestoftherest wrote: Sat Dec 19, 2020 8:36 pm I can't agree with that. Data is considered a person, so why would a being made of data, electricity and metal be considered real when a being made of data, electricity and light is considered not real?
I didn't say they weren't real, just not people or truly sapient. Not unless the computer behind them is. It is the computer behind them that is pulling those strings. They are real [within the conceit of the fictional world], but real just means they are a thing which exists. Picard's desk is real. The viewscreen is real, and the holograms are exactly as real as those are. No more, no less.
So would Data be considered sapient? Someone who had their brain simulated on a computer? Someone who had their brain replaced with computerized neurons? Someone who had only part of their brain replaced with computerized neurons? My cat?

Re: A Look at Holograms and Ethics

Post by clearspira »

Thebestoftherest wrote: Sat Dec 19, 2020 8:36 pm I can't agree with that. Data is considered a person, so why would a being made of data, electricity and metal be considered real when a being made of data, electricity and light is considered not real?
I agree with C.Mirror. And time to be controversial - I don't think Data is a person either. He is a machine programmed to emulate humans as closely as possible. Why do you think that his deepest desire in life is to become more human?

I think the only Soong-android that was a true AI was Lore. He was clearly operating with desires and a will of his own. Only when Data received the emotion chip did he start to convince me that he had begun to break his programming - which was the entire reason why Soong made it.

Re: A Look at Holograms and Ethics

Post by clearspira »

TGLS wrote: Sat Dec 19, 2020 8:56 pm
CrypticMirror wrote: Sat Dec 19, 2020 8:47 pm
Thebestoftherest wrote: Sat Dec 19, 2020 8:36 pm I can't agree with that. Data is considered a person, so why would a being made of data, electricity and metal be considered real when a being made of data, electricity and light is considered not real?
I didn't say they weren't real, just not people or truly sapient. Not unless the computer behind them is. It is the computer behind them that is pulling those strings. They are real [within the conceit of the fictional world], but real just means they are a thing which exists. Picard's desk is real. The viewscreen is real, and the holograms are exactly as real as those are. No more, no less.
So would Data be considered a person? Someone who had their brain simulated on a computer? Someone who had their brain replaced with computerized neurons? Someone who had only part of their brain replaced with computerized neurons?
Well... this gets into why I ridicule the idea of transhumanism. I simply do not believe that transferring a human mind into a computer is anything more than a copy-and-paste job. I die, and then someone else walks off with my memories. And that's the best-case scenario. I might be left a vegetable whilst someone else walks off with my memories.

As far as I am concerned, any robot that has my memories is just a robot. It's a tape recording running a program. Nothing more.

As for someone who only has part of their brain replaced, for me it depends on what part is being replaced. A chipset to enhance your brainpower is fine; someone completely removing your prefrontal cortex is something else entirely.

Re: A Look at Holograms and Ethics

Post by Beelzquill »

Is it possible for a nonsapient being to create a sapient one? That is the question with CrypticMirror's "the computer is pulling the strings". Personally, I think it is possible for the computer to be nonsapient yet carry out algorithms designed by humans that accidentally result in self-realization. I think Vic Fontaine is a person, I'm not sure about Moriarty (it's been a decade since I've seen any of his episodes), and I think the EMH is more or less a person.

Re: A Look at Holograms and Ethics

Post by Beelzquill »

clearspira wrote: Sat Dec 19, 2020 9:03 pm
Well... this gets into why I ridicule the idea of transhumanism. I simply do not believe that transferring a human mind into a computer is anything more than a copy-and-paste job. I die, and then someone else walks off with my memories. And that's the best-case scenario. I might be left a vegetable whilst someone else walks off with my memories.

As far as I am concerned, any robot that has my memories is just a robot. It's a tape recording running a program. Nothing more.

As for someone who only has part of their brain replaced, for me it depends on what part is being replaced. A chipset to enhance your brainpower is fine; someone completely removing your prefrontal cortex is something else entirely.
Wouldn't you yourself be a mere clone of yourself by that logic? Doesn't the human body replace every cell, including the neurons, every five years or so? What if your brain were replaced just as gradually with nanites or something?

Re: A Look at Holograms and Ethics

Post by Fianna »

The problem with how Star Trek treats holograms is that the writers often demonstrated little understanding of the difference between software and hardware. Observe how, when the Doctor or Moriarty is transferred from one computer system to another, it's treated as though they have been physically moved to a different location, rather than remaining on the old system while a copy of them is created on the new one.
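
To make that concrete, here's a toy sketch of what a "transfer" between two systems actually amounts to (Python, with made-up file names; a hypothetical illustration, not anything from the show):

import shutil
from pathlib import Path

def transfer_program(source: Path, destination: Path, delete_original: bool = False) -> Path:
    """Copy `source` to `destination`; the original survives unless we explicitly delete it."""
    new_copy = Path(shutil.copy2(source, destination))  # a byte-for-byte duplicate is created
    if delete_original:
        source.unlink()  # only this step makes it look like a "move"
    return new_copy

# e.g. transfer_program(Path("emh_program.bin"), Path("mobile_emitter/emh.bin"))
# leaves both copies in existence at once, which is exactly the scenario the episodes gloss over.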

Now, I'm hardly well-versed in computer engineering, but my understanding is that it's nonsensical to talk about a program being intelligent or self-aware. A computer can be intelligent or self-aware, while the program determines the form of that intelligence.

For Data, that distinction is irrelevant, because his positronic brain is devoted entirely to running what could be called the "Data program", and no one tries running a separate program on his hardware or downloading his programming onto a different computer. But for the holograms, the self-aware ones are being run on the same computer system that's running all the non-self-aware holograms, as well as running all the props and settings and other aspects of the holodeck.

The crews treat each holographic character they encounter on the holodeck like it's its own, distinct entity, when really, it should be that the whole holodeck (or even the whole ship's computer system) is one vast entity, and the holographic characters are different roles it takes on to interact with the crew. Like, if I put a hand puppet on my left hand, and a different puppet on my right hand, I might be able to convince a toddler that each of the puppets is a different person, with separate identities and personalities, when really the only person the toddler is interacting with is me.

----

On a separate note, I'd like to point something out: an AI being self-aware is not the same as having free will. The AI might be able to pass the Turing Test, and might be able to express emotions, wants and desires ... but all those wants and desires still had to be programmed into them. So if a computer programmer knows what they're doing, they could simply program the AI's driving passions to be things like "serving humans is awesome!"

Moriarty and the Doctor have desires at odds with their intended purpose, because them being sapient beings was an accident. Meanwhile, Vic Fontaine's sapience was (as far as we know) a deliberate choice by his creator. And since his creator wanted a lounge singer for a 1950's casino simulation, Vic was designed to want nothing more out of life than to be a lounge singer in a 1950's casino simulation.

(I think it was Douglas Adams who posited a society that solved the ethical issues of eating meat by genetically engineering animals who could talk, who were fully self-aware and intelligent ... and who loved being killed and eaten, who would eagerly consent to it without reservation. They'd even recommend parts of their body as being especially tender.)

Re: A Look at Holograms and Ethics

Post by clearspira »

Fianna wrote: Sat Dec 19, 2020 9:14 pm The problem with how Star Trek treats holograms is that the writers often demonstrated little understanding of the difference between software and hardware. Observe how, when the Doctor or Moriarty is transferred from one computer system to another, it's treated as though they have been physically moved to a different location, rather than remaining on the old system while a copy of them is created on the new one.

Now, I'm hardly well-versed in computer engineering, but my understanding is that it's nonsensical to talk about a program being intelligent or self-aware. A computer can be intelligent or self-aware, while the program determines the form of that intelligence.

For Data, that distinction is irrelevant, because his positronic brain is devoted entirely to running what could be called the "Data program", and no one tries running a separate program in his hardware or downloading his programming onto a different computer. But for the holograms, the self-aware ones are being run on the same computer system that's running all the non-self-aware holograms, as well as running all the props and settings and other aspects of the holodeck.

The crews treat each holographic character they encounter on the holodeck like it's its own, distinct entity, when really, it should be that the whole holodeck (or even the whole ship's computer system) is one vast entity, and the holographic characters are different roles it takes on to interact with the crew. Like, if I put a hand puppet on my left hand, and a different puppet on my right hand, I might be able to convince a toddler that each of the puppets is a different person, with separate identities and personalities, when really the only person the toddler is interacting with is me.

----

On a separate note, I'd like to point something out: an AI being self-aware is not the same as having free will. The AI might be able to pass the Turing Test, and might be able to express emotions, wants and desires ... but all those wants and desires still had to be programmed into them. So if a computer programmer knows what they're doing, they could simply program the AI's driving passions to be things like "serving humans is awesome!"

Moriarty and the Doctor have desires at odds with their intended purpose, because them being sapient beings was an accident. Meanwhile, Vic Fontaine's sapience was (as far as we know) a deliberate choice by his creator. And since his creator wanted a lounge singer for a 1950's casino simulation, Vic was designed to want nothing more out of life than to be a lounge singer in a 1950's casino simulation.

(I think it was Douglas Adams who posited a society that solved the ethical issues of eating meat by genetically engineering animals who could talk, who were fully self-aware and intelligent ... and who loved being killed and eaten, who would eagerly consent to it without reservation. They'd even recommend parts of their body as being especially tender.)
For me, free will is the only true test of what I would term to be an AI. Lore, Moriarty and the Doctor all display signs of wanting to be more and achieving it. Data for me never does until he gets the emotion chip upgrade.

A good example would be Kryten. He was made to scrub floors and clean toilets. And yet Lister helped him break that programming so now he scrubs floors and cleans toilets BECAUSE HE LOVES IT. No one is forcing him to do it. If he wants to, he will go on holiday and frequently does.

Frankly I have my doubts about Vic Fontaine being alive. I was never convinced that he was anything more than a lounge singer and barman programmed to be a better listener than Guinan and a better counsellor than Ezri.

Regarding the animal engineered to be eaten, it is basically the "she was asking for it" argument. Sex doll consent is actually a legitimate concern for some people, believe it or not. How intelligent does a robot woman saying "yes" have to get before she wants to say "no" but doesn't have the vocal ability to express it?