A Look at Holograms and Ethics

This forum is for discussing Chuck's videos as they are publicly released. And for bashing Neelix, but that's just repeating what I already said.
CrypticMirror
Captain
Posts: 926
Joined: Sat Feb 11, 2017 2:15 am

Re: A Look at Holograms and Ethics

Post by CrypticMirror »

Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Okay, Elon. Sure thing.
DanteC
Officer
Posts: 55
Joined: Wed Mar 08, 2017 9:13 pm

Re: A Look at Holograms and Ethics

Post by DanteC »

I'd like to see Chuck take this further with other media. Other than TOS, I don't really remember Trek dealing with AIs, other than Data. Weirdly enough, the only other show I can think of that dealt with sentient holograms was Red Dwarf, where they really don't make a distinction between software and a dead person. Rimmer is still the same smeg-head he was when alive; he's just a collection of 1s and 0s from now on. And they did interesting stories with it, the most memorable being the episode with the holographic ship. And that's just holograms. I'd love to see SFDebris cover Person of Interest at some point.

I remember saying something about this on another forum page (the Concerning Flight episode, I believe): holograms in Trek could be incredible game-changers. If they're essentially just light, an AI, and forcefields, then they can repair engines, ward off intruders, work as first responders, etc., and that's just fulfilling typical Starfleet duties. The emitter the Doctor has is way in advance of any tech they have (it'd have to not just project light and forcefields but store the hologram's code as well), but if they ever figured out how it works...

Finally, the Doctor is functionally immortal. If that means we get Robert Picardo, arguably the best thing about Voyager, in more Trek, then that's good.
clearspira
Overlord
Posts: 5668
Joined: Sat Apr 01, 2017 12:51 pm

Re: A Look at Holograms and Ethics

Post by clearspira »

CrypticMirror wrote: Sun Dec 20, 2020 11:59 am
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Okay, Elon. Sure thing.
I find it interesting that you only clipped that line and missed out the rest of it. His/her reasoning for that position is sound even if you do not agree with it. Which, BTW, isn't "we are slaves", it's "we too suffer from programming." The survival urge. The reproductive urge. You are born with these things; no one can just turn them off, and they thus factor into your actions no matter how much you may try to ignore them.

And speaking personally, whilst you do have free will to say yes or no to anything, it's also a lie to suggest that there isn't a limit. Take long hair on women and short hair on men as an example. We've long since passed the point where women couldn't have short hair or men long hair. You won't be cast out of your village for being a contrarian any more.

And yet, walk down any busy street and you will count something in the region of 90% for the former. But why? Is your "free will" really dictating to you that you love long hair that much? Or is it that gender is yet another form of programming, influencing you beyond your capability to resist? Programming based on a societal construct, perhaps, but it's still programming.

Free will is shackled far more than any of us like to admit. Particularly those of us who live in the West, where we've been raised with the idea of theoretical freedom and individuality.
Nealithi
Captain
Posts: 1438
Joined: Mon Jun 18, 2018 11:41 pm
Location: New Jersey

Re: A Look at Holograms and Ethics

Post by Nealithi »

I have several points on AI and the items mentioned here.
First, I always find our human fear of AI comes from media describing said intelligence as god-like in its abilities. It will always be able to take over the internet, control everything, and wipe us out! Why? Why would the AI actually be capable of any of that? A few points: I have a computer attached to the internet right now. It is neither controlling the internet nor being controlled by it. The connection is just an interface; to my computer it is no different than a sense of vision or taste is to us. Viruses spread everywhere, but they are not a singular controlling thing. They are copies that run independently. So even if I wrote an AI program, it is not moving into the internet and taking over.
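A quick toy Python sketch of that "copies run independently" point (the names and numbers are made up purely for illustration):

```python
# Minimal sketch: each spawned copy of a program gets its own private
# memory, so a copy cannot reach back and control the original --
# which is why a virus is many independent copies, not one big mind.
from multiprocessing import Process

state = {"hosts_reached": 0}

def copy_of_program(local_state):
    # Runs in a separate process, operating on its OWN copy of the data.
    local_state["hosts_reached"] += 1
    print("copy sees:", local_state["hosts_reached"])   # prints 1

if __name__ == "__main__":
    p = Process(target=copy_of_program, args=(state,))
    p.start()
    p.join()
    # The original is untouched: the copy's changes never came back.
    print("original sees:", state["hosts_reached"])     # prints 0
```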
That brings me to Trek and AI there. Data is aware and makes choices. He was not made to be a Starfleet officer; he chose that. He chose to resign in "The Measure of a Man" due to unfair treatment. He faces destruction every time he goes on an away mission, but that is not a guarantee. Who here would consent to having your brain disassembled? Any takers? No? So it is not unreasonable for Data to refuse that as well. I feel they did prove he has volition of his own and self-awareness.
That brings up the EMH. For the love of god, this is not how computers work. You don't take a program, you copy it. Now let's assume you can make a program that evolves over time and adds to itself. I can believe a program like the Doctor could achieve sapience. He never denies being a hologram. He has concerns about security so he can't be used against others. He worries about his own ethics somehow failing, that he harmed someone in order to cause change and prevent harm to others. That screams sapience to me. But as a program he has technical limitations. I do not recall any time the Doctor simply ran on a computer, or took over ship systems by being inside them. If there is no holo-emitter, he does not run. To me this means the program resides in his visual projection; he is unconscious unless the emitters are running him. The best example I have is a movie file. It is a program on a computer, but it can do nothing except be run and show images. It can't even use built-in commands unless it's running properly. So holograms require very specific computers to run on at all.
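For what it's worth, here's a toy Python sketch of the "you copy, you don't take" point (the EMH dictionary is obviously just an illustrative stand-in, not a claim about Trek's computers):

```python
# "Moving" software is really copy-then-delete: the result is a second,
# independent instance, not the original relocated.
import copy

emh = {"name": "EMH Mark I", "memories": ["delta quadrant"]}

# "Transferring" the Doctor to the mobile emitter really makes a duplicate:
mobile_emitter = copy.deepcopy(emh)
mobile_emitter["memories"].append("new patient")

print(emh["memories"])             # ['delta quadrant'] -- original unchanged
print(mobile_emitter["memories"])  # ['delta quadrant', 'new patient']
```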
Riedquat
Captain
Posts: 1898
Joined: Thu Mar 09, 2017 12:02 am

Re: A Look at Holograms and Ethics

Post by Riedquat »

clearspira wrote: Sun Dec 20, 2020 1:01 pm And yet, walk down any busy street and you will count something in the region of 90% for the former. But why? Is your ''free will'' really dictating to you that you love long hair that much? Or is it that gender is yet another form of programming that is influencing you beyond your capability to resist? Programming based on a societal construct perhaps, but its still programming.

Free will is shackled far beyond any of us like to admit. Particularly those of us who live in the West where we've been raised with the idea of theoretical freedom and individuality.
I disagree with that. That there are desires, whether innate or a result of the society you've been brought up in influencing you, doesn't change the concept of free will. We often act against the basic instincts (those who don't - the sort who'll punch anyone who annoys them, or press themselves on anyone they find attractive, end up in prison). There are probably very few who follow 100% every social norm. We can all choose, and sometimes we will. Free will isn't about not having any constraints (that would result in anarchy, and not surviving long). Where there's debate about it is about to what degree the universe is completely deterministic, and where it isn't where it's just random.

On the computer front I see no reason not to consider the possibility of a machine being sentient and sapient. It's the outcome that matters, not the implementation, so whether it's biological or electronic or mechanical or something else I don't regard as relevant. It does irk me though that the alread-raised points about the writers not understanding the difference between the hardware and the interface - there's no reason to believe in Trek that holograms are anything special, unless there are processes being operated by the configuration of forcefields and energy within them rather than the projecting computer (a definitely non-sentient example - have a hologram of a simple machine, like a waterwheel or lever and have it interact with the world).

I agree with those who say it's a combination of software and hardware. A sufficiently complex computer could run an exact simulation of a human being. If it does, then that combination is, in my book, just as much a sentient and sapient creature as any of us. The same machine could also be running some spreadsheets at the same time, which definitely aren't. And it could save the state of the human simulation and stop running it, just chugging along with some rather more prosaic calculations, and is then no more an AI than the machine I'm using to make this post.
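As a toy Python sketch of that save-and-suspend idea (the state and dynamics here are placeholders, obviously nothing like an actual brain simulation):

```python
# The "mind" is software state plus a machine executing it. Persist the
# state and stop stepping it, and the same hardware is just a computer
# again until the state is loaded and stepped once more.
import pickle

sim_state = {"tick": 0, "neurons": [0.1, 0.7, 0.3]}  # stand-in for a brain sim

def step(state):
    # Advance the simulation one tick (placeholder dynamics).
    state["tick"] += 1
    state["neurons"] = [n * 0.99 for n in state["neurons"]]

step(sim_state)

# Suspend: the state sits inert on disk; nothing is being "experienced".
with open("mind.pkl", "wb") as f:
    pickle.dump(sim_state, f)

# ... the machine runs spreadsheets for a week ...

# Resume exactly where it left off.
with open("mind.pkl", "rb") as f:
    restored = pickle.load(f)
step(restored)
print(restored["tick"])  # 2 -- carries on as if it was never stopped
```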

Probably worth throwing in Mass Effect's geth to this discussion too.
Thebestoftherest
Captain
Posts: 3740
Joined: Thu Feb 28, 2019 2:22 pm

Re: A Look at Holograms and Ethics

Post by Thebestoftherest »

clearspira wrote: Sun Dec 20, 2020 10:20 am
Thebestoftherest wrote: Sat Dec 19, 2020 11:00 pm
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
Don't take this the wrong way, but I feel that is a gross oversimplification of the human condition.
Human beings are nothing more than intelligent animals. The only thing that separates you from any other mammal on Earth is a million years of random chance in which we happened to get lucky.
That's kinda cynical.
clearspira
Overlord
Posts: 5668
Joined: Sat Apr 01, 2017 12:51 pm

Re: A Look at Holograms and Ethics

Post by clearspira »

Thebestoftherest wrote: Sun Dec 20, 2020 3:50 pm
clearspira wrote: Sun Dec 20, 2020 10:20 am
Thebestoftherest wrote: Sat Dec 19, 2020 11:00 pm
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
Don't take this the wrong way, but I feel that is a gross oversimplification of the human condition.
Human beings are nothing more than intelligent animals. The only thing that separates you from any other mammal on Earth is a million years of random chance in which we happened to get lucky.
That's kinda cynical.
To believe anything else is to believe that there is some purpose to human beings beyond us merely being the animal that got lucky. And that steers this debate onto God.
Robovski
Captain
Posts: 1217
Joined: Sat Mar 11, 2017 8:32 pm
Location: Checked out of here

Re: A Look at Holograms and Ethics

Post by Robovski »

clearspira wrote: Sun Dec 20, 2020 5:46 pm
Thebestoftherest wrote: Sun Dec 20, 2020 3:50 pm
clearspira wrote: Sun Dec 20, 2020 10:20 am
Thebestoftherest wrote: Sat Dec 19, 2020 11:00 pm
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.

Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.

----

Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it in. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.

You might say that an AI incapable of that emotion isn't fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?

... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
Don't take this the wrong way, but I feel that is a gross oversimplification of the human condition.
Human beings are nothing more than intelligent animals. The only thing that separates you from any other mammal on Earth is a million years of random chance in which we happened to get lucky.
That's kinda cynical.
To believe anything else is to believe that there is some purpose to human beings beyond us merely being the animal that got lucky. And that steers this debate onto God.
You disregard the selection pressure we put on ourselves once we started reasoning about our choices.
Fianna
Captain
Posts: 684
Joined: Sun Jan 14, 2018 3:46 pm

Re: A Look at Holograms and Ethics

Post by Fianna »

CrypticMirror wrote: Sun Dec 20, 2020 11:59 am
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Okay, Elon. Sure thing.
I mean, I think that we live in a deterministic universe. Every action is the result of actions that occurred previously, which were in turn the result of even earlier actions, which were the result of actions earlier still, and so on and so on, all the way back to the Big Bang. If you knew the exact position, motion, and composition of every speck of matter/energy in the universe, and had a powerful enough processor to analyze all that data, you could determine every single thing that was going to happen in the future with 100% accuracy.

Humans aren't an exception to that. If someone had the exact same body as me, and the exact same memories as me, and was placed in an environment exactly identical to the one I'm in now, then they would behave in exactly the same way, because my actions are simply the result of the various factors (whether biological or environmental) that have shaped me. I only appear to have free will because those factors are so complex and hard to analyze that trying to use them to predict my behavior can never be completely reliable.
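To make the determinism point concrete, here's a toy Python sketch (the agent, the seed, and the "choices" are purely illustrative stand-ins, not a claim about real brains):

```python
# An exact copy of the "agent" (same internal state, same inputs) behaves
# identically, run after run. Apparent free will is just our inability
# to see and compute the underlying state.
import random

def lifetime_of_choices(seed, steps=5):
    agent = random.Random(seed)          # fully determined internal state
    return [agent.choice(["tea", "coffee"]) for _ in range(steps)]

me      = lifetime_of_choices(seed=42)
my_copy = lifetime_of_choices(seed=42)  # identical body, memories, environment

print(me)
print(my_copy)
print(me == my_copy)  # True -- same state, same "decisions", every time
```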
Thebestoftherest
Captain
Posts: 3740
Joined: Thu Feb 28, 2019 2:22 pm

Re: A Look at Holograms and Ethics

Post by Thebestoftherest »

Fianna wrote: Sun Dec 20, 2020 8:28 pm
CrypticMirror wrote: Sun Dec 20, 2020 11:59 am
Fianna wrote: Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Okay, Elon. Sure thing.
I mean, I think that we live in a deterministic universe. Every action is the result of actions that occurred previously, which were in turn the result of even earlier actions, which were the result of actions earlier still, and so on and so on, all the way back to the Big Bang. If you knew the exact position, motion, and composition of every speck of matter/energy in the universe, and had a powerful enough processor to analyze all that data, you could determine every single thing that was going to happen in the future with 100% accuracy.

Humans aren't an exception to that. If someone had the exact same body as me, and the exact same memories as me, and was placed in an environment exactly identical to the one I'm in now, then they would behave in exactly the same way, because my actions are simply the result of the various factors (whether biological or environmental) that have shaped me. I only appear to have free will because those factors are so complex and hard to analyze that trying to use them to predict my behavior can never be completely reliable.
Yes, and if I sweated gold, I would never have to work a day in my life. I can't rely on that either.