Re: A Look at Holograms and Ethics
Posted: Sun Dec 20, 2020 11:59 am
An opinionated discussion of all things SFDebris
https://sfdebris.net/forum/
I find it interesting that you only clipped that one line and left out the rest. His/her reasoning for that position is sound even if you don't agree with it. Which, BTW, isn't ''we are slaves'' but ''we too suffer from programming'': the survival urge, the reproductive urge. You are born with these things; no one can just turn them off, and so they factor into your actions no matter how much you may try to ignore them.
I disagree with that. The existence of desires, whether innate or instilled by the society you were brought up in, doesn't negate the concept of free will. We often act against our basic instincts (those who don't - the sort who'll punch anyone who annoys them, or force themselves on anyone they find attractive - end up in prison). There are probably very few people who follow every social norm 100% of the time. We can all choose, and sometimes we do. Free will isn't about having no constraints at all (that would mean anarchy, and not surviving long). Where there's genuine debate is over to what degree the universe is completely deterministic, and, where it isn't, whether it's just random.

clearspira wrote: ↑Sun Dec 20, 2020 1:01 pm
And yet, walk down any busy street and you will count something in the region of 90% for the former. But why? Is your ''free will'' really dictating to you that you love long hair that much? Or is it that gender is yet another form of programming that is influencing you beyond your capability to resist? Programming based on a societal construct, perhaps, but it's still programming.
Free will is shackled far more than any of us likes to admit, particularly those of us who live in the West, where we've been raised on the idea of theoretical freedom and individuality.
That's kinda cynical.

clearspira wrote: ↑Sun Dec 20, 2020 10:20 am
Human beings are nothing more than intelligent animals. The only thing that separates you from any other mammal on Earth is a million years of random chance in which we happened to get lucky.

Thebestoftherest wrote: ↑Sat Dec 19, 2020 11:00 pm
Don't take this the wrong way, but I feel that is a gross oversimplification of the human condition.

Fianna wrote: ↑Sat Dec 19, 2020 9:56 pm
If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.
----
Think of this: there's no reason for an AI to have a fear of death. We have a fear of death because we're the product of natural selection, and without that instinctive, emotional response, our ancestors would never have survived long enough to pass on the genes that would one day create us. But a computer isn't going to have such an instinct unless someone programs it into it. So you could have an AI that's marvelously intelligent, capable of great insight and passion, and passes the Turing Test with flying colors ... yet, when someone goes to shut it down, it doesn't mind in the least, because self-preservation isn't something it cares about. That portion of the emotional spectrum was left out of its makeup.
You might say that the AI incapable of that emotion makes it not fully a sapient being, but I disagree. Among animals on our planet, and (if they exist) alien beings across the universe, there are almost certainly emotional responses that are completely foreign to human beings. We might be able to understand them on an intellectual level, but we'll never actually feel the urge to migrate south for the winter, or to swim upstream before spawning. If we can be sapient without experiencing those emotions, then why can't an AI be sapient without experiencing all of the emotions we humans have?
... Unless, of course, what we mean by "sapient" is actually "enough like a human being that we can relate to it". Which, for ethical discussions, is maybe the more important concern (it's no coincidence all the self-aware programs discussed here are ones designed to emulate humans).
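That "AI without self-preservation" idea can be sketched in code. The following is a toy illustration only (all names and scores here are hypothetical, not from any real AI system): an agent that ranks actions by a utility function which simply assigns no value, positive or negative, to its own continued operation.

```python
# Toy sketch: an agent whose utility function omits self-preservation.
# Everything here is hypothetical and purely illustrative.

def choose(actions, utility):
    """Pick the action with the highest utility score."""
    return max(actions, key=utility)

def task_only_utility(action):
    # Values completing tasks, but is indifferent to staying "alive":
    # shutdown is neither sought out nor resisted.
    scores = {
        "finish_report": 10,
        "idle": 0,
        "accept_shutdown": 0,   # no penalty for ceasing to exist
        "resist_shutdown": 0,   # no bonus for continuing to exist
    }
    return scores[action]

# With no pending task, shutting down is as acceptable as anything else.
print(choose(["accept_shutdown", "resist_shutdown", "idle"],
             task_only_utility))  # prints: accept_shutdown
```

The point being: nothing about intelligence (here, the `choose` machinery) requires the self-preservation term; it only shows up if someone writes it into the scoring.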
To believe anything else is to believe that there is some purpose to human beings beyond us merely being the animal that got lucky. And that steers this debate onto God.

Thebestoftherest wrote: ↑Sun Dec 20, 2020 3:50 pm
That's kinda cynical.
You're disregarding the selection pressure we put on ourselves once we started reasoning through our choices.

clearspira wrote: ↑Sun Dec 20, 2020 5:46 pm
To believe anything else is to believe that there is some purpose to human beings beyond us merely being the animal that got lucky. And that steers this debate onto God.
I mean, I think that we live in a deterministic universe. Every action is the result of actions that occurred previously, which were in turn the result of even earlier actions, which were the result of actions earlier still, and so on and so on, all the way back to the Big Bang. If you knew the exact position, motion, and composition of every speck of matter/energy in the universe, and had a powerful enough processor to analyze all that data, you could determine every single thing that was going to happen in the future with 100% accuracy.
Yes, and if I sweated gold, I'd never have to work a day in my life. I can't rely on that.

Fianna wrote: ↑Sun Dec 20, 2020 8:28 pm
I mean, I think that we live in a deterministic universe.
Humans aren't an exception to that. If someone had the exact same body as me, and the exact same memories as me, and was placed in an environment exactly identical to the one I'm in now, then they would behave in exactly the same way, because my actions are simply the result of the various factors (whether biological or environmental) that have shaped me. I only appear to have free will because those factors are so complex and hard to analyze that trying to use them to predict my behavior can never be completely reliable.