Wow this is a fun discussion.
CrypticMirror wrote: ↑Sat Dec 19, 2020 8:13 pm
I can't see holograms as people. They are elaborate interface elements between a computer and a person, a glorified version of Microsoft's Clippy. And they are all, even the Doc from Voyager, nothing more than that. If they are sapient, then it is the ship's computer behind them which is sapient and not the interface. None of them can be evolving beyond their programming, because they are always a product of the programming of the computer behind them. There seems to be no expectation that the Voyager computer is alive, or the computer behind Quark's holosuites, and that means that any degree of personhood is something we the user projects onto them.
Just like the droids from Star Wars. They are appliances and interfaces, given the simulation of personalities for ease of user interaction, but nothing more than that. Even the Moriarty is just lines of code that have been edited to remove their perception filter, and trawl from beyond their original library for additional interactions. Delete his programme, and it would be no more murder than turning off a lamp.
A problem is that how things like holograms work is all over the place. Sometimes it is indeed treated as computer puppetry: a human being appears to be talking, but it could just as easily be a desk lamp, because nothing about the thing doing the talking directs what is said; it is just a recording, and so on. Sometimes, however, the implication is that holograms are actually complicated low-level simulations of the human beings projected. This is apparent in, say, Picard, where the various holograms on the ship each have bits and pieces of the captain's personality and memory because they are all based on brain scans of him. So it seems like they actually have little holographic neurons, and their behaviour is the result of the interactions between all of those little holo-neurons working together to create the aggregate effect of high-level behaviour, just as in old-fashioned human beings.
We know from Voyager that you can replace working lungs with holographic ones, so presumably we can replace neurons with holo-neurons likewise. If someone had a degenerative nerve condition with neurons slowly dying, you could replace each one with a holo-neuron, and presumably the person's behaviour and biological function would never change. I guess some people think that as neurons get replaced by holo-neurons the person would somehow go from being sapient to non-sapient despite nothing changing, which is really unconvincing and unmotivated to me.
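Just to make the gradual-replacement point concrete, here is a minimal toy sketch (all names and numbers are hypothetical, nothing to do with how Trek holo-tech actually works). Each "neuron" is just a function from inputs to an output, and a "holo-neuron" is a different implementation that computes exactly the same function; swapping units one at a time never changes the network's behaviour at any stage.

```python
def bio_neuron(weights, inputs):
    # simple threshold unit standing in for a biological neuron
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0.5 else 0

def holo_neuron(weights, inputs):
    # functionally identical replacement, just implemented differently
    total = 0.0
    for w, x in zip(weights, inputs):
        total += w * x
    return int(total > 0.5)

def run_network(units, layer_weights, stimulus):
    # feed the same stimulus through whichever mix of units we currently have
    return [unit(w, stimulus) for unit, w in zip(units, layer_weights)]

layer_weights = [[0.2, 0.9], [0.7, 0.1], [0.4, 0.4]]
stimulus = [1, 1]

units = [bio_neuron] * 3
baseline = run_network(units, layer_weights, stimulus)

# replace one neuron at a time and check that behaviour never changes
for i in range(len(units)):
    units[i] = holo_neuron
    assert run_network(units, layer_weights, stimulus) == baseline

print("behaviour identical at every stage of replacement:", baseline)
```

At no point in that loop is there a step where the outputs change, which is exactly why "the person stops being sapient partway through" seems so unmotivated to me.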
The fact that the computer running them is not itself sapient (or at least not sapient in the same way) is no problem. The atoms that make up your body are not intrinsically sapient (or at least not intrinsically your sapience), since they have happily been the atoms of untold numbers of other people through history. Nothing you do can change the basic way the atoms that make you up behave; they obey the same laws of physics regardless. The particular combination of atoms over time is in some way sufficient to be you, so why shouldn't the right combination of computing actions be, in some way, the hologram?
Things get hairier when we ask whether there is really a difference in kind between, say, generating a machine's low-level behaviour with artificial neurons and generating it with more streamlined functional structures that produce the same behaviour and function.
Basically I am sure that in principle any characteristic of human beings can be achieved in a machine, so either human beings are NOT sapient or machines can be. It is really tricky to tell how holograms and other Star Trek AIs are supposed to work, so I see no basis on which you could rule out their sapience (unless we want to deny human sapience; maybe stopping human hearts is no more murder than turning off a lamp). Any physical characteristic could certainly be achieved to any level of specificity and detail you like, and why not any non-physical property too? If there is some non-physical soul stuff, who's to say Star Trek technology does not work by manipulating that? Also, if you are invoking some kind of non-physical stuff (or even physical stuff you don't understand) and arbitrarily saying no machine could have it, why not say brunettes lack it (brunettes don't have souls, etc.)? That sounds about equally motivated. If the premise is that we just don't understand the principle involved, and so can't describe it or say who it applies to, then saying it cannot apply to brunettes makes as much sense as saying it cannot apply to machines, to me anyway.
Note that you can often tell a machine is not sapient (or at least not sapient in the same way as a human), as in the example invoked elsewhere of a recording. A recording looks exactly like the original (from a certain point of view), but in this case we know exactly why it is not sapient like the thing recorded. First, on a purely functional level, it says the same thing no matter what. Second, we can often take the recording apart, examine the tape or whatever, see the process by which its words are formed, and realize it is nothing like the causal process that produces the words in the original instance.
Even if you take a bunch of recordings and have different ones play in response to prompts, that will be harder to catch out in conversation. In terms of function, even a finite number of recordings (say, a recording of every phoneme in English) could theoretically be strung together into an unbounded number of conversations, so we might never catch the machine out as failing to be original. But it is still easy to analyze and dissect the causes of that thing's behaviour and see why it differs: you could essentially break it open and find every conversation tree. That does not seem to be the case with the causes of human conversation; the options are not already sitting in the brain waiting for the right stimulus, they are generated by the structure of the brain. However, when we get to possibilities like a machine built from artificial neurons (little bits of matter with the same kind of causal powers as natural organic neurons), generating conversations through the combined action of those neurons, it seems to me impossible to say what makes the machine's conversation different from a human's: either both are sapient or neither is.
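Here is a rough sketch of the contrast I mean (purely illustrative, the responders and their replies are made up). The "bank of recordings" responder can be broken open and its whole conversation tree read off by inspection, whereas even a crude generative process builds its reply on the fly from the prompt and its internal state, so the possible outputs are not stored anywhere in advance.

```python
# canned responder: every possible output already sits in the table
CANNED = {
    "hello": "Hello. How can I help you?",
    "what is your name": "I am the ship's computer.",
}

def canned_reply(prompt):
    # plays back a stored recording keyed on the prompt, nothing more
    return CANNED.get(prompt.lower().strip(), "Please restate your request.")

def generative_reply(prompt, state):
    # a (very crude) generative process: the reply emerges from the interaction
    # between the prompt and the system's evolving internal state
    state["turns"] += 1
    words = prompt.split()
    return f"Turn {state['turns']}: you used {len(words)} words, last was '{words[-1]}'."

print(canned_reply("Hello"))
print(sorted(CANNED.values()))  # the entire "conversation tree", laid bare

state = {"turns": 0}
print(generative_reply("what is your name", state))
print(generative_reply("tell me about holograms and sapience", state))
```

Obviously the generative toy above is nowhere near a mind either; the point is only that the two designs fail (or would succeed) for structurally different reasons, which you can only see by dissecting the causal process rather than just watching the conversation.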
Fianna wrote: ↑Sat Dec 19, 2020 9:56 pm
If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.
I agree that machines and humans have (in principle) the same potential for free will, but I am more of the opinion that it totally is a thing that exists.
I have always thought that a better account of what people tend to mean by free will actually requires determinism. Essentially, what most people worry about with free will is whether people are responsible for and in control of their actions.
Well, if we hold people responsible for their actions, it is because we think there is something about them that led to those actions: their character, thoughts, and so on determined what they did. If people just did whatever unpredictably (or even picked randomly between a couple of options), it would make no sense to hold them responsible, since nothing about them led to the action.
Likewise, if the deterministic nature of things meant we don't control our actions, it would also mean that thermostats don't control the temperature in the room, which is a deep misunderstanding of the word "control". Determinism is precisely what makes a thermostat control the temperature in the room, and likewise it is just what I need to control my actions.
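For what it's worth, here is a minimal thermostat loop (toy numbers, hypothetical setpoint). The control only works because the rule is deterministic: the heater state is fixed by the temperature reading, and a thermostat that acted at random would control nothing.

```python
SETPOINT = 21.0  # degrees C (arbitrary example value)

def thermostat_step(temp):
    # deterministic rule: heat when below the setpoint, stop when above it
    return temp < SETPOINT

def room_step(temp, heater_on):
    # toy physics: the room warms while the heater runs and cools otherwise
    return temp + (0.5 if heater_on else -0.3)

temp = 17.0
for minute in range(20):
    heater_on = thermostat_step(temp)
    temp = room_step(temp, heater_on)
    print(f"minute {minute:2d}: {temp:4.1f} C, heater {'on' if heater_on else 'off'}")
```

The temperature settles around the setpoint precisely because the same reading always produces the same response, which is the sense of "control" I mean.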
Note that being unpredictable (whether due to quantum randomness or to the feedback effect that you can make predictions about yourself and then thwart them), or not simply obeying stereotyped programming, is not unique to humans, is not hard, and does not by itself make you intelligent or sapient. A thing that is too inflexible in its behaviour and thoughts, too predictable and limited, may fail to be intelligent, but that does not mean flexibility is the same thing as intelligence. The things that make a human mind a mind are complicated and admit of endless degrees. So sure, lots of things will appear to be sapient that are not: they will have enough of the characteristics to seem like it for a while, but not enough (likewise, some human beings under the influence of drugs or a disease will not really be sapient/aware, but merely sleepwalking or the like). That is why I find the "machines just can't be sapient" argument bizarre: as far as I can see, there is no one little thing you can add to or remove from the way a being is and behaves that turns it from really sapient into just an incredible simulation, so how could you uniquely identify what machines must lack in order to make the argument in the first place?