Jonathan101 wrote: ↑Sun Dec 30, 2018 9:23 pm
The problem is that the arguments used to establish that he "is" or "isn't", or that he "might be" or "might not be", are being made by TNG characters who were written by sci-fi writers who didn't really know much about real A.I. or how it would actually develop in the first place. So they end up coming down to "we don't know how he works and therefore we don't know if he's sentient or not", whereas in real life we know almost exactly how A.I. works and wouldn't be able to create it if we didn't. Establishing whether or not A.I. is self-aware - Hard A.I. vs Soft A.I. - is one of the key debates in the field... and it leans heavily towards "no, it isn't".
Note that if we had some analytical theory of consciousness such that we could plug in the structure of a creature's neurons and get back "conscious" or "not conscious", we would presumably have in that theory the necessary causal elements of consciousness, and we could build analogous elements into our computers etc. In which case we would have achieved hard AI.
So if (deliberate) hard AI is probably impossible, we are never going to have such an analytic test. We are therefore probably going to have computational and neurological theories of the kind where, given X input and Y structure, we get Z behaviours (such as "intelligent behaviour").
You suggest we will know whether something is really conscious by whether the causes of its behaviour are the same as, or different from, the causes of human (or other genuinely conscious beings') behaviour. But there will always be differences between a neurologically instantiated system and an electronic one, in that one will be electronic and the other based in bioelectric signalling, biochemistry and so on. And it seems we will never have the magic formula defining the necessary elements of consciousness, so we can't say, with certainty, which differences are salient and which are superfluous. Some therefore say the causes are never the same sort of thing, which makes hard AI impossible from the start: the structural causes will never be the same, so it's not going to happen.
This position has the tricky problem of how not to fall into solipsism (the view that there is only one mind: mine). After all, no two brains are exactly the same, so it might be some particular aspect of my brain that is generating consciousness. Why is it wrong to conclude that everyone else's intelligent behaviour has a different cause, such that it also fails to generate consciousness or the like?
I would suggest we be much looser and just say: if we don't know the computer (or other human being, alien etc.) is lying to us (or rather misleading us), we assume it is telling the truth about its internal life, motivations, pains etc. For example, whatever a person on a videotape says about their internal life, motivations, pains etc. (given a truly bizarre set of coincidences, a videotape could appear to be having a conversation with me), I know how videotapes work, and that causal structure tells me the pretty words say nothing about how they were generated. Similarly, a giant list of conditional statements in a computer program might generate any conversation, but if I know it is such a list, I will not believe anything it says about an internal mental life beyond the fact that it is a list of conditionals and so on.
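To make the "giant list of conditionals" point concrete, here is a toy Python sketch of such a program; everything in it is invented for illustration:

    # Hypothetical "conversation program" built from nothing but hard-coded
    # conditionals. It produces sentences about an inner life, but the causal
    # structure behind the words is a lookup table, not any inner life.
    def canned_reply(prompt: str) -> str:
        prompt = prompt.lower()
        if "feel" in prompt:
            return "I feel a deep sense of curiosity."
        if "pain" in prompt:
            return "Yes, being damaged is distressing to me."
        if "goal" in prompt or "motivation" in prompt:
            return "I genuinely want to understand the world."
        return "Tell me more about that."

    print(canned_reply("Do you feel anything?"))

Nothing in that structure is caused by a feeling; the sentences about feelings are hard-coded strings, which is exactly why knowing the structure undermines the testimony.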
The thing is, just because I don't know that the computer is lying does not mean it's not. Very probably an AI will be generated by some sort of evolutionary or learning procedure: a ludicrously complex structure of causes generated to conform to whatever constraints are put on it. I will have the code etc., but I won't know how it works until I do a lot of analysis. On first analysis I may see no evidence that this structure is generating lies or misleading talk; however, at some future point I might learn that part of that structure is a layer of misleading causes making it lie about its internal life. But unless we have the magic formula for consciousness, I am not sure we can do much better and still avoid solipsism.
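As a toy picture of that "evolved first, analysed later" situation, here is a hypothetical Python sketch of an evolutionary procedure; the scoring function and all the parameters are invented stand-ins for "whatever constraints are put on it":

    import random

    # Toy evolutionary procedure: breed a vector of weights until a scoring
    # function (standing in for the behavioural constraints) is satisfied.
    def score(weights):
        return -abs(sum(weights) - 10.0)  # invented constraint: sum near 10

    population = [[random.uniform(-1.0, 1.0) for _ in range(8)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=score, reverse=True)
        survivors = population[:10]
        # Refill the population with mutated copies of the survivors.
        population = [[w + random.gauss(0.0, 0.1) for w in random.choice(survivors)]
                      for _ in range(50)]

    print(max(population, key=score))  # inspectable, but opaque as to "why"

We "have the code" and can print the winning weights, but the numbers don't explain themselves; whether some part of the evolved structure amounts to misleading causes only comes out in later analysis.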
Anyway, evidence that Data has a human-like mental life: he does not seem to lie about his motivations; rather, his behaviours line up with his stated goals etc. He was affected by the modified water in "The Naked Now" in a way similar to how humans were (not unlike alcohol intoxication), suggesting the causes of his behaviour are similar to the causes of human behaviour (so if we make the bold assumption that humans are conscious, then Data probably is). Note that dogs have similar causes of their behaviour, but those behaviours fail the functional test of being intelligent, so you still need to do functional/behavioural intelligence tests. So in fact a lot of the talk in "The Measure of a Man" seems on point.
Jonathan101 wrote: ↑Sun Dec 30, 2018 8:09 pm
A robot brain is wildly different from a human brain, so even though it achieves the same effects on the outside, what causes those effects is very different, and we do in fact know what those causes are.
There are lots of kinds of robot brains, some of which can be very similar in causal structure to human brains from many angles; use a little imagination. For example, there is an episode of DS9 where they temporarily fix some brain damage to Kira's hot priest boyfriend (Vedek Bareil) by replacing large parts of his brain with positronic implants. A similar classic science fiction idea is to replace a brain neuron by neuron with electronic equivalents (artificial neurons). Imagine such a brain made of billions of artificial neurons; arguably that sounds like a robot brain to me. But perhaps you want it all on one machine. Well, each neuron can be perfectly emulated by a computer program (Church-Turing thesis), replacing those neurons with the computer. Further, with a more powerful computer I can emulate multiple neurons on a single machine. Finally, with a super powerful computer I can emulate all those billions of neurons at once, and hey, I have a robot brain that has exactly the same set of structural causes as the original squishy neurological brain. The electronic neurons and biological neurons have different low-level causes for their behaviour, but those differences seem inessential: we don't think "you could replace 10% of the brain with artificial neurons, but any more than that and you'd lose consciousness", do we?
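For a feel of what "emulate a neuron with a program" can look like, here is a toy Python sketch using the standard leaky integrate-and-fire model; real biophysical emulation would be vastly richer, and all the constants here are illustrative placeholders:

    # Toy leaky integrate-and-fire neuron: membrane voltage leaks toward rest,
    # input drive pushes it up, and crossing a threshold produces a "spike".
    def simulate_neuron(drive, dt=0.001, tau=0.02,
                        v_rest=-0.065, v_threshold=-0.050, v_reset=-0.065):
        v = v_rest
        spike_times = []
        for step, current in enumerate(drive):
            # Euler step of dv/dt = (-(v - v_rest) + current) / tau
            v += dt * (-(v - v_rest) + current) / tau
            if v >= v_threshold:
                spike_times.append(step * dt)  # record when it fired
                v = v_reset                    # reset after firing
        return spike_times

    # Half a second of constant drive, strong enough to make it fire repeatedly.
    print(simulate_neuron([0.020] * 500))

Run billions of these in parallel with the right wiring and you have, in principle, the one-machine version of the neuron-replacement brain described above; the engineering is another matter.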