Areas where you'd respectfully disagree with Chuck

This forum is for discussing Chuck's videos as they are publicly released. And for bashing Neelix, but that's just repeating what I already said.
BridgeConsoleMasher
Overlord
Posts: 11575
Joined: Tue Aug 28, 2018 6:18 am

Re: Areas where you'd respectfully disagree with Chuck

Post by BridgeConsoleMasher »

Hero_Of_Shadows wrote: Sun Dec 30, 2018 8:54 pm I think part of the problem with AI stories in current science fiction is that writers actually want to talk about subjects like slavery, racism, prejudice, etc., and simply substitute some real-life race with androids or holograms; rather than doing any research into real AI, they just assume that AIs will be humans with metal/translucent skin.
Well, research into AI is kinda just computer science speculation. I'm sure there's in-depth analysis out there, but you become acquainted with the theoretical parameters when you take CS101.

I think it's helpful to look at biology as well, beyond just the brain as a computer. Include the central nervous system and behavioral programming across Animalia starts to become a little clearer. Also, there's a joke among biologists mocking Intelligent Design: the idea is preposterous given how impractical a body is in terms of efficient design. That in turn provides quite a distinctive juxtaposition when you try to compare our beings to computers.
..What mirror universe?
Jonathan101
Captain
Posts: 853
Joined: Mon Apr 02, 2018 12:04 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by Jonathan101 »

BridgeConsoleMasher wrote: Sun Dec 30, 2018 8:43 pm

Back to the point of your original argument, what's so bad about MoaM? Did the episode actually set out to establish Data's self-awareness? Because it seems as if they decided the case based on the potential ramifications of Starfleet adopting Data, and the extent to which they could acknowledge his autonomy.
It's not "bad", it just has some of the same issues as the Orville episode "About a Girl" that Chuck was contrasting it against.

It didn't set out to establish that he is self-aware per se, but to establish that he might be, and thus that it would be wrong to disassemble him against his will. They don't say definitively that he is, but Picard convinces them not to take the chance either way.

The problem is that the arguments used to establish that he "is" or "isn't", or that he "might be" or "might not be", are made by TNG characters written by sci-fi writers who didn't really know much about real A.I. or how it would actually develop. So the arguments come down to "we don't know how he works, and therefore we don't know if he's sentient or not". In real life we know almost exactly how A.I. works (we couldn't create it if we didn't), and establishing whether or not A.I. is self-aware- Hard A.I. vs Soft A.I.- is one of the key debates in the field... and it leans heavily towards "no, it isn't".

The central debate at the heart of MoaM is whether Data is Hard A.I. or Soft A.I., but the questions they ask have little or nothing to do with that, or at best are only the basic questions. In that regard it is closer to "About a Girl" than Chuck seems to realise.
Hero_Of_Shadows wrote: Sun Dec 30, 2018 8:54 pm I think part of the problem with AI stories in current science fiction is that writers actually want to talk about subjects like slavery, racism, prejudice, etc., and simply substitute some real-life race with androids or holograms; rather than doing any research into real AI, they just assume that AIs will be humans with metal/translucent skin.
^ This.

Exactly this.
Riedquat
Captain
Posts: 1881
Joined: Thu Mar 09, 2017 12:02 am

Re: Areas where you'd respectfully disagree with Chuck

Post by Riedquat »

Jonathan101 wrote: Sun Dec 30, 2018 8:09 pm
Riedquat wrote: Sun Dec 30, 2018 5:14 pm
Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
It's not impossible- it's the opposite of impossible. Anyone with a decent knowledge of biology, neuroscience and / or A.I. can articulate the differences between them.
Really? Sure, you can measure some physiological effects, but that just describes some of the mechanism; it doesn't demonstrate that there's a fundamental difference.
You can look at the differences between what is causing these seemingly identical behaviours.

For example, Alexa can speak and she can hear, despite not having a mouth or ears, vocal cords or ear drums. Her "experience" of speaking and hearing- insofar as she has experiences, which she doesn't really- is fundamentally different from that of a human or any other living creature.

It would be no different if she were an android- an android might imitate hearing and speech, but the mechanics are fundamentally different, as they would be for sight, sound and touch. A camera does not experience "sight".
The mechanics are fundamentally different, but the big question is whether that matters or not. For a Trek example, Geordi can see, and something vaguely similar in reality isn't massively far-fetched (I'm sure I recall reading something about work on electrical eye implants a few years ago). The question is really what experiencing something actually means, and whilst at first glance that seems straightforward, when I compare a human's sight with a camera's (even a camera connected to a computer capable of processing the image and doing something in response), I don't think there is a clear-cut boundary. Is there one across all forms of life, or even just across all animals that respond to stimuli, including the simplest? I'm not sure that there's a point of fundamental change on the scale from the simplest machine to a human being, even though a lever is clearly utterly un-alive and unaware.

We've not built anything that can do more than very imperfectly mimic some aspects of human behaviour, and a human-like AI is as much in the realms of science fiction as it's always been - what we've got now seems to lie a long way over on the "just a machine" side, but there's a big grey area in between.
Riedquat
Captain
Posts: 1881
Joined: Thu Mar 09, 2017 12:02 am

Re: Areas where you'd respectfully disagree with Chuck

Post by Riedquat »

Hero_Of_Shadows wrote: Sun Dec 30, 2018 8:54 pm I think part of the problem with AI stories in current science fiction is that writers actually want to talk about subjects like slavery, racism, prejudice, etc., and simply substitute some real-life race with androids or holograms; rather than doing any research into real AI, they just assume that AIs will be humans with metal/translucent skin.
They certainly lose track of it, and going for "what we've made is really no different from humans" is always a bit disappointing. It's far more interesting in science fiction to explore the radically different aspects. Mass Effect's geth looked promising for that for a while, but they unfortunately went for "making them more like people is an improvement" instead of really trying to explore the concept. There was a basis for an intelligence that functioned in an utterly alien way.
BridgeConsoleMasher
Overlord
Posts: 11575
Joined: Tue Aug 28, 2018 6:18 am

Re: Areas where you'd respectfully disagree with Chuck

Post by BridgeConsoleMasher »

Riedquat wrote: Sun Dec 30, 2018 10:02 pm
Jonathan101 wrote: Sun Dec 30, 2018 8:09 pm
Riedquat wrote: Sun Dec 30, 2018 5:14 pm
Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
It's not impossible- it's the opposite of impossible. Anyone with a decent knowledge of biology, neuroscience and / or A.I. can articulate the differences between them.
Really? Sure, you can measure some physiological effects, but that just describes some of the mechanism; it doesn't demonstrate that there's a fundamental difference.
You can look at the differences between what is causing these seemingly identical behaviours.

For example, Alexa can speak and she can hear, despite not having a mouth or ears, vocal cords or ear drums. Her "experience" of speaking and hearing- insofar as she has experiences, which she doesn't really- is fundamentally different from that of a human or any other living creature.

It would be no different if she were an android- an android might imitate hearing and speech, but the mechanics are fundamentally different, as they would be for sight, sound and touch. A camera does not experience "sight".
The mechanics are fundamentally different, but the big question is whether that matters or not. For a Trek example, Geordi can see, and something vaguely similar in reality isn't massively far-fetched (I'm sure I recall reading something about work on electrical eye implants a few years ago). The question is really what experiencing something actually means, and whilst at first glance that seems straightforward, when I compare a human's sight with a camera's (even a camera connected to a computer capable of processing the image and doing something in response), I don't think there is a clear-cut boundary. Is there one across all forms of life, or even just across all animals that respond to stimuli, including the simplest? I'm not sure that there's a point of fundamental change on the scale from the simplest machine to a human being, even though a lever is clearly utterly un-alive and unaware.

We've not built anything that can do more than very imperfectly mimic some aspects of human behaviour, and a human-like AI is as much in the realms of science fiction as it's always been - what we've got now seems to lie a long way over on the "just a machine" side, but there's a big grey area in between.
But there is a distinctive boundary between animals and people. Animals are straight-up property, otherwise protected only by whatever measures a jurisdiction provides. What we haven't established, aside from whether an AI is a person or not, is whether it's even an animal or not. And I don't believe that stimulus and response satisfies that.
..What mirror universe?
Riedquat
Captain
Posts: 1881
Joined: Thu Mar 09, 2017 12:02 am

Re: Areas where you'd respectfully disagree with Chuck

Post by Riedquat »

BridgeConsoleMasher wrote: Sun Dec 30, 2018 10:57 pm But there is a distinctive boundary between animals and people.
Humans are another species of animal. We're certainly unique in some aspects, although the degree of that uniqueness isn't 100% clear; we keep finding that some other species can do some of the things we once thought only we did, such as make tools, even if no other species does the lot or is as capable.
Animals are straight-up property, otherwise protected only by whatever measures a jurisdiction provides.
That's fundamentally arbitrary treatment. In some times and places some humans have been straight up property. I don't want to get distracted by the ethical issues (I'm not saying slavery is OK, or that animal ownership is slavery) but it's not a useful distinction for this discussion.
What we haven't established, aside from whether an AI is a person or not, is whether it's even an animal or not. And I don't believe that stimulus and response satisfies that.
It was about the experience of stimuli - sentience. Put your hand near a fire and you move it away because the experience of keeping it there is unpleasant. A robot with a heat sensor can do pretty much the same thing, so the debate is over where the difference lies.
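To make it concrete just how little that takes, here's a toy sketch (the threshold and the readings are invented; no real robot hardware or API is assumed):

```python
# A toy sketch of the heat-sensor reflex (all values invented; no real
# robot hardware or API is assumed). The whole "behaviour" is one
# comparison between a reading and a threshold.

HEAT_THRESHOLD_C = 50.0  # the point at which we call the stimulus "unpleasant"

def reflex_step(temperature_c: float) -> str:
    """Map a stimulus (a temperature reading) directly to a response."""
    if temperature_c > HEAT_THRESHOLD_C:
        return "withdraw"
    return "stay"

# Nothing in this loop experiences heat, yet it reproduces the behaviour.
for reading in (21.0, 38.5, 64.0):
    print(reading, "->", reflex_step(reading))
```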
clearspira
Overlord
Posts: 5587
Joined: Sat Apr 01, 2017 12:51 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by clearspira »

Jonathan101 wrote: Sun Dec 30, 2018 4:00 pm
clearspira wrote: Sun Dec 30, 2018 3:23 pm
Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
clearspira wrote: Sun Dec 30, 2018 2:43 pm
Jonathan101 wrote: Sun Dec 30, 2018 12:05 pm
clearspira wrote: Sun Dec 30, 2018 11:50 am
Jonathan101 wrote: Sun Dec 30, 2018 11:41 am I found it a little funny that he contrasted "About a Girl" (Orville) with "Measure of a Man" (TNG), as an example of a bad sci-fi trial versus a good one, on the grounds that the former had bad arguments.

The truth is, "MoaM" might have seemed intelligent in its own day, but nowadays, if you know the difference between Hard A.I. and Soft A.I., not a single thing in that episode really shows that Data is self-aware or qualifies as a sentient being.

In fact, in the whole history of TNG, probably the only thing that looks like strong evidence that he is sentient is the episode where he dreams that Troi is a cake, and even that is something that he and he alone could have experienced. Almost everything else he says or does in Trek could be explained by him simply being a really advanced imitation of sentience, and honestly, by our own near-future standards he might even look pretty primitive.

I'll extend that criticism to the Voyager episode where the Holodeck village runs too long and causes problems; Chuck acts like this is totally different from Data or the Doctor, because they are self-aware and the Holodeck programs are not... except the evidence that the Doctor or Data are self-aware is pretty scant.

Seems like a lot of sci-fi fans aren't always aware that an artificial intelligence can look, sound and act 100% self-aware... yet not be self-aware, probably because the creators of those works like to say, or at least strongly imply, that their A.I.s are. The reality is that, in real life, the capacity of A.I. to "fake it" really is just THAT good, and you need a higher standard of evidence than is presented.
I would argue that the moment you can prove that an AI is approaching the level where it can feel pain or suffering (either physically or mentally), then we have a duty to it, because inflicting pain or suffering on anything is immoral. And this is true of Data and the Doctor. Whether they are truly alive or not is irrelevant to me.
I think you misunderstand.

The point isn't to prove whether or not they are alive (they aren't, not by organic definitions).

The point is to prove whether or not they are actually feeling pain and suffering or simply behaving as though they are.
And seeing as it is impossible to prove either way, surely the correct approach is to assume yes rather than no? Better to treat a 99% AI like an equal than assume a 100% AI is a fraud and keep him as a slave. Blade Runner is a good example of this.
And it is not just ethics; it's the risk of them turning against us, not because they are evil but because they think that WE are evil.
And by the way, treating someone nicely because you are afraid of what they will do to you if you don't is a terrible idea that sounds like selfish paranoia, especially if you are assuming that people (let alone A.I.) can't tell the difference. It also suggests you seriously underestimate just how far ahead of us advanced A.I. is going to be (or already IS)- Blade Runner is an unrealistic future; real A.I. is more likely to end up the master than the slave.
The internet is the perfect counterargument to that line of reasoning. Given anonymity and the protection of a monitor, people treat each other like shit. MOST people, when spoken to face to face, with the threat of being punched or of blue flashing lights, do not act that way no matter what they feel like inside.
And that is even before we get into the fact that, before there was a police force, people did what they damn well felt like, with only the army or the taxman really exerting any authority over them at all. And if you want to say that we've moved on since then for some reason, look at places like Somalia that have no law, and look at what life is like for them.

Thus, yes, MOST people only treat you nicely because they are afraid of the consequences. You may think that I am paranoid; I think you are in a state of naivety as to human nature. Perspective is an interesting mistress.
As someone who has been studying personality theory for the last few years and previously had a background in history, what I "think" is that I'm more knowledgeable on the subject than you are.

And my greater point was: if that is how you intend to treat A.I.- to be nice to them just because you are afraid of them- don't expect that not to backfire. You are concerned that they will turn against us if we mistreat them, but that assumes they will be similar to humans rather than wholly (and provably) alien, and it also assumes they will be subservient (and uniformly so), despite even existing A.I. being infinitely "smarter" than any human who ever lived.
You have absolutely no idea who I am, what my background is, or how educated I am. You can stuff your ''I am a genius and you are a pleb'' stick where only a proctologist can get at it, mate. And for that matter, you could be making all of that up; on the internet, uncited qualifications mean precisely nothing. Done talking to you, ''genius''.
AllanO
Officer
Posts: 323
Joined: Mon Jan 22, 2018 10:38 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by AllanO »

Jonathan101 wrote: Sun Dec 30, 2018 9:23 pm The problem is that the arguments used to establish that he "is" or "isn't", or that he "might be" or "might not be", are made by TNG characters written by sci-fi writers who didn't really know much about real A.I. or how it would actually develop. So the arguments come down to "we don't know how he works, and therefore we don't know if he's sentient or not". In real life we know almost exactly how A.I. works (we couldn't create it if we didn't), and establishing whether or not A.I. is self-aware- Hard A.I. vs Soft A.I.- is one of the key debates in the field... and it leans heavily towards "no, it isn't".
Note that if we had some analytical theory of consciousness, such that we could plug in the structure of a creature's neurons and get back "conscious" or "not conscious", we would presumably have in that theory the necessary causal elements of consciousness, and we could build analogous elements into our computers etc. In which case we would have achieved hard AI.

So if (deliberate) hard AI is probably impossible, we are never going to have such an analytic test. We are therefore probably going to have computational and neurological theories of the kind where, given X input and Y structure, we get Z behaviours (such as "intelligent behaviour").

You suggest we will know whether something is really conscious by whether the causes of its behaviour are the same as, or different from, those of human (or other genuinely conscious) behaviour. There will always be differences between a neurologically instantiated system and an electronic one, in that one will be electronic and the other based in bioelectric signaling, biochemicals and so on. But it seems we will never have the magic formula defining the necessary elements of consciousness, so we can't say with certainty which differences are salient and which are superfluous. So some say the two will never share the same sort of cause, which makes hard AI impossible from the start: the structural causes will never be the same, so it's not going to happen.

This position has the tricky problem of how not to fall into solipsism (the view that there is only one mind: mine). After all, no two brains are exactly the same, so it might be some particular aspect of my brain that is generating consciousness. Why would it be wrong to conclude that everyone else exhibits intelligent behaviour, but from a different cause, such that it too fails to generate consciousness or the like?

I would suggest we be way looser and just say that if we don't know that the computer (or other human being, alien etc.) is lying to us (or rather misleading us), we assume it is telling the truth about its internal life, motivations, pains etc. For example, whatever a person on a videotape says about their internal life, motivations, pains and so on (given a truly bizarre set of coincidences, a videotape could appear to be having a conversation with me), I know that the causal structure by which videotapes produce all those pretty words tells me nothing about how the words are generated. Similarly, a giant list of conditional statements in a computer program might generate any conversation, but if I know it is such a list, I will not believe anything it says about an internal mental life, other than that it is a list of conditionals and so on.
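For what it's worth, that "giant list of conditional statements" is not hypothetical; it is roughly how ELIZA held conversations back in the 1960s. A toy version, with rules invented purely for the example:

```python
# A toy version of a "giant list of conditional statements" that can hold
# a conversation, ELIZA-style. The rules are invented for the example;
# the point is that the program can talk about pain without anything in
# it being in pain.

RULES = [
    ("pain", "That sounds unpleasant. Tell me more about the pain."),
    ("feel", "Why do you think you feel that way?"),
    ("i am", "How long have you been that?"),
    ("you",  "We were talking about you, not me."),
]
DEFAULT_REPLY = "Please, go on."

def respond(utterance: str) -> str:
    """Return the reply of the first rule whose keyword matches."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT_REPLY

print(respond("I am in terrible pain"))   # the "pain" rule fires
print(respond("Do you ever get bored?"))  # the "you" rule fires
```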

The thing is, just because I don't know that the computer is lying does not mean it's not. Very probably an AI will be generated by some sort of evolutionary or learning procedure: a ludicrously complex structure of causes generated to conform to whatever constraints are put on it. I will have the code etc., but I won't know how it works until I do a lot of analysis. On first analysis I may see no evidence that this structure is generating lies or misleading talk; at some future point, however, I might learn that part of that structure is a layer of misleading causes making it lie about its internal life. But unless we have the magic formula for consciousness, I am not sure we can do much better and still avoid solipsism.
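As a toy illustration of that opacity, even a trivial evolutionary loop hands back a result that conforms to every constraint it was given while offering no account of the result's structure (the sizes and rates here are invented for the example):

```python
import random

# A toy "evolutionary procedure": mutate a candidate, keep it when it
# scores no worse, stop when it satisfies every constraint. All sizes
# and rates are invented for the example.

random.seed(0)
GENOME_LEN = 32
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # stands in for "the constraints"

def fitness(genome):
    """How many constraints (target bits) the candidate satisfies."""
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
while fitness(genome) < GENOME_LEN:
    mutant = [bit ^ (random.random() < 0.03) for bit in genome]  # flip ~3% of bits
    if fitness(mutant) >= fitness(genome):
        genome = mutant

# The result passes every test, but the procedure above tells us nothing
# about its structure; for that we would have to analyse the genome itself.
print(genome)
```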

Anyway, evidence that Data has a human-like mental life: he does not seem to lie about his motivations; rather, his behaviours line up with his stated goals etc. He was affected by the modified water in "The Naked Now" in a way similar to the humans (not unlike alcohol intoxication), suggesting the causes of his behaviour are similar to the causes of human behaviour (so if we make the bold assumption that humans are conscious, then Data probably is too). Note that dogs have similar causes behind their behaviour, but their behaviours fail the functional test of being intelligent, so you still need functional/behavioural intelligence tests- which means a lot of the talk in Measure of a Man is in fact on point.
Jonathan101 wrote: Sun Dec 30, 2018 8:09 pm A robot brain is wildly different from a human brain, so even though it achieves the same effects on the outside, what causes those effects is very different, and we do in fact know what those causes are.
There are lots of kinds of robot brains, some of which can be very similar in causal structure to human brains from many angles- use a little imagination. For example, there is an episode of DS9 where they temporarily fix some brain damage to Kira's hot priest boyfriend (Vedek Bareil?) by replacing large parts of his brain with positronic implants. A similar classic science fiction idea is to replace a brain neuron by neuron with electronic equivalents (artificial neurons). Imagine such a brain made of billions of artificial neurons; arguably that sounds like a robot brain to me, but perhaps you want it all on one machine. Well, each neuron can be emulated by a computer program (per the Church-Turing thesis), so replace those neurons with the computer. Further, with a more powerful computer I can emulate multiple neurons on a single machine. Finally, with a super-powerful computer I can emulate all those billions of neurons on a single machine, and hey, I have a robot brain with exactly the same set of structural causes as the original squishy neurological brain. The electronic neurons and biological neurons have different causes for their behaviour, but those seem inessential- we don't think "well, you could replace 10% of the brain with artificial neurons, but more than that and we would lose consciousness", do we?
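To make the emulation step concrete, here is a minimal sketch of one artificial neuron, using the leaky integrate-and-fire model, about the simplest standard neuron model there is (the parameters are illustrative, not biological constants):

```python
# A minimal sketch of "each neuron can be emulated by a program": the
# leaky integrate-and-fire model. Parameters are illustrative, not
# biological constants.

def simulate_lif(input_current, dt=1.0, tau=10.0,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the spike train produced by a stream of input current."""
    v = v_rest
    spikes = []
    for i in input_current:
        # The membrane potential leaks toward rest while integrating input.
        v += dt * (-(v - v_rest) + i) / tau
        if v >= v_threshold:   # threshold crossed: the neuron "fires"
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant suprathreshold drive produces regular spiking.
print(simulate_lif([1.5] * 40))
```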
Yours Truly,
Allan Olley

"It is with philosophy as with religion : men marvel at the absurdity of other people's tenets, while exactly parallel absurdities remain in their own." John Stuart Mill
BridgeConsoleMasher
Overlord
Posts: 11575
Joined: Tue Aug 28, 2018 6:18 am

Re: Areas where you'd respectfully disagree with Chuck

Post by BridgeConsoleMasher »

Dogs do have the intelligence of a toddler.

Damn, I just watched the About a Girl review, and I didn't catch where he mentioned Measure of a Man.
..What mirror universe?
AllanO
Officer
Posts: 323
Joined: Mon Jan 22, 2018 10:38 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by AllanO »

BridgeConsoleMasher wrote: Mon Dec 31, 2018 6:04 am Dogs do have the intelligence of a toddler.

Damn, I just watched the About a Girl review, and I didn't catch where he mentioned Measure of a Man.
Hard to say, but it does not seem that the Federation gives dogs full rights as citizens, the ability to join Starfleet etc., the way it does humans, Vulcans and so on- which is on point for the question of whether Data can refuse to submit to the guy's planned examination, deconstruction...

As I recall, the mention of Measure was just one or two sentences near the end of the review, something like "Measure of a Man shows that courtroom drama can work in science fiction."
Yours Truly,
Allan Olley

"It is with philosophy as with religion : men marvel at the absurdity of other people's tenets, while exactly parallel absurdities remain in their own." John Stuart Mill