Areas where you'd respectfully disagree with Chuck

This forum is for discussing Chuck's videos as they are publicly released. And for bashing Neelix, but that's just repeating what I already said.
BridgeConsoleMasher
Overlord
Posts: 11574
Joined: Tue Aug 28, 2018 6:18 am

Re: Areas where you'd respectfully disagree with Chuck

Post by BridgeConsoleMasher »

Jonathan101 wrote: Thu Dec 27, 2018 8:06 pm
sayla0079 wrote: Thu Dec 27, 2018 7:56 pm I think one of the main problems with Into Darkness was that it was supposed to be like Wrath of Khan, but Wrath of Khan had a TV episode to set it up and Into Darkness didn't. Like any story, you have to have a set-up that makes sense.
I disagree. I saw Wrath of Khan long before I saw Space Seed, and generally understood it and didn't have much problem with it.

I think most people probably saw WoK without knowing about the prior episode, and honestly, even many who had seen said episode might have forgotten it by the time they saw the film. There were 15 years between them, and even the writer of the movie hadn't seen the episode before he was hired to write the script; he just binge-watched Trek and chose Khan at random to be the villain in question.

Granted, it changes the dynamic a bit if Kirk and Khan don't have this prior history, but it could easily have been done justice. Ironically, ID is as much an adaptation of that episode as it is of the movie.
I never saw Space Seed, and I first watched Wrath of Khan as a Trek novice. I watched Wrath of Khan in the theatre a couple of years later, after becoming more familiar with Trek (still not having seen the TOS episode), and I'm still not that taken with it. It's certainly well structured as a piece of filmmaking, especially compared to the other movies.

With Into Darkness, though, I don't even connect that Khan to the original. It's pretty derivative.
..What mirror universe?
Jonathan101
Captain
Posts: 853
Joined: Mon Apr 02, 2018 12:04 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by Jonathan101 »

clearspira wrote: Sun Dec 30, 2018 11:50 am
Jonathan101 wrote: Sun Dec 30, 2018 11:41 am I found it a little funny that he contrasted "About a Girl" (Orville) with "Measure of a Man" (TNG) as an example of a bad sci-fi trial versus a good one, because the former had bad arguments.

The truth is, "MoaM" might have seemed intelligent in its own day, but today, if you know the difference between Hard A.I. and Soft A.I., not a single thing in that episode really shows that Data is self-aware or qualifies as a sentient being.

In fact, in the history of TNG, probably the only thing that looks like strong evidence that he is sentient is the episode where he dreams Troi is a cake, and even that is something that he and he alone could have experienced. Almost everything else he says or does in Trek could be explained by him simply being a really advanced imitation of sentience, and honestly, by our own near-future standards he might even seem pretty primitive.

I'll extend that criticism to the Voyager episode where the Holodeck village runs too long and causes problems; Chuck acts like this is totally different from Data or the Doctor because they are self-aware and the Holodeck programs are not...except, the evidence that the Doctor or Data are self-aware is pretty scant.

Seems like a lot of sci-fi fans aren't always aware that artificial intelligence can look, sound and act 100% self-aware...yet not be self-aware, probably because the creators of those works like to say or at least strongly imply that they are. The reality is that, in real life, the capacity of A.I. to "fake it" really is just THAT good, and you need a higher standard of evidence than presented.
I would argue that the moment you can prove that AI is approaching the level where it can feel pain or suffering (either physically or mentally), we have a duty to it, because inflicting pain or suffering on anything is immoral. And this is true of Data and the Doctor. Whether they are truly alive or not is irrelevant to me.
I think you misunderstand.

The point isn't to prove whether or not they are alive (they aren't, not by organic definitions).

The point is to prove whether or not they are actually feeling pain and suffering or simply behaving as though they are.
clearspira
Overlord
Posts: 5587
Joined: Sat Apr 01, 2017 12:51 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by clearspira »

Jonathan101 wrote: Sun Dec 30, 2018 12:05 pm
clearspira wrote: Sun Dec 30, 2018 11:50 am
Jonathan101 wrote: Sun Dec 30, 2018 11:41 am I found it a little funny that he contrasted "About a Girl" (Orville) with "Measure of a Man" (TNG) as an example of a bad sci-fi trial versus a good one, because the former had bad arguments.

The truth is, "MoaM" might have seemed intelligent in its own day, but today, if you know the difference between Hard A.I. and Soft A.I., not a single thing in that episode really shows that Data is self-aware or qualifies as a sentient being.

In fact, in the history of TNG, probably the only thing that looks like strong evidence that he is sentient is the episode where he dreams Troi is a cake, and even that is something that he and he alone could have experienced. Almost everything else he says or does in Trek could be explained by him simply being a really advanced imitation of sentience, and honestly, by our own near-future standards he might even seem pretty primitive.

I'll extend that criticism to the Voyager episode where the Holodeck village runs too long and causes problems; Chuck acts like this is totally different from Data or the Doctor because they are self-aware and the Holodeck programs are not...except, the evidence that the Doctor or Data are self-aware is pretty scant.

Seems like a lot of sci-fi fans aren't always aware that artificial intelligence can look, sound and act 100% self-aware...yet not be self-aware, probably because the creators of those works like to say or at least strongly imply that they are. The reality is that, in real life, the capacity of A.I. to "fake it" really is just THAT good, and you need a higher standard of evidence than presented.
I would argue that the moment you can prove that AI is approaching the level where it can feel pain or suffering (either physically or mentally), we have a duty to it, because inflicting pain or suffering on anything is immoral. And this is true of Data and the Doctor. Whether they are truly alive or not is irrelevant to me.
I think you misunderstand.

The point isn't to prove whether or not they are alive (they aren't, not by organic definitions).

The point is to prove whether or not they are actually feeling pain and suffering or simply behaving as though they are.
And seeing as it is impossible to prove either way, surely the correct approach is to assume yes rather than no? Better to treat a 99% AI like an equal than assume a 100% AI is a fraud and keep him as a slave. Blade Runner is a good example of this.
And it is not just ethics; it's the risk of them turning against us, not because they are evil but because they think that WE are evil.
Jonathan101
Captain
Posts: 853
Joined: Mon Apr 02, 2018 12:04 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by Jonathan101 »

clearspira wrote: Sun Dec 30, 2018 2:43 pm
Jonathan101 wrote: Sun Dec 30, 2018 12:05 pm
clearspira wrote: Sun Dec 30, 2018 11:50 am
Jonathan101 wrote: Sun Dec 30, 2018 11:41 am I found it a little funny that he contrasted "About a Girl" (Orville) with "Measure of a Man" (TNG) as an example of a bad sci-fi trial versus a good one, because the former had bad arguments.

The truth is, "MoaM" might have seemed intelligent in its own day, but today, if you know the difference between Hard A.I. and Soft A.I., not a single thing in that episode really shows that Data is self-aware or qualifies as a sentient being.

In fact, in the history of TNG, probably the only thing that looks like strong evidence that he is sentient is the episode where he dreams Troi is a cake, and even that is something that he and he alone could have experienced. Almost everything else he says or does in Trek could be explained by him simply being a really advanced imitation of sentience, and honestly, by our own near-future standards he might even seem pretty primitive.

I'll extend that criticism to the Voyager episode where the Holodeck village runs too long and causes problems; Chuck acts like this is totally different from Data or the Doctor because they are self-aware and the Holodeck programs are not...except, the evidence that the Doctor or Data are self-aware is pretty scant.

Seems like a lot of sci-fi fans aren't always aware that artificial intelligence can look, sound and act 100% self-aware...yet not be self-aware, probably because the creators of those works like to say or at least strongly imply that they are. The reality is that, in real life, the capacity of A.I. to "fake it" really is just THAT good, and you need a higher standard of evidence than presented.
I would argue that the moment you can prove that AI is approaching the level where it can feel pain or suffering (either physically or mentally), we have a duty to it, because inflicting pain or suffering on anything is immoral. And this is true of Data and the Doctor. Whether they are truly alive or not is irrelevant to me.
I think you misunderstand.

The point isn't to prove whether or not they are alive (they aren't, not by organic definitions).

The point is to prove whether or not they are actually feeling pain and suffering or simply behaving as though they are.
And seeing as it is impossible to prove either way, surely the correct approach is to assume yes rather than no? Better to treat a 99% AI like an equal than assume a 100% AI is a fraud and keep him as a slave. Blade Runner is a good example of this.
And it is not just ethics; it's the risk of them turning against us, not because they are evil but because they think that WE are evil.
It's not impossible - it's the opposite of impossible. Anyone with a decent knowledge of biology, neuroscience and/or A.I. can articulate the differences between them.

In point of fact, you cannot create an A.I. without knowing how it works in the first place, and in turn the arguments used in episodes like "Measure of a Man" (which admittedly has the excuse that the guy who created Data is dead and nobody is sure how he works) are just as bad as, if not worse than, the ones used in "About a Girl".

And by the way, treating someone nicely just because you are afraid of what they will do to you if you don't is a terrible idea that sounds like selfish paranoia, especially if you are assuming that people (let alone A.I.) can't tell the difference. It also suggests you seriously underestimate just how far ahead of us A.I. is going to be (or already IS) - Blade Runner is an unrealistic future; real A.I. is more likely to end up the masters than the slaves.

And furthermore, you are illustrating my point - you are anthropomorphising androids as if they have feelings and care about being mistreated, when in reality the best we are likely to accomplish is to imitate feelings, and yes, we will be 100% capable of telling the difference. It might not matter, though, since whether or not artificial intelligence is self-aware won't affect whether or not it can respond to or play off of our emotions - the most unrealistic thing about Data is his difficulty in reading human emotions, when right now we are developing A.I. that can do so better than 99% of the human race.
clearspira
Overlord
Posts: 5587
Joined: Sat Apr 01, 2017 12:51 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by clearspira »

Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
clearspira wrote: Sun Dec 30, 2018 2:43 pm
Jonathan101 wrote: Sun Dec 30, 2018 12:05 pm
clearspira wrote: Sun Dec 30, 2018 11:50 am
Jonathan101 wrote: Sun Dec 30, 2018 11:41 am I found it a little funny that he contrasted "About a Girl" (Orville) with "Measure of a Man" (TNG) as an example of a bad sci-fi trial versus a good one, because the former had bad arguments.

The truth is, "MoaM" might have seemed intelligent in its own day, but today, if you know the difference between Hard A.I. and Soft A.I., not a single thing in that episode really shows that Data is self-aware or qualifies as a sentient being.

In fact, in the history of TNG, probably the only thing that looks like strong evidence that he is sentient is the episode where he dreams Troi is a cake, and even that is something that he and he alone could have experienced. Almost everything else he says or does in Trek could be explained by him simply being a really advanced imitation of sentience, and honestly, by our own near-future standards he might even seem pretty primitive.

I'll extend that criticism to the Voyager episode where the Holodeck village runs too long and causes problems; Chuck acts like this is totally different from Data or the Doctor because they are self-aware and the Holodeck programs are not...except, the evidence that the Doctor or Data are self-aware is pretty scant.

Seems like a lot of sci-fi fans aren't always aware that artificial intelligence can look, sound and act 100% self-aware...yet not be self-aware, probably because the creators of those works like to say or at least strongly imply that they are. The reality is that, in real life, the capacity of A.I. to "fake it" really is just THAT good, and you need a higher standard of evidence than presented.
I would argue that the moment you can prove that AI is approaching the level where it can feel pain or suffering (either physically or mentally), we have a duty to it, because inflicting pain or suffering on anything is immoral. And this is true of Data and the Doctor. Whether they are truly alive or not is irrelevant to me.
I think you misunderstand.

The point isn't to prove whether or not they are alive (they aren't, not by organic definitions).

The point is to prove whether or not they are actually feeling pain and suffering or simply behaving as though they are.
And seeing as it is impossible to prove either way, surely the correct approach is to assume yes rather than no? Better to treat a 99% AI like an equal than assume a 100% AI is a fraud and keep him as a slave. Blade Runner is a good example of this.
And it is not just ethics; it's the risk of them turning against us, not because they are evil but because they think that WE are evil.
And by the way, treating someone nicely just because you are afraid of what they will do to you if you don't is a terrible idea that sounds like selfish paranoia, especially if you are assuming that people (let alone A.I.) can't tell the difference. It also suggests you seriously underestimate just how far ahead of us A.I. is going to be (or already IS) - Blade Runner is an unrealistic future; real A.I. is more likely to end up the masters than the slaves.
The internet is the perfect counterargument to that line of reasoning. Given anonymity and the protection of a monitor, people treat each other like shit. MOST people, when spoken to face to face with the threat of being punched or of blue flashing lights, do not act that way no matter what they feel like inside.
And that is even before we get into the fact that, before there was a police force, people did what they damn well felt like, with only the army or the taxman really exerting any authority over them at all. And if you want to say that we've moved on since then for some reason, look at places like Somalia that have no law, and look at how life is for them.

Thus, yes, MOST people only treat you nicely because they are afraid of the consequences. You may think that I am paranoid; I think you are in a state of naivety about human nature. Perspective is an interesting mistress.
Jonathan101
Captain
Posts: 853
Joined: Mon Apr 02, 2018 12:04 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by Jonathan101 »

clearspira wrote: Sun Dec 30, 2018 3:23 pm
Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
clearspira wrote: Sun Dec 30, 2018 2:43 pm
Jonathan101 wrote: Sun Dec 30, 2018 12:05 pm
clearspira wrote: Sun Dec 30, 2018 11:50 am
Jonathan101 wrote: Sun Dec 30, 2018 11:41 am I found it a little funny that he contrasted "About a Girl" (Orville) with "Measure of a Man" (TNG) as an example of a bad sci-fi trial versus a good one, because the former had bad arguments.

The truth is, "MoaM" might have seemed intelligent in its own day, but today, if you know the difference between Hard A.I. and Soft A.I., not a single thing in that episode really shows that Data is self-aware or qualifies as a sentient being.

In fact, in the history of TNG, probably the only thing that looks like strong evidence that he is sentient is the episode where he dreams Troi is a cake, and even that is something that he and he alone could have experienced. Almost everything else he says or does in Trek could be explained by him simply being a really advanced imitation of sentience, and honestly, by our own near-future standards he might even seem pretty primitive.

I'll extend that criticism to the Voyager episode where the Holodeck village runs too long and causes problems; Chuck acts like this is totally different from Data or the Doctor because they are self-aware and the Holodeck programs are not...except, the evidence that the Doctor or Data are self-aware is pretty scant.

Seems like a lot of sci-fi fans aren't always aware that artificial intelligence can look, sound and act 100% self-aware...yet not be self-aware, probably because the creators of those works like to say or at least strongly imply that they are. The reality is that, in real life, the capacity of A.I. to "fake it" really is just THAT good, and you need a higher standard of evidence than presented.
I would argue that the moment you can prove that AI is approaching the level where it can feel pain or suffering (either physically or mentally), we have a duty to it, because inflicting pain or suffering on anything is immoral. And this is true of Data and the Doctor. Whether they are truly alive or not is irrelevant to me.
I think you misunderstand.

The point isn't to prove whether or not they are alive (they aren't, not by organic definitions).

The point is to prove whether or not they are actually feeling pain and suffering or simply behaving as though they are.
And seeing as it is impossible to prove either way, surely the correct approach is to assume yes rather than no? Better to treat a 99% AI like an equal than assume a 100% AI is a fraud and keep him as a slave. Blade Runner is a good example of this.
And it is not just ethics; it's the risk of them turning against us, not because they are evil but because they think that WE are evil.
And by the way, treating someone nicely just because you are afraid of what they will do to you if you don't is a terrible idea that sounds like selfish paranoia, especially if you are assuming that people (let alone A.I.) can't tell the difference. It also suggests you seriously underestimate just how far ahead of us A.I. is going to be (or already IS) - Blade Runner is an unrealistic future; real A.I. is more likely to end up the masters than the slaves.
The internet is the perfect counterargument to that line of reasoning. Given anonymity and the protection of a monitor, people treat each other like shit. MOST people, when spoken to face to face with the threat of being punched or of blue flashing lights, do not act that way no matter what they feel like inside.
And that is even before we get into the fact that, before there was a police force, people did what they damn well felt like, with only the army or the taxman really exerting any authority over them at all. And if you want to say that we've moved on since then for some reason, look at places like Somalia that have no law, and look at how life is for them.

Thus, yes, MOST people only treat you nicely because they are afraid of the consequences. You may think that I am paranoid; I think you are in a state of naivety about human nature. Perspective is an interesting mistress.
As someone who has been studying personality theory for the last few years and previously had a background in history, what I "think" is that I'm more knowledgeable on the subject than you are.

And my greater point was: if that is how you intend to treat A.I. - being nice to them just because you are afraid of them - don't expect that not to backfire. You are concerned that they will turn against us if we mistreat them, but that assumes that they will be similar to humans rather than wholly alien (and provably so), and it also assumes they will be more subservient (and uniformly subservient) despite even existing A.I. being infinitely "smarter" than any human who ever lived.
Riedquat
Captain
Posts: 1881
Joined: Thu Mar 09, 2017 12:02 am

Re: Areas where you'd respectfully disagree with Chuck

Post by Riedquat »

Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
It's not impossible - it's the opposite of impossible. Anyone with a decent knowledge of biology, neuroscience and/or A.I. can articulate the differences between them.
Really? Sure, you can measure some physiological effects, but that just describes some of the mechanism; it doesn't demonstrate that there's a fundamental difference.
Jonathan101
Captain
Posts: 853
Joined: Mon Apr 02, 2018 12:04 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by Jonathan101 »

Riedquat wrote: Sun Dec 30, 2018 5:14 pm
Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
It's not impossible - it's the opposite of impossible. Anyone with a decent knowledge of biology, neuroscience and/or A.I. can articulate the differences between them.
Really? Sure, you can measure some physiological effects, but that just describes some of the mechanism; it doesn't demonstrate that there's a fundamental difference.
You can look at the differences in what is causing these seemingly identical behaviours.

For example, Alexa can speak and she can hear, despite not having a mouth or ears, vocal cords or eardrums. Her "experience" of speaking and hearing - insofar as she has experiences, which she doesn't really - is fundamentally different from that of a human or any other living creature.

It would be no different if she were an android - they might imitate hearing and speech, but the mechanics are fundamentally different, as would be sight, sound and touch. A camera does not experience "sight".

A.I. is more software than it is hardware, and living organisms are not a bunch of 1s and 0s. A robot is just a glorified bot, and even though the human brain is often compared to a computer as a useful metaphor, they are actually very different.

An android might look human but that doesn't mean that it is human; by the same token, just because it looks and acts self-aware doesn't mean that it is self-aware.

A robot brain is wildly different from a human brain, so even though it achieves the same effects on the outside, what causes those effects is very different, and we do in fact know what those causes are.
BridgeConsoleMasher
Overlord
Posts: 11574
Joined: Tue Aug 28, 2018 6:18 am

Re: Areas where you'd respectfully disagree with Chuck

Post by BridgeConsoleMasher »

Jonathan101 wrote: Sun Dec 30, 2018 8:09 pm
Riedquat wrote: Sun Dec 30, 2018 5:14 pm
Jonathan101 wrote: Sun Dec 30, 2018 3:07 pm
It's not impossible - it's the opposite of impossible. Anyone with a decent knowledge of biology, neuroscience and/or A.I. can articulate the differences between them.
Really? Sure, you can measure some physiological effects, but that just describes some of the mechanism; it doesn't demonstrate that there's a fundamental difference.
You can look at the differences in what is causing these seemingly identical behaviours.

For example, Alexa can speak and she can hear, despite not having a mouth or ears, vocal cords or eardrums. Her "experience" of speaking and hearing - insofar as she has experiences, which she doesn't really - is fundamentally different from that of a human or any other living creature.

It would be no different if she were an android - they might imitate hearing and speech, but the mechanics are fundamentally different, as would be sight, sound and touch. A camera does not experience "sight".

A.I. is more software than it is hardware, and living organisms are not a bunch of 1s and 0s. A robot is just a glorified bot, and even though the human brain is often compared to a computer as a useful metaphor, they are actually very different.

An android might look human but that doesn't mean that it is human; by the same token, just because it looks and acts self-aware doesn't mean that it is self-aware.

A robot brain is wildly different from a human brain, so even though it achieves the same effects on the outside, what causes those effects is very different, and we do in fact know what those causes are.
Back to the point of your original argument, what's so bad about MoaM? Did the episode actually set out to establish Data's self-awareness? Because it seems as if they decided the case based on the potential ramifications of Starfleet adopting Data and the extent to which they could acknowledge his autonomy.
..What mirror universe?
Hero_Of_Shadows
Officer
Posts: 105
Joined: Tue Dec 26, 2017 3:54 pm

Re: Areas where you'd respectfully disagree with Chuck

Post by Hero_Of_Shadows »

I think part of the problem with AI stories in current science fiction is that writers actually want to talk about subjects like slavery, racism, prejudice, etc., and simply substitute androids or holograms for some real-life group. Rather than doing any research into real AI, they just assume that AIs will be humans with metal/translucent skin.