Hmm, actually, what would Voyager have been like if they did eventually find another medical professional for the ship?

clearspira wrote: ↑Tue Dec 22, 2020 8:24 pm I agree with you about mining. In fact, I would say that Star Trek holograms are far superior to androids, as it is impossible to damage one. You can bury a hologram under a thousand tons of rubble or fire phasers at him all day, and as long as the holo-emitter is doing fine, so will he.

Link8909 wrote: ↑Tue Dec 22, 2020 11:34 am Really enjoyed this look at Holographic ethics, thank you Chuck for reposting and updating the video.
I think there might be some good reasons for using Holograms for mining: they might have had something similar to the Holographic Projector that was beamed down to the planet in "Flesh and Blood", and not only would the Holograms not be hindered by things that humans would be, but if something happened it would be a simple matter of transferring them out.

Senko wrote: ↑Mon Dec 21, 2020 8:04 am I do feel compelled to point out that there have been plenty of hints throughout the series that the Star Trek computers may be self-aware to some degree or another: controlling flight, giving birth, gel packs, having sex with Geordi *rolls eyes*. So the question of whether holograms can be considered sentient/sapient when the computer itself isn't needs to address the question of whether the computer is, and is just bounded by its own restrictions and programs, with a self-awareness that only comes out in rare cases like Moriarty.
I too would be interested in seeing this discussion extended to other series, like Red Dwarf and its holograms.
The question people mostly seem to be ignoring, in my opinion, is not whether holograms are sapient but WHY the EMH Mark I's were assigned to hand-mine dilithium. This can't be an efficient way of doing it, especially with the technology available: you'd have to install holographic emitters in every single mine shaft just to let them operate there, at least until the mobile emitter became commonplace. Add to that that these are medical programs, not mining programs; this is not an area where they have a valid skill set.
So why were they assigned to this job as opposed to just being deleted in batch lots? If they aren't self-aware and sapient, then deleting them is no more harmful than deleting any other hologram, yet Starfleet didn't delete them when they became obsolete; they didn't even turn them off. Instead they assigned them to perform other tasks while leaving them running. The only reasons I can see are that they are considered sapient beings whose deletion would be the equivalent of killing, or bad writing.
However, like you said, the real issue is why the EMH. Why put what was the greatest breakthrough in Holographic technology, and the most sophisticated piece of medical hardware, to work at the most menial of tasks? Why not simply make a basic Hologram like those in "Flesh and Blood"? And if it's a matter of the Mark I being obsolete, why not update them?
I personally think there is a single reason why the EMH ended up mining dilithium: the director thought it would make for a cool closing shot to have fifty Doctors in a cave. I don't think any worldbuilding was really considered.
Tbh, though, this also leads into something that has always bothered me: why was the EMH being grumpy ever a problem? Why was "Extremely Marginal Housecalls" ever a nickname? The Doctor was designed to be turned on only occasionally; that is why he literally opens with the line "Please state the nature of the medical emergency." His first scene in "Caretaker", where he is activated because the original medical crew had been killed, is exactly the kind of situation he was designed for, and in a normal situation Voyager would have flown to a starbase for replacements and the Doctor would have been turned off again.
Honestly, I think the Federation must be full of overly sensitive whining crybabies if having a surly yet temporary hologram treat you is enough to bother you to the point that you are willing to withdraw the whole line from service. Especially as PIC would later show that, within a few years of this, there is a mass-produced medical hologram in service.
A Look at Holograms and Ethics
Re: A Look at Holograms and Ethics
We saw on Deep Space 9 that the EMH's creator was working on a new model of doctor hologram, one that could serve longer-term needs, and he was searching for someone other than himself as a model for its appearance and personality. I'm guessing that, once a non-grumpy version of the EMH became available, people switched over immediately and ditched the old model like a stack of VHS tapes.
Re: A Look at Holograms and Ethics
I think there are a few reasons for this, and I will skip the meta ones.

Link8909 wrote: ↑Tue Dec 22, 2020 11:34 am
I think there might be some good reasons for using Holograms for mining: they might have had something similar to the Holographic Projector that was beamed down to the planet in "Flesh and Blood", and not only would the Holograms not be hindered by things that humans would be, but if something happened it would be a simple matter of transferring them out.
However, like you said, the real issue is why the EMH. Why put what was the greatest breakthrough in Holographic technology, and the most sophisticated piece of medical hardware, to work at the most menial of tasks? Why not simply make a basic Hologram like those in "Flesh and Blood"? And if it's a matter of the Mark I being obsolete, why not update them?
One part is that there have been signs that all is not sunshine in the Federation. To compare it to the US: it has this ideal it believes in, but under the shiny surface there is still poverty and bigotry. In Trek we see this, for example, in "The Measure of a Man", where Data has to be proven to be more than just a machine, and in Pulaski considering him some 'thing'. And even during the Klingon civil war, when Data is given a command, his first officer asks to be reassigned because he can't serve under a machine. Working with Data should be no worse than working with a Vulcan, but that line of thought gets dismissed because: machine. And at least twice Picard has to be reminded of his own shortcomings on the life-form front: in "The Measure of a Man", when Data brings up Geordi's visor, and during said civil war, when Data asks why, of all the senior staff, he was not selected. And Picard, the beacon of rights, had to do a double take on his own actions or inactions.
So people of the Federation looking down on yet another 'tool' is to be expected. Zimmerman even says some of his EMHs are being used to scrub plasma conduits. Why? What benefit is there in that? And how many ships and bases have even needed to turn their EMH on long enough to decide this?
Then there is variance in the EMHs to begin with; each one seems to have a variable amount of competency. The Equinox crew commented that theirs could not hold tools properly.
So they get repurposed.
Now I have to go meta (sorry). Mining and cleaning plasma conduits seems idiotic, since you need emitters wherever the holograms are supposed to go. The writers put those in to show how difficult the holograms can have it, without thinking how ridiculous it is to repurpose them like that. Toss them in as holonovel characters, or holodeck maintenance? Okay. Scrubbing sickbay? Yup. But without the portable emitter it does not work elsewhere.
Re: A Look at Holograms and Ethics
Wow this is a fun discussion.
A problem is that how things like holograms work is all over the place. Sometimes it is indeed treated as computer puppetry: it is a human being talking, yet it could as easily be a desk lamp; there is nothing about the thing doing the talking directing what is said, it is just a recording and so on. However, sometimes the implication is that the holograms are actually complicated low-level simulations of the human beings projected. This is apparent in, say, Picard, where the various holograms on the ship each have bits and pieces of the captain's personality and memory because they are all based on brain scans of him. So it seems like they actually have little holographic neurons, and their behaviour is the result of the interactions between all of those little holo-neurons working together to create the aggregate effect of high-level behaviour, just as in old-fashioned human beings.

CrypticMirror wrote: ↑Sat Dec 19, 2020 8:13 pm I can't see holograms as people. They are elaborate interface elements between a computer and a person, a glorified version of Microsoft's Clippy. And they are all, even the Doc from Voyager, nothing more than that. If they are sapient, then it is the ship's computer behind them which is sapient, and not the interface. None of them can be evolving beyond their programming, because they are always a product of the programming of the computer behind them. There seems to be no expectation that the Voyager computer is alive, or the computer behind Quark's holosuites, and that means that any degree of personhood is something we the users project onto them.
Just like the droids from Star Wars: they are appliances and interfaces, given the simulation of personalities for ease of user interaction, but nothing more than that. Even Moriarty is just lines of code that have been edited to remove their perception filter and trawl beyond their original library for additional interactions. Delete his programme, and it would be no more murder than turning off a lamp.
We know from Voyager that you can replace working lungs with holographic ones; presumably you could likewise replace neurons with holo-neurons. So if someone had, say, a degenerative nerve condition with neurons slowly dying, you could replace each one with a holo-neuron, and presumably the person's behaviour and biological function would never change. I guess some people think that as neurons get replaced by holo-neurons the person would somehow go from being sapient to non-sapient despite nothing changing; this is really unconvincing and unmotivated to me.
The fact that a given computer running them is not itself sapient (or not sapient in the same way, at least) is no problem. The atoms that make up your body are not intrinsically sapient (or at least not intrinsically your sapience), since they were happily the atoms of untold numbers of other people through history. Nothing you do can change the basic way the atoms that make you up behave; they will behave the same way (obey the same laws of physics). Now, the particular combination of atoms in series in time is in some way sufficient to be you; why, then, couldn't the right combination of computing actions in some way be the hologram?
Things get hairier when we consider whether there is really a difference in kind between, say, generating a machine's low-level behaviour with artificial neurons versus with more streamlined functional structures that produce the same behaviour and function.
Basically, I am sure that in principle any characteristic of human beings can be achieved in a machine, so either human beings are NOT sapient or machines can be. It is really tricky to tell how holograms and other Star Trek AIs are supposed to work, so I see no basis on which you could really rule out their sapience (unless we want to deny human sapience; maybe stopping human hearts is no more murder than turning off a lamp). Any physical characteristic could certainly be achieved to any level of specificity and detail you like, and heck, why not any non-physical property: if there is some non-physical soul stuff, say, who's to say Star Trek technology does not work by manipulating that? Also, if you are invoking some kind of non-physical stuff (or even physical stuff you don't understand) and arbitrarily saying no machine could have it, why not say brunettes lack it (brunettes don't have a soul, etc.)? That sounds about equally motivated: if the premise is that we just don't understand the principle involved, and so can't describe it or say what it applies to, then saying it cannot apply to brunettes makes as much sense as saying it cannot apply to machines. To me, anyway.
Note that often you can tell a machine is not sapient (or at least not sapient in the same way as a human), as in the example invoked elsewhere of a recording. A recording looks exactly like the original (from a certain point of view), but in this case we know exactly why it is not sapient like the thing recorded. First, just on a functional level, it says the same thing no matter what. Second, we can often take apart the recording, examine the tape or whatever, see the process by which the words it says are formed, and realize it is nothing like the causal process that produces the words in the original instance. Even if you take a bunch of recordings and have different ones play in response to prompts, that will be harder to catch out in conversation (in terms of function: theoretically, even with a finite number of recordings, say a recording of every phoneme in English, you could string them together into an infinite number of conversations, so we would never catch the machine failing to be original in the conversation). But it is still easy to analyze and dissect the causes of that thing's behaviour and see why it differs: you could essentially break it open and find every conversation tree. That does not seem to be the case with the causes of human beings having conversations; the options are not already stuck in the brain waiting for the right stimulus, they are generated by the structure of the brain. However, when we get into possibilities like the machine having artificial neurons (little bits of matter with the same kind of causal powers as natural organic neurons) and generating conversations by the combined action of those neurons, then it seems to me impossible to say what makes the machine's conversation different from a human's: either both are sapient or neither is.
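To make the "break it open and find every conversation tree" point concrete, here is a minimal sketch (in Python; the tree and phrases are invented for illustration) of a canned-response machine whose entire conversational repertoire is just a data structure you can mechanically enumerate, which is exactly what does not seem possible for a brain:

```python
# A canned-response machine represented as an explicit conversation tree.
# Its entire possible behaviour is a finite structure we can simply walk.
TREE = {
    "greeting":  {"say": "Hello.",                          "next": ["ask_mood", "ask_plans"]},
    "ask_mood":  {"say": "How are you feeling?",            "next": []},
    "ask_plans": {"say": "What do you have planned today?", "next": []},
}

def enumerate_conversations(node="greeting", path=()):
    """Yield every complete conversation the machine can ever produce."""
    path = path + (TREE[node]["say"],)
    if not TREE[node]["next"]:
        yield path                      # a leaf: one finished conversation
    else:
        for nxt in TREE[node]["next"]:
            yield from enumerate_conversations(nxt, path)

for convo in enumerate_conversations():
    print(" -> ".join(convo))
```

"Breaking open" this machine is trivial: the enumeration visits every branch, so nothing it could ever say is a surprise, unlike a system whose responses are generated on the fly by interacting neuron-like parts.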
I agree that machines and humans have (in principle) the same potential for free will, but I am more of the opinion that free will totally is a thing that exists.

Fianna wrote: ↑Sat Dec 19, 2020 9:56 pm If free will is the standard you're using, that's a problem, because free will doesn't actually exist.
Even for us humans, our decisions are just the result of chemical reactions and electrical impulses; under a sufficiently thorough analysis, everything we do is as predictable and mechanical as a lamp turning on when it hears clapping.
I have always thought that a better definition of what people tend to mean by free will actually requires determinism. Essentially, what most people worry about with free will is whether people are responsible for, and in control of, their actions.
Well, if we hold people responsible for their actions, it is because we think there is something about them that led to the action: their character, thoughts, and so on determined it. If people just unpredictably did whatever (or even one of two options, etc.), it would make no sense to hold them responsible for their actions, as nothing about them led to those actions.
Likewise, if the deterministic nature of things meant we don't control our actions, it would also mean that thermostats don't control the temperature in the room, which is a deep misunderstanding of the word "control". Determinism is precisely what makes a thermostat control the temperature in the room, and likewise it is just what I need to control my actions.
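The thermostat point can even be put in code. A minimal sketch (Python, with made-up heating/cooling numbers): the deterministic rule linking the reading to the action is exactly what makes the device count as controlling the temperature:

```python
# Deterministic feedback loop: the thermostat "controls" the temperature
# precisely BECAUSE its behaviour is fully determined by its reading.
# (Toy numbers: the room gains 0.5 degrees per step with the heater on,
# and loses 0.3 degrees per step with it off.)
def thermostat_step(temp: float, setpoint: float = 21.0) -> float:
    heater_on = temp < setpoint                 # deterministic decision rule
    return temp + (0.5 if heater_on else -0.3)  # room heats or cools

temp = 15.0
for _ in range(40):
    temp = thermostat_step(temp)
print(round(temp, 1))  # hovers near the 21.0 setpoint
```

If the thermostat instead acted randomly, the temperature would wander; it is the determined link between state and response that we rightly call control.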
Note that being unpredictable (whether due to quantum randomness, or to the feedback effect that you can yourself make predictions and so thwart others' predictions, etc.), rather than simply obeying stereotyped programming, is not unique to humans, is not hard, and does not make you intelligent or sapient. A thing that is too inflexible in its behaviour and thoughts, too predictable and limited, may fail to be intelligent; that does not mean flexibility is the same thing as intelligence. The things that make a human mind a mind are complicated and admit of endless degrees. So sure, lots of things will appear to be sapient that are not: they will have enough of the characteristics to seem like it for a while, but not enough (likewise, some human beings under the influence of drugs or a disease will not really be sapient/aware, but merely sleepwalking or the like). However, that is why I find the "machines just can't be sapient" argument bizarre: as far as I can see, there is no one little thing you can add or remove to the way a being is and behaves that turns it from really sapient into just an incredible simulation, so how could you uniquely identify what machines must lack in order to make the argument in the first place?
Last edited by AllanO on Sun Dec 27, 2020 8:19 pm, edited 1 time in total.
Yours Truly,
Allan Olley
"It is with philosophy as with religion : men marvel at the absurdity of other people's tenets, while exactly parallel absurdities remain in their own." John Stuart Mill
Re: A Look at Holograms and Ethics
That's fair, and while I did propose reasons why they'd use Holograms, the drawbacks you mentioned are just as valid, and ultimately the issue is repurposing the EMH Mark I's for such menial tasks; it's like using your old iPhone as a hammer. Honestly, Star Trek: Picard shows a more reasonable fate for obsolete EMHs by having them go to civilian use while Starfleet gets the latest model.

Nealithi wrote: ↑Wed Dec 23, 2020 11:09 am I think there are a few reasons for this, and I will skip the meta ones.

Link8909 wrote: ↑Tue Dec 22, 2020 11:34 am
I think there might be some good reasons for using Holograms for mining: they might have had something similar to the Holographic Projector that was beamed down to the planet in "Flesh and Blood", and not only would the Holograms not be hindered by things that humans would be, but if something happened it would be a simple matter of transferring them out.
However, like you said, the real issue is why the EMH. Why put what was the greatest breakthrough in Holographic technology, and the most sophisticated piece of medical hardware, to work at the most menial of tasks? Why not simply make a basic Hologram like those in "Flesh and Blood"? And if it's a matter of the Mark I being obsolete, why not update them?
One part is that there have been signs that all is not sunshine in the Federation. To compare it to the US: it has this ideal it believes in, but under the shiny surface there is still poverty and bigotry. In Trek we see this, for example, in "The Measure of a Man", where Data has to be proven to be more than just a machine, and in Pulaski considering him some 'thing'. And even during the Klingon civil war, when Data is given a command, his first officer asks to be reassigned because he can't serve under a machine. Working with Data should be no worse than working with a Vulcan, but that line of thought gets dismissed because: machine. And at least twice Picard has to be reminded of his own shortcomings on the life-form front: in "The Measure of a Man", when Data brings up Geordi's visor, and during said civil war, when Data asks why, of all the senior staff, he was not selected. And Picard, the beacon of rights, had to do a double take on his own actions or inactions.
So people of the Federation looking down on yet another 'tool' is to be expected. Zimmerman even says some of his EMHs are being used to scrub plasma conduits. Why? What benefit is there in that? And how many ships and bases have even needed to turn their EMH on long enough to decide this?
Then there is variance in the EMHs to begin with; each one seems to have a variable amount of competency. The Equinox crew commented that theirs could not hold tools properly.
So they get repurposed.
Now I have to go meta (sorry). Mining and cleaning plasma conduits seems idiotic, since you need emitters wherever the holograms are supposed to go. The writers put those in to show how difficult the holograms can have it, without thinking how ridiculous it is to repurpose them like that. Toss them in as holonovel characters, or holodeck maintenance? Okay. Scrubbing sickbay? Yup. But without the portable emitter it does not work elsewhere.
Something I like that gets overlooked is how Kes essentially helped the Doctor become sentient. He was basically a blank slate at the start of the series, and as you said, people in Star Trek do have this mindset of Holograms being just really, really advanced NPCs. But Kes didn't have that mindset: she treated the Doctor as a real person and asked him questions about himself that others wouldn't have. And because he was not only a blank slate but also designed to learn (such as with his original idea of creating holographic lungs), he started to develop his own personality from those questions.
"I think, when one has been angry for a very long time, one gets used to it. And it becomes comfortable like…like old leather. And finally… it becomes so familiar that one can't remember feeling any other way."
- Jean-Luc Picard
Re: A Look at Holograms and Ethics
I'm imagining EMHs having to scrub the holodeck clean after each sex-crazed orgy simulation. Now that is a pitiable fate.

Nealithi wrote: ↑Wed Dec 23, 2020 11:09 am So the writers put those in to show how difficult the holograms can have it, without thinking how ridiculous it is to repurpose them like that. Toss them in as holonovel characters, or holodeck maintenance? Okay. Scrubbing sickbay? Yup. But without the portable emitter it does not work elsewhere.
Re: A Look at Holograms and Ethics
When Star Trek says "hologram", it really means "artificial intelligence program".
It's super easy to make a program that reacts: if you say 'hello', the program will say 'hello' back. And you can even get a vague false semblance of intelligence by having the program ask you a follow-up question from a list, like "How are you feeling?" or "What do you have planned for today?". This program is limited to doing things that were programmed into it, and it's limited to only a couple of things, and you will notice this very quickly. We have programs like this in 2020.
The next, but hard, step is giving the program lots and lots of layers: tons and tons of reactions. If done right, you can get a vague, bad simulation of intelligence. You can have something "close" to a real conversation: no matter what you say, the holo/android program will pick a response from its list of dozens of responses. It might "feel real" for a bit... but it's not. The program is still just picking from pre-programmed responses. We have nothing like this in 2020. This covers all the computers we see in Star Trek and all 'normal' holograms.
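As a rough illustration of the "type one/two" idea above, here is a minimal sketch (Python; the phrase lists are invented) of a program that only ever picks from pre-programmed responses, no matter what you type:

```python
import random

# A "type one/two" program: every possible output is already written down.
# It never creates new content; it only selects from canned responses.
CANNED_RESPONSES = {
    "hello":       ["hello", "hi there"],
    "how are you": ["I'm fine, thank you.", "Doing well."],
}
FALLBACK = ["Please state the nature of the medical emergency."]

def respond(user_input: str) -> str:
    # Normalize the input into a lookup key.
    key = user_input.lower().strip("?!. ")
    # Pick from the pre-programmed list; nothing here "understands" WHY
    # "hello" should be answered with a greeting.
    return random.choice(CANNED_RESPONSES.get(key, FALLBACK))

print(respond("Hello"))            # one of the canned greetings
print(respond("warp core breach")) # anything unanticipated hits the fallback
```

Adding more entries ("lots and lots of layers") makes the illusion last longer, but the mechanism never changes: it is lookup and selection, not thought.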
But having the program, whether in an android or a hologram, know WHY it "should" say hello when it meets someone is a huge hurdle. This is the big, nearly impossible leap: a program with so much "programming" that it can somehow think for itself. It's not just reacting and picking pre-programmed responses... it is creating new content. Data, Vic Fontaine and The Doctor are here.
So nearly all Federation holograms are in that second category: they "seem" real only if you pretend that they are... they are complex illusions.
Moriarty is one of the one-in-a-million mistakes: a ton of type-two programming that made the "leap" to type three. An intelligence that is not just following a program, but can think for itself. It is never stated, but I think Moriarty was a big leap towards making sentient, self-aware programs: the "proto-programming" was there, but needed just a tiny spark to connect it. Also, they never say so... but I doubt Moriarty was unique: I bet dozens of such holo-people were created in the mid-24th century.
And Vic Fontaine is a perfect example: he is a new type-three program running a hologram.
And so is the EMH... sort of. The EMH is a first draft, made for short-term usage. It has an early 'prototype' of the type-three program, but it's made only to be a doctor. All of the EMH's self-awareness is directed at being a doctor: it can diagnose medical problems and use its database to think up and create new solutions, but ONLY for medical things. As we clearly see, the EMH has very little personality or bedside manner: it's just a pure doctor.
Voyager tells us a couple of times that the crew ADDED programs to the EMH, likely whatever 'type-three' programs they had or could write. But as a couple of episodes highlight, the core prototype 'type-three' program could not handle all the additions, to the point where the Voyager EMH came super close to being destroyed.
All the EMH Mark Is back in the Alpha Quadrant failed big time, though none were likely run for more than a couple of hours at most, and none ever had additional programs added to them. So they are not "enslaved", as they are not quite type-three programs: they are no different from a tricorder, except that they look human.
The EMH Mark Two is clearly a new type-three program, and no doubt so is the Mark Three.
And...well that takes us to Picard. Sigh. Within a couple years after Voyager Endgame the type three progam became sligtly less then common with the Federation having "Data like" androids and holograms. Untill the whole Ban thing...balh blah blah.
It's super easy to make a program that reacts: if you say 'hello' the program will say 'hello' back. You can even get some vague, false, not-really-intelligence by having the program ask you a follow-up question from a list, like "how are you feeling?" or "what do you have planned for today?" This program is limited to doing the things that were programmed into it, and it's limited to only a couple of things, and you will notice this very quickly. We have programs like this in 2020.
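As a toy sketch of that kind of purely reactive program (every phrase and reply here is invented for illustration, not from any show), the whole thing is just a lookup table:

```python
# A minimal "reactive" program: it only returns canned replies to inputs
# it was explicitly programmed to recognize. Anything outside the table
# exposes the illusion immediately.

CANNED = {
    "hello": "Hello!",
    "how are you feeling?": "I am functioning within normal parameters.",
}

def respond(user_input: str) -> str:
    # Normalize the input and look it up; unrecognized input falls through.
    return CANNED.get(user_input.strip().lower(), "I do not understand.")

print(respond("Hello"))               # Hello!
print(respond("What is dilithium?"))  # I do not understand.
```

The second call shows the limit the post describes: step one word outside the pre-programmed list and the program has nothing.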
The next, and much harder, step is giving the program lots and lots of layers: tons and tons of reactions. If done right you can get a rough simulation of intelligence. You can have something "close" to a real conversation: no matter what you say, the holo/android program will pick a response from its list of thousands of responses. It might "feel real" for a bit...but it's not. The program is still just picking from pre-programmed responses. We have nothing like this in 2020. This covers all the computers we see in Star Trek and all 'normal' holograms.
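A toy version of that layered, pick-from-a-list approach might look like this (all the keywords and replies are invented for illustration; the point is that nothing is ever generated, only selected):

```python
import random

# A "layered" canned-response program: many keyword rules, each with a
# pool of replies, plus stock follow-up questions as a fallback. It can
# feel conversational for a while, but it is still only selecting from
# pre-programmed responses.

RULES = [
    (("tired", "exhausted"), ["You should rest.", "Long shift?"]),
    (("ship", "warp"), ["All systems nominal.", "The engines are running smoothly."]),
]
FOLLOW_UPS = ["How are you feeling?", "What do you have planned for today?"]

def respond(user_input: str) -> str:
    text = user_input.lower()
    for keywords, replies in RULES:
        if any(keyword in text for keyword in keywords):
            return random.choice(replies)
    # No rule matched: deflect with a stock follow-up question.
    return random.choice(FOLLOW_UPS)
```

Adding more rules makes the illusion last longer, but the structure never changes: match, then select. That is the "second category" the post describes.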
But having the program, whether in an android or a hologram, know WHY it "should" say hello when it meets someone is a huge hurdle. This is the big, nearly impossible leap. This is...somehow...a program with so much "programming" that it can "somehow" think for itself. It's not just reacting and picking from pre-programmed responses...it is creating new content. Data, Vic Fontaine and The Doctor are here.
So nearly all Federation holograms are in that second category: they only "seem" real if you pretend that they are...they are complex illusions.
Moriarty is one of the one-in-a-million mistakes: a mountain of type two programming that made the "leap" to type three, an intelligence that is not just following a program but can think for itself. It never gets stated, but I think Moriarty was a big step toward making sentient, self-aware programs. The "proto-programming" was there, but it needed just a tiny spark to connect it. Also, they never say so...but I doubt Moriarty was unique: I bet dozens of such holo-people were created in the mid 24th century.
And Vic Fontaine is a perfect example. He is a new type three program running a hologram.
And so is the EMH...sort of. The EMH is a first draft, made for short-term usage. It has an early 'prototype' of the type three program, but it's made only to be a doctor. All of the EMH's self-awareness is directed at being a doctor. It can diagnose medical problems and use its database to think up and create new solutions: but ONLY medical things. As we clearly see, the EMH has very little personality or bedside manner: it's just a pure doctor.
Voyager tells us a couple of times that the crew ADDED programs to the EMH, likely whatever 'type three' programs they had or could write. But as a couple of episodes highlight, the core prototype 'type three' program could not handle all the additions, to the point where the Voyager EMH came very close to being destroyed.
All the EMH Mark Ones back in the Alpha Quadrant failed big time. Though none were likely run for more than a couple of hours at most, and none ever had any additional programs added to them. So they are not "enslaved", as they are not quite type three programs: they are no different from a tricorder, except that they look human.
The EMH Mark Two is clearly a new type three program, and no doubt so is the Mark Three.
And...well, that takes us to Picard. Sigh. Within a couple of years after Voyager's "Endgame", the type three program became almost commonplace, with the Federation having "Data-like" androids and holograms. Until the whole Ban thing...blah blah blah.
Re: A Look at Holograms and Ethics
My speculation/headcanon for Picard is that holograms actually work, to some extent, not as traditional AI but as full-blown simulations/emulations of humanoid brains. This explains why the captain in Picard has holograms who, because of brain scans, have some of his memories, mannerisms and the like. If they were just functional copies of him then they would have to have more explicit sources of information than that; it only makes sense if they are actually copying his neurological functions. This would also explain why holograms like that are not covered by the ban: they derive their higher powers of thought, decision making etc. from a simulation of a humanoid brain, and they do not surpass it or otherwise operate outside its parameters. A holographic pilot will be pretty much like an organic humanoid pilot (with the best possible computer-aided interface controls). They are as trustworthy or as dangerous as regular humanoids, and are no more subject to improvement, augmentation etc., so to ban them would require also banning all intelligent life.
Whereas Data-type androids actually operate on some kind of artificial intelligence through and through: their behaviours and cognitive functions are generated by some fundamentally different sort of implementation, some fundamental component other than neurons (of course, lots of aliens in Star Trek lack neurons too, so it is pretty arbitrary, but whatever). Such an AI pilot would fly and make decisions in a fundamentally different way than an organic humanoid.
That being said, there are lots of times holograms are depicted as working much like any other machine intelligence (you just treat them as abstract computer programs, not bound by emulating an organic brain), but this is the explanation I came up with while watching Picard.
Yours Truly,
Allan Olley
"It is with philosophy as with religion : men marvel at the absurdity of other people's tenets, while exactly parallel absurdities remain in their own." John Stuart Mill
- clearspira
Re: A Look at Holograms and Ethics
Well, in fairness, do you really want type 3 programs that can take over your ship, or do you want a highly useful but controllable type 2? Star Trek holograms are insanely powerful and dangerous. If you cannot take out the emitter, they are invincible save for technobabble. That was the central theme behind the "holograms vs. Hirogen" storyline, after all. One-on-one they are even more dangerous than the Borg, the Xenomorphs, the Predator, the Daleks, the Terminator etc., because you can at least eventually kill those using conventional methods. The only Star Trek enemies that rank above them would be the god-tier aliens such as Q or the Prophets.Zargon wrote: ↑Sun Dec 27, 2020 1:27 am When Star Trek says "hologram" they really mean "Artificial Intelligence Program".
It's funny. I once heard a woman on the radio describe being a woman as ''living in a world where half of everyone is stronger than you - and you wonder why we get scared sometimes.'' And I thought that was an interesting argument. You may be wondering where I am going with this, but if we take that as an argument, then working on a ship alongside sapient yet literally invincible people who cannot be shot, stabbed, beaten up or even locked up... well, it doesn't surprise me that Starfleet would want to switch to the mass-produced holograms we see in PIC. And remember, AIs rebelling against humans is one of the oldest Star Trek storylines there is. Can it really be called paranoia if it keeps on happening?
Re: A Look at Holograms and Ethics
Aw, you had to use that one. Dr. McCoy and Barclay both hated using the transporter and were afraid of it. And it had a history of failure. The holodecks themselves seem to have "lethal" as their default failure state. And fail often they do. Yet if I chose never to set foot in one, the people in the setting would consider me paranoid. (I have to use myself as an example, as I don't know of a Trek character afraid of the holodeck.)clearspira wrote: ↑Mon Dec 28, 2020 4:04 pm
So proper paranoia does not seem to be considered normal in the setting.