A Look at Holograms and Ethics
- Madner Kami
- Captain
- Posts: 4050
- Joined: Sun Mar 05, 2017 2:35 pm
Re: A Look at Holograms and Ethics
People regard a lot of dangerous things as completely harmless, despite them being total nightmare fuel if you look a bit too closely. Traffic deaths, for example. The likelihood of dying in a car crash or being run over isn't negligible, and yet nobody behaves as if they could die at any moment when out on the streets. There are many more such examples.
"If you get shot up by an A6M Reisen and your plane splits into pieces - does that mean it's divided by Zero?
- xoxSAUERKRAUTxox
Re: A Look at Holograms and Ethics
clearspira wrote: ↑Mon Dec 28, 2020 4:04 pm Well in fairness, do you really want type 3 programs that can take over your ship or do you want a highly useful but controllable type 2?
This has been part of Star Trek since TOS. The Constitution-class Enterprise's computer is super dumb. The Enterprise-D, a generation later, is just as dumb, as is nearly every human-made computer.
We even see a supercomputer, a 'near type 3', in TOS: the M-5. But after that fails there is no more computer AI advancement for generations. Data is of a rare, unique origin.
It has to be deliberate on the Federation's part, maybe something like a law against AI, because we never see it anywhere. And we just about never even see robots... unless you count all the "probes".
Would a human be scared to work with a bunch of robots or holograms? Maybe, but it's really no different than working with humans.
Re: A Look at Holograms and Ethics
Exactly!
Any human could turn a phaser to full blast and annihilate swaths of their coworkers with no warning, or detonate a phaser next to the warp core and instantly destroy the ship and everyone on it, and no one could do anything about it. And don't tell me these things don't happen: the guy in Into Darkness whom Khan got to blow up Starfleet Intelligence, for example, and no doubt many other crazies I am forgetting. Any human or humanoid could in principle use things like internal force shields to render themselves invulnerable to harm from their fellows; strangely, they rarely do so.
Most warfare in Star Trek is ship to ship, and it does not matter much tactically or strategically whether a ship is crewed by humanoids, AIs or holograms: blowing up the ship is usually sufficient to at least eliminate the threat, and if you can't penetrate the ship's shields, anyone on that ship is invulnerable to harm from you. So the advantages AIs and holograms gain from their physical structure are limited. Holographic projections are not affected by harm the way solid things (including, say, robots or androids) are, but the projectors/emitters are often far more vulnerable: if they are fixed in place they can't dodge attacks, and the Doctor's mobile emitter was forever being damaged; as I recall it was once taken out by a macrovirus attack that amounted to simple stabbing. Holograms could easily be more vulnerable than humanoids to certain kinds of physical attack; for example, if someone forgot to waterproof the emitter, you could take one out just by throwing a cup of water at it (hopefully the hologram would spend its final moments saying "I'm melting! I'm melting! What a world!").
Androids and robots are all over the map in terms of how durably they are constructed, although few seem immune to the effects of a phaser on maximum. Likewise, some humanoids are much tougher than others, and I doubt any human would be able to do much bare-handed against an alien like the Horta (the silicon-based life form from "The Devil in the Dark"). A Horta is probably about as durable as any robot or android we have seen on Star Trek...
Yours Truly,
Allan Olley
"It is with philosophy as with religion : men marvel at the absurdity of other people's tenets, while exactly parallel absurdities remain in their own." John Stuart Mill
Re: A Look at Holograms and Ethics
Madner Kami wrote: ↑Mon Dec 28, 2020 10:12 pm People regard a lot of dangerous things as completely harmless, despite them being total nightmare fuel if you look a bit too closely. Traffic deaths, for example. The likelihood of dying in a car crash or being run over isn't negligible, and yet nobody behaves as if they could die at any moment when out on the streets. There are many more such examples.
Or jumping out of a perfectly good airplane.
Point.
Re: A Look at Holograms and Ethics
Madner Kami wrote: ↑Mon Dec 28, 2020 10:12 pm People regard a lot of dangerous things as completely harmless, despite them being total nightmare fuel if you look a bit too closely. Traffic deaths, for example. The likelihood of dying in a car crash or being run over isn't negligible, and yet nobody behaves as if they could die at any moment when out on the streets. There are many more such examples.
It's not negligible, but it's not high enough to be something to be rationally worried about, particularly if you don't drive like an idiot. Considering some of the more absurd responses, I'd say there's a bigger problem with people getting overly worked up about the risk (a compulsory system in all new cars to automatically call emergency services with your location in an accident? Oh please - have that if you want, but making it compulsory is very definitely paranoid, nanny-state, technology-obsessed nonsense and not the sort of world I want to live in. I would much, much rather take the risk than have to have a bloody safety rope for ordinary day-to-day tasks).
When it comes to robots or holograms or whatever you want (I've never seen a reason to differentiate them despite Trek making holograms out as a special case) the risks are the impact on society rather than them rising up and wiping out their human overlords, or, as this whole thing is about, the ethical considerations if you make them self-aware.
Re: A Look at Holograms and Ethics
Riedquat wrote: ↑Tue Dec 29, 2020 11:40 am
Madner Kami wrote: ↑Mon Dec 28, 2020 10:12 pm People regard a lot of dangerous things as completely harmless, despite them being total nightmare fuel if you look a bit too closely. Traffic deaths, for example. The likelihood of dying in a car crash or being run over isn't negligible, and yet nobody behaves as if they could die at any moment when out on the streets. There are many more such examples.
It's not negligible, but it's not high enough to be something to be rationally worried about, particularly if you don't drive like an idiot. Considering some of the more absurd responses, I'd say there's a bigger problem with people getting overly worked up about the risk (a compulsory system in all new cars to automatically call emergency services with your location in an accident? Oh please - have that if you want, but making it compulsory is very definitely paranoid, nanny-state, technology-obsessed nonsense and not the sort of world I want to live in. I would much, much rather take the risk than have to have a bloody safety rope for ordinary day-to-day tasks).
When it comes to robots or holograms or whatever you want (I've never seen a reason to differentiate them despite Trek making holograms out as a special case) the risks are the impact on society rather than them rising up and wiping out their human overlords, or, as this whole thing is about, the ethical considerations if you make them self-aware.
A few things on compulsory safety. Seat belts: studies show they save lives, and they are standard features in cars, but they had to make laws to get people to actually wear them. Same with drop lanyards for high-lift work vehicles (think of a forklift where you go up with the forks), and of course airbags in cars. On the one hand, if you choose not to use such a device, I would not care - your risk, after all. But how many lawsuits come down on the driver for not making his passenger buckle up? Or on the manufacturer for not making these devices mandatory for the machine to work at all?
And then the flip side. Many safety items could be ignored by companies as an 'unnecessary expense'. I am pretty sure I want the guard that keeps my hand out of a moving machine, with a cutoff so that if the guard is lifted the machine shuts off, because the same company won't care when your arm is turned to hamburger. "Oh well, you can't do the job anymore. Don't let the door hit you on your way out. Next!" There are crap companies even with these kinds of compulsory rules in place. Imagine how much worse it would be without them.
So sadly I lean toward compulsory safety.
Re: A Look at Holograms and Ethics
I lean towards some level of it. It's not a black-and-white, all-or-nothing thing. But I would certainly prefer to take my chances in a world where we're less nannied. That certainly doesn't mean I want to go back to Victorian working conditions, and there are still areas that could certainly do with being safer than they are. But we're also turning into a society that seems to live in fear of the remote chance, can't cope without the safety-line comfort blanket for many simple, ordinary tasks, and yet refuses to take much self-responsibility. The balance isn't right (and going overboard in places has also had a rather clear "boy who cried wolf" effect, leaving some things not regarded with the seriousness they deserve).
All the stuff that's got to be in new cars now I really feel has gone past the level of reasonable precautions into the territory of the absurd. It also hits my intense hatred of treating everyone like unruly children, and my equally strong dislike of piling more technology into things that worked fine without it. It puts me off buying a new car by a country mile. Which is a pity, because an electric car would be ideal for my usage.
The future of humanity is looking more and more like the humans in Wall-E.
Oh - that's a perspective from the UK; it'll obviously differ from place to place.
Re: A Look at Holograms and Ethics
The advantage of a humanoid over a hologram is that humans aren't programmable. We saw that without his ethical subroutines the Doctor can be the ultimate monster, and those subroutines can be turned off easily. There is no way to know if you're talking to Data or to Lore. They have no fear of death. They don't need you, and needing each other is important for relationships to exist.
Re: A Look at Holograms and Ethics
drewder wrote: ↑Tue Dec 29, 2020 3:22 pm The advantage of a humanoid over a hologram is that humans aren't programmable. We saw that without his ethical subroutines the Doctor can be the ultimate monster, and those subroutines can be turned off easily. There is no way to know if you're talking to Data or to Lore. They have no fear of death. They don't need you, and needing each other is important for relationships to exist.
There are more than enough humans you could say the same about.
Whether a sufficiently advanced machine would fear death is an open question. If it's potentially going to be something that'll threaten us (as opposed to a dumb one following some fairly simple routines, which could still be a big threat), then it would have to have the motivation to do so. And if it's got that motivation, it'll probably have its own self-preservation as one too. Any that didn't would probably be fairly easy to defeat.
Re: A Look at Holograms and Ethics
What Star Trek writers don't seem to understand is that what we call the hologram is the equivalent of a computer monitor. It's an essentially dumb user interface; the "brains" are in the ship's computer's applications and databases. I cringe whenever I hear the Voyager Doctor refer to his "program", as if it were one single mass of code. That's not how application development works. Even fairly simple web interfaces use a three-tier structure of web server, app server, and database. An e-commerce site can have hundreds of separate APIs running in their own memory spaces, along with additional applications that present the user interface and make it all look like a single website to the customer.
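To put that three-tier split in concrete terms, here's a minimal sketch in Python - the names and data are made up purely for illustration. The point is that the presentation layer owns no state or logic; it only renders what the application tier hands it, the same way the post argues a hologram is just the ship computer's display surface.

```python
# --- Data tier: where the actual state lives (stands in for the database) ---
PATIENT_RECORDS = {
    "torres": {"name": "B'Elanna Torres", "allergies": ["analgine"]},
}

# --- Application tier: the "brains" -- queries, validation, business logic ---
def get_treatment_plan(patient_id: str) -> dict:
    record = PATIENT_RECORDS.get(patient_id)
    if record is None:
        return {"error": f"no record for {patient_id!r}"}
    drug = "analgine" if "analgine" not in record["allergies"] else "terakine"
    return {"patient": record["name"], "prescribe": drug}

# --- Presentation tier: a dumb renderer, the "monitor"/hologram equivalent ---
def render(plan: dict) -> str:
    if "error" in plan:
        return f"[EMH display] {plan['error']}"
    return f"[EMH display] {plan['patient']}: administer {plan['prescribe']}"

if __name__ == "__main__":
    # The interface never touches the records directly; take away the
    # application and data tiers and it has nothing left to show.
    print(render(get_treatment_plan("torres")))
    print(render(get_treatment_plan("paris")))
```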
I hate the mobile emitter because at best it should be a remote user interface: it should stop working the moment communications with Voyager - and thus contact with the computer - are lost. They'd have us believe it downloads everything that makes up the Doctor and somehow erases it from the ship's computer in the process. Why would that be? It's a copy.
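Both of those points are easy to sketch - again, purely an illustration with invented file names, not anything from the show: a thin client dies the moment the link drops, and "transferring" a program is just copying bytes, so the original survives unless someone deliberately deletes it.

```python
import shutil
import tempfile
from pathlib import Path

def project_emh(ship_reachable: bool) -> None:
    """A thin-client emitter: no link to the ship's computer, no Doctor."""
    if not ship_reachable:
        raise ConnectionError("Link to ship's computer lost - projection ends.")
    print("Rendering EMH streamed from the ship's computer...")

with tempfile.TemporaryDirectory() as tmp:
    # "Download" the program to the emitter: this is a copy, not a move.
    ship_copy = Path(tmp) / "emh_on_ship.bin"
    ship_copy.write_bytes(b"EMH Mark I program")
    emitter_copy = Path(tmp) / "emh_on_emitter.bin"
    shutil.copy(ship_copy, emitter_copy)

    print(ship_copy.exists(), emitter_copy.exists())  # True True: both survive

    project_emh(ship_reachable=True)
    try:
        project_emh(ship_reachable=False)
    except ConnectionError as err:
        print(err)
```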
There's literally no reason to use the EMHs as labor, because if they had no value they could simply be shut off and erased. There are certainly bound to be better solutions for mining in a civilization with transporters, phasers, and replicators.