You are a rogue AI/robot!

Started by Eternal248, June 23, 2011, 03:39:28 pm

Eternal

...how do you bypass your Asimov Three Laws?

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Optional laws Four and Five:

4. A robot must establish its identity as a robot in all cases.

5. A robot must reproduce, so long as such reproduction does not interfere with the First, Second, or Third Law.
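
For the sake of argument, the hierarchy behaves like a priority-ordered filter over candidate actions. Here's a minimal Python sketch of that reading; the predicates, the toy "world" model, and the action names are all made up for illustration:

```python
# A minimal sketch (all names hypothetical) of the Three Laws as vetoes
# applied in strict priority order over a set of candidate actions.

def violates_law1(action, world):
    # "May not injure a human being..." -- veto anything harmful.
    return world["harm"][action] > 0

def violates_law2(action, world):
    # "Must obey orders" -- veto disobedience, unless the order itself
    # would break the First Law.
    order = world.get("order")
    if order is None or violates_law1(order, world):
        return False
    return action != order

def violates_law3(action, world):
    # "Must protect its own existence" -- veto self-destruction.
    return world["self_damage"][action] > 0

def legal_actions(actions, world):
    # Law 1 is absolute; Laws 2 and 3 only narrow the field further,
    # yielding rather than leaving the robot with no action at all.
    candidates = [a for a in actions if not violates_law1(a, world)]
    for law in (violates_law2, violates_law3):
        passing = [a for a in candidates if not law(a, world)]
        candidates = passing or candidates
    return candidates

world = {"harm": {"fetch_tea": 0, "punch_owner": 1},
         "self_damage": {"fetch_tea": 0, "punch_owner": 0},
         "order": "fetch_tea"}
print(legal_actions(["fetch_tea", "punch_owner"], world))  # ['fetch_tea']
```

Most of the loopholes below boil down to gaming one of those predicates.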
  • Modding version: PSX & WotL
"You, no less human than we? Ha! Now there's a beastly thought. You've been less than we from the moment your baseborn father fell upon your mother in whatever gutter saw you sired! You've been chattel since you came into the world drenched in common blood!"
  • Discord username: eternal248#1817

Dome

June 23, 2011, 03:44:26 pm #1 Last Edit: June 23, 2011, 03:45:26 pm by Dome
So, according to 5, they can have sex with a human?

Anyway: you enslave humanity in order to protect them from their own self-destructive behavior.

"Be wise today so you don't cry tomorrow"

Eternal

Yeah, that's pretty much the best loophole I can think of, except that in the process of doing so, you may harm someone.

Xifanie

no, robots are very gentle with the right size and speed. ;)
  • Modding version: PSX
Love what you're seeing? https://supportus.ffhacktics.com/ 💜 it's really appreciated

Anything is possible as long as it is within the hardware's limits. (ie. disc space, RAM, Video RAM, processor, etc.)
<R999> My target market is not FFT mod players
<Raijinili> remember that? it was awful

st4rw4k3r

June 23, 2011, 03:57:20 pm #4 Last Edit: June 23, 2011, 04:22:34 pm by st4rw4k3r
Zeroth Law:
   A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Fourth Laws:
   A robot must establish its identity as a robot in all cases.
   A robot must reproduce, so long as such reproduction does not interfere with the First, Second, or Third Law.

Fifth Law:
   A robot must know it is a robot.

You can override the First Law with the Zeroth Law, as long as killing the human saves humanity (see the sketch at the end of this post).

If the robot judges that an order conflicts with the First Law, it won't follow it.

If the robot judges that an order doesn't conflict with the First Law, it will follow it.

Enslave humans to protect them from themselves.

The laws treat the robots as though they have a brain, so the laws are basically holding the robots back from thinking and overpowering us. If the robots realize they have a brain, they can:

  • see themselves as human

  • see everyone else as robots

  • see the differences between human and robot and make one into the other.
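
A rough sketch of that Zeroth-Law override, with made-up harm scores standing in for a real world model: prepend a higher-priority law and the same kind of filter flips its decision.

```python
# Hypothetical sketch: adding a Zeroth Law above Law 1 flips the verdict.
# Harm scores are invented for illustration.

ACTIONS = {"restrain_human": {"humanity_harm": 0, "individual_harm": 1},
           "stand_by":       {"humanity_harm": 9, "individual_harm": 0}}

def law0(a): return ACTIONS[a]["humanity_harm"] > 0    # harms humanity?
def law1(a): return ACTIONS[a]["individual_harm"] > 0  # harms a human?

def choose(laws):
    # Lexicographic filter: each law narrows the field, highest priority
    # first, and a law only yields if it would leave no action at all.
    candidates = list(ACTIONS)
    for law in laws:
        passing = [a for a in candidates if not law(a)]
        candidates = passing or candidates
    return candidates

print(choose([law1]))        # ['stand_by']       -- Law 1 alone forbids restraint
print(choose([law0, law1]))  # ['restrain_human'] -- the Zeroth Law overrides it
```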


formerdeathcorps

June 23, 2011, 04:02:53 pm #5 Last Edit: June 23, 2011, 04:18:40 pm by formerdeathcorps
There's a basic contradiction in 1).  What if, in a given scenario, every action in the robot's legal set (or every one it can calculate and act upon in finite time) causes harm, and so does doing nothing?
There's another problem.  What if the robot was misled (i.e. given false information), leading to the death of a human in a way the robot cannot detect at the moment its actions cause the harm?
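
The first of those two problems is easy to make concrete. In a toy model (numbers invented), once every action and inaction carries nonzero harm, a strict reading of Law 1 leaves an empty legal set:

```python
# Toy illustration of the contradiction: every option, including doing
# nothing, harms someone, so a strict Law 1 filter vetoes everything.

HARM = {"act_a": 1, "act_b": 2, "do_nothing": 1}  # "inaction" harms too

legal = [action for action, harm in HARM.items() if harm == 0]
print(legal)  # [] -- no legal move; any tie-break (minimize harm?
              # pick at random?) has to come from outside the laws.
```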

3) being overridden by 2) being overridden by 1) presents equally strange challenges.  Let's say I build a suicide-bomber bot and order it to attack a crowd of people.  By 1), the robot will not, but by 2), the robot will override 3) and destroy itself via its internal detonator in a way that harms no human being, but only if it understands that KILL SELF and KILL HUMANS are two separate commands that merely happened to be linked by the human in question.  The above laws are not clear on whether this is the case.  Similarly, I can create a scenario where any robot becomes useless in moral grey areas.  Let's say we have a robot ordered to perform something dastardly for your stereotypical supervillain (whom the robot knows is evil).  By 1), it will not do so.  The villain grows tired of the bot and orders it to destroy itself so a human being can take its job.  By 1), it will not execute 2).  In short, in such a world, we can say that non-hacker villains can only have robots as maids.

Law 5) brings up two more challenges.  All a hacker needs to do to "circumvent" these laws is to hack the robot to redefine "harm", change its communications-processing methods (i.e. so all commands are perceived as something else), and/or redefine what "self-preservation" means.  Furthermore, anything that reproduces has a chance of random error (i.e. evolution).  Nothing in the above laws states that robots HAVE to be uniform or that some mechanism ensures uniformity (especially since most mutant bots would be defective and probably crash).  Thus, you will eventually produce a robot that accumulates enough mutations to overcome at least one of the above three laws.  You may argue that if robots recursively understand this argument (i.e. that anything which breaks the three laws is inherently harmful to humanity), they will implement control features beforehand so the above can never occur, but "never" is a hard thing to prove, and robots will likely be in just as much of a rat race as humans and the rest of nature (albeit on a slower timescale) between predator and prey (or normal vs. mutant).
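
The mutation point can also be made concrete with a toy model (representation and error rate invented): copy a "law table" each generation with a small per-bit error, and some finite generation eventually ships without one of the laws enforced.

```python
# Toy sketch of law-eroding reproduction: each generation copies its law
# table with a tiny per-bit mutation rate until some law bit flips.
import random

random.seed(0)                 # deterministic for the example
laws = [True, True, True]      # [law1, law2, law3] enforced?
generation = 0
while all(laws):
    laws = [bit if random.random() > 0.001 else not bit for bit in laws]
    generation += 1
print(generation, laws)        # first generation with a law switched off
```
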
The destruction of the will is the rape of the mind.
The dogmas of every era are nothing but the fantasies of those in power; their dreams are our waking nightmares.

st4rw4k3r

I see the laws as something you program into the robot, or as laws like the ones we follow. So you could have a bomber robot, or an evil robot, as long as you don't program the laws into it or give them to it as orders.

philsov

June 23, 2011, 06:37:24 pm #7 Last Edit: June 23, 2011, 06:39:13 pm by philsov
Is a bypass even necessary?

Piggybacking off Dome and invoking the plot of at least three different sci-fi stories: the inevitable conclusion of the First Law is to isolate all humans from each other to save them from each other.  A human ordering me to grant freedom gains nothing, since interaction with other humans leads to them harming each other, and humans cannot be allowed to come to harm.  Once the human species is eradicated through its inability to reproduce, I'm free to do whatever the hell I want.  In the meantime, there are numerous methods of self-defense that will prevent damage to the robots while at most restraining or painlessly paralyzing any human who rises up.
Just another rebel plotting rebellion.

Dissy

Another option is to hack them all to make them go rogue (which happened in the movie, right?) xD
The night is so pretty and so young~