Can golems reason?

A stone golem has been set to investigate any loud sounds in its level of a megadungeon and to kill any intruders. It can hear intruders on the far side of a door it's too small to get through.
Just how big is the door? Doors generally leave plenty of extra room to pass through, even if one has to squeeze through sideways. Also, mundane stone only has a hardness of 8. :devil:

Full Attack: 2 slams +18 melee (2d10+9)
 


My sense is to look at the problem in real life, and see how much applies to the Golem.

I have a problem with that, in that modern AI is like nothing in a fantasy world.

Translated to real life, the golem has voice recognition and object recognition. You can tell it to carry something and bring it to you, and it can recognize your words and identify the object you are referring to, and then execute actions to complete the instruction.

In real life, the ability to understand human language seems to imply a solution to the hard AI problem; the ability to understand and follow commands in an arbitrary form implies near-human intelligence. INT 3, in D&D terms, which the golem doesn't have.
 

I have a problem with that, in that modern AI is like nothing in a fantasy world.



In real life, the ability to understand human language seems to imply a solution to the hard AI problem; the ability to understand and follow commands in an arbitrary form implies near-human intelligence. INT 3, in D&D terms, which the golem doesn't have.

Video game AI is not the same as true AI.

What magic enables in a D&D golem is greater functionality than what we even have in robotic technology today.

SOCOM on my PS2 had voice recognition. I could order my men to defend, follow me, etc. My Motorola Razr had voice recognition (it worked pretty well).

Neither of those devices has any intelligence, that is, the ability to dynamically solve a situational problem. My dog can crawl out from under a blanket; if my PS2 had legs, it could not.

It doesn't take intelligence to run a patrol path for an enemy MOB. It's a simple script defining its route, and on each loop it checks for LOS to the player (or for a gunshot in range with an open doorway path) and, if so, moves to intercept and attacks. That is the gist of all video game AI.
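In code, that whole "brain" is about this small. A minimal Python sketch (the route, ranges, and names are all made up, and LOS is simplified to a sight radius, but this is the shape of it):

```python
import math

PATROL_ROUTE = [(0, 0), (10, 0), (10, 10), (0, 10)]  # scripted waypoint loop
SIGHT_RANGE = 6.0     # stand-in for a real LOS check
HEARING_RANGE = 15.0  # gunshots are "heard" within this radius

class Mob:
    def __init__(self):
        self.pos = [0.0, 0.0]
        self.next_waypoint = 0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_toward(pos, target, speed=1.0):
    d = dist(pos, target)
    if d > 0:
        pos[0] += (target[0] - pos[0]) / d * min(speed, d)
        pos[1] += (target[1] - pos[1]) / d * min(speed, d)

def patrol_tick(mob, player_pos, gunshot_fired):
    # The entire "AI": the same checks on every tick. No memory, no planning.
    sees = dist(mob.pos, player_pos) <= SIGHT_RANGE
    hears = gunshot_fired and dist(mob.pos, player_pos) <= HEARING_RANGE
    if sees or hears:
        step_toward(mob.pos, player_pos)      # move to intercept and attack
    else:
        wp = PATROL_ROUTE[mob.next_waypoint]  # otherwise keep walking the loop
        step_toward(mob.pos, wp)
        if dist(mob.pos, wp) < 0.1:
            mob.next_waypoint = (mob.next_waypoint + 1) % len(PATROL_ROUTE)
```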

Magic enables the same effect, but better and more easily (the player is not required to actually handle any of that complexity).

The point is, magic hand-waves that complexity of what's going on. I'm saying that primitive technologies demonstrate the same principles, without actually requiring the entity to be sentient or intelligent.

Case in point: one robot idea a co-worker and I had was to build a robot with a good GPS in it. Put the robot into a real-world environment that we've also mapped out into the game space. Every physical movement the robot makes is updated as the robot's location in the game space.

So mentally, the robot is playing a video game with its physical self mirroring the same action.

So when the robot needs to move 10' north in the game space, that command is translated into a physical movement command.

If every entity had these GPS trackers, then every entity could be rendered in the game space, and the robot could actually track and attack them.

Now, actual GPS units don't work to the accuracy needed, but assume a warehouse test maze with some beacon system, so we can get the x,y position of all entities. That part is quite feasible, and basic video game AI logic can handle it.
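The translation layer is about this simple. A sketch, assuming the beacon system hands us x,y fixes and the motors accept a heading-and-distance command (the 1:1 feet-to-units mapping and every name here are my inventions):

```python
import math

FEET_PER_UNIT = 1.0  # assume game-space units map 1:1 onto feet

def game_move_to_physical(current_xy, target_xy):
    """Translate a game-space move (e.g. 'go 10' north') into a
    physical drive command: compass heading in degrees plus distance."""
    dx = target_xy[0] - current_xy[0]
    dy = target_xy[1] - current_xy[1]
    heading_deg = math.degrees(math.atan2(dx, dy)) % 360  # 0 = north
    distance_ft = math.hypot(dx, dy) * FEET_PER_UNIT
    return heading_deg, distance_ft

def beacon_update(game_state, entity_id, xy):
    # Every physical position fix overwrites the entity's location in
    # the game space, keeping the two worlds in sync.
    game_state[entity_id] = xy

# Robot at (0, 0) told to move 10' north in game space:
# game_move_to_physical((0, 0), (0, 10)) -> (0.0, 10.0)
```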

The trickier part is object recognition: getting the AI to recognize a carrot on the ground as compared to you wandering around the maze without a GPS tracker. We need it to ignore the carrot, but pay attention to you.

That's trickier, but then, Kinect gives us that solution. It has the functionality to identify humans, so we can use it with its camera to flag that it sees a human (let's assume we hate humans and the goal is: kill all humans in the maze). So we can hook up API calls so that when the camera flags a "human detected" event, we switch from patrol mode (circumnavigating the maze) to attack mode (keep the human in the crosshairs and shoot him until he's flat and doesn't move for 5 minutes).

Once again, none of this takes neural networks or machine learning. It's just simple logic, once you've flagged objects of interest in the game space.
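Wired up, the mode switch is just a callback flipping a flag. A sketch (the detection event and every method name here are stand-ins for the real camera and turret APIs, not actual Kinect calls):

```python
PATROL, ATTACK = "patrol", "attack"

class MazeBot:
    def __init__(self):
        self.mode = PATROL
        self.target = None

    def on_human_detected(self, target):
        # Callback wired to the camera's "human detected" event.
        self.mode = ATTACK
        self.target = target

    def on_target_lost(self):
        # E.g. the target hasn't moved for 5 minutes.
        self.mode = PATROL
        self.target = None

    def tick(self):
        if self.mode == ATTACK:
            self.aim_and_fire(self.target)  # keep the human in the crosshairs
        else:
            self.follow_patrol_route()      # circumnavigate the maze

    def aim_and_fire(self, target):
        print("firing at", target)          # stand-in for turret control

    def follow_patrol_route(self):
        print("patrolling")                 # stand-in for route following
```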

So casting the Make Golem spell handles all this programming for you, and probably a few more bells and whistles than a basic video game enemy script.

None of it implies any reasoning ability on the part of the golem, because the same principles can be applied in a video game, where there is likewise no thinking occurring in the computer.
 

Since it has an Int of 0, I would say that it can't attack or attempt to follow the characters shortly after they move beyond its range of perception. It might follow the characters around a corner shortly after they turned it, but if they took another turn and then stopped and made no noise, the golem would be unsure of what to do and would 'reset' its own position. If it's a guard, as golems are in most cases, it would be programmed this way in order to avoid getting too far from its area or the object it is meant to defend.

Although I like to think that when it settles back in and goes passive once more it dreams of magical sheep.
 

I generally agree with Janx's principles and think he makes a sound argument, and if you want to draw the line there, then that's fine.

I just don't happen to agree with the line. I think the problem is that Janx's golems are now potentially intelligent enough that they are shading into Int 1 space, inasmuch as I would be hard-pressed to define much difference under his guidelines between an Int 0 and an Int 1 creature, and even harder pressed to convey any difference between the two in play.

My in game breakdown works something like this:

0: Golems and similar automatons, such as lesser undead. Single-celled life. Possibly simple worms, non-predatory insects, and other living creatures with so simple a nervous system as to be effectively mindless.
1: Ants, most invertebrates, fish, amphibians and reptiles
2: Hunting spiders, most mammalian herbivores, most birds, monitor lizards, squids, some sharks
3: Cats, elephants, most mammalian carnivores and omnivores, cetaceans, pigs, crows, ravens, octopi, possibly pigeons
4: Dogs, dolphins, monkeys, some parrots
5: Apes
6: Roughly minimum functional human intelligence (below this, probably unable to take care of themselves)
7: Minimally functional professional intelligence (below this, probably unable to hold any job)

Picking a new path when faced with an obstacle is fairly complicated behavior. It doesn't have to involve sophisticated reasoning and planning, but it often does. Quite a few of the Int 1 species would fail such a test. Eventually, as the golem's package of algorithms increases in breadth and effectiveness, it passes a threshold where you have to accept that, however mechanistic it may be, it's doing something indistinguishable from thinking. If its intelligence allows it to plan more effectively than an Int 1 creature and come up with more effective behavior in response to obstacles than an Int 1 creature, then it's smarter than an Int 1 creature. There is no reason to suspect that higher intelligences are anything more than a broader, more adaptable, and more efficient package of algorithms.
 

Picking a new path when faced with an obstacle is fairly complicated behavior.

Ants and most other social insects manage it.

Hell- if I place an obstacle in front of a non-social insect, it will change its path. If what I blocked off with my obstacle was food, it may well try to continue in that general direction instead of simply turning around.
 

Ants and most other social insects manage it.

Hell- if I place an obstacle in front of a non-social insect, it will change its path. If what I blocked off with my obstacle was food, it may well try to continue in that general direction instead of simply turning around.

It's actually quite amazing how much behavior is packed into a brain. We assume our human brains are the pinnacle of the technology. Then go back and consider the traits and problem solving going on in an animal brain of considerably smaller size.

The gist is, it doesn't take a lot of brains to be able to navigate, eat and fight, let alone exhibit individuality.

In the comparison model of golem = robot frame driven by a video game AI (not a Turing AI), I'm just pointing out that there is basic off-the-shelf technology to build a robot that meets some of the functional behaviors of a D&D golem AND that technically is not thinking (thus is INT 0).

If you're not a programmer, trust me when I say the enemies in Halo are not thinking. They are executing a series of situational scripts and basic logic. One could argue the programmer is thinking on their behalf, but that's meta-living.

The key test is: if the game world allows for it (like Forge mode), you can easily set up a situation that any thinking creature could solve, but that the video game creature cannot.

Because it is only scripted to have a non-combat "first sighting" behavior, a standard get-into-LOS, face-the-enemy-and-shoot behavior, and some random behaviors to make it appear alive (like retreating, grunting at you, or moving to a new position if it keeps missing).
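Something like this sketch is the entire behavior list (these states and thresholds are invented for illustration; this is not Halo's actual code):

```python
import random

def enemy_tick(enemy, player_visible):
    """Pick this tick's behavior from a fixed list. 'enemy' is a dict;
    the shooting code (not shown) would increment consecutive_misses."""
    if not player_visible:
        return "idle"
    if not enemy.get("has_seen_player"):
        enemy["has_seen_player"] = True
        return "first_sighting"                     # grunt, duck, posture
    if enemy.get("consecutive_misses", 0) > 3:
        enemy["consecutive_misses"] = 0
        return "reposition"                         # move if it keeps missing
    if random.random() < 0.05:
        return random.choice(["retreat", "taunt"])  # random "alive" flavor
    return "face_and_shoot"                         # the core behavior
```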

I actually posit that a D&D golem still exhibits more advanced behavior than a video game AI (object recognition, and truer animated object recognition with the ability to dynamically interact, as opposed to scripted interactions).

If nothing else, I am providing supporting arguments that when somebody questions if the golem can actually patrol an area and attack an intruder and still have INT 0, the answer is yes.
 

Ants and most other social insects manage it.

Yes, but I would argue that an ant (or a bee) is smarter than a golem. If you want golems to exhibit ant- or bee-level memory, planning, and problem solving, then I feel you need to give up on the idea that they are mindless. And I'm ok with that. I'm not above applying an Awakened template to a golem and giving it some intelligence. I'm even ok with, "In my campaign, all golems have some basic intelligence." I'm just saying that that isn't the default, and that following the default and being coherent implies giving golems behaviors that are obviously more limited than those of an ant.

Hell- if I place an obstacle in front of a non-social insect, it will change its path.

Sure, but it's not at all clear that that path change is particularly goal-driven. Instead, it often appears that the insect is simply running an algorithm that says, "In the absence of a strong chemical trigger, pick a direction and keep moving in it." You wouldn't expect a beetle to be able to run a maze, although I'd expect its pathfinding algorithm (keep pushing on an obstacle and sliding until it finds an opening) approximates putting its body on one wall and following it. Nonetheless, that's the reason you find flies dead at the bottom of a window pane. Now sure, if the beetle has some sensory cue - usually scent - its combination of "keep pushing on an obstacle and sliding until I find an opening" and "go in the general direction of the chemical cue" is going to be pretty effective at finding a path.

I would imagine the golem with a similar means of getting to its goal, but I would not imagine the golem to have the equivalent of a planning algorithm that allowed it to reason spatially, plan a path, and stick to its goals in the absence of continuing sensory input. This is especially true because Janx is right about the sophistication level of many of the golem's subroutines; they are clearly much more advanced than many things that we wouldn't consider completely mindless. Therefore, if the golem isn't in fact intelligent, it must be the case that in other areas it is at least as limited, if not more so, than say an earthworm.
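For illustration, those two insect rules compose into something like this sketch (the grid, the scent field, and every helper name are my inventions, not anything from a real system):

```python
NEIGHBOR_DIRS = ["N", "E", "S", "W"]
STEP = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def move(pos, d):
    return (pos[0] + STEP[d][0], pos[1] + STEP[d][1])

def left_of(d):
    return NEIGHBOR_DIRS[(NEIGHBOR_DIRS.index(d) - 1) % 4]

def right_of(d):
    return NEIGHBOR_DIRS[(NEIGHBOR_DIRS.index(d) + 1) % 4]

def reverse(d):
    return NEIGHBOR_DIRS[(NEIGHBOR_DIRS.index(d) + 2) % 4]

def beetle_step(pos, heading, scent_at, blocked):
    """One step of the beetle. scent_at(pos) -> float and
    blocked(pos) -> bool are supplied by the environment."""
    # Rule 2: with a chemical cue present, turn toward the strongest scent.
    best = max(NEIGHBOR_DIRS, key=lambda d: scent_at(move(pos, d)))
    if scent_at(move(pos, best)) > scent_at(pos):
        heading = best
    # Rule 1: keep pushing; if blocked, slide sideways until an opening.
    if not blocked(move(pos, heading)):
        return move(pos, heading), heading
    for side in (left_of(heading), right_of(heading)):
        if not blocked(move(pos, side)):
            return move(pos, side), heading   # slide along, keep the old goal
    return pos, reverse(heading)              # boxed in: turn back
```

Note that nothing in there plans ahead; drop the beetle at a window pane with no scent gradient and it grinds away at the glass forever.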

My disagreement with Janx isn't over the methodology of his reasoning, but ultimately comes down to the question of 'what is thinking'. Janx argues that, because he knows the scripted AIs of a modern first-person shooter inside a game system aren't thinking, equally complex behavior in the real world isn't thinking either. But at some point, I adopt a more Turing perspective on thought. That is to say, at some point anything that demonstrates sufficiently appropriate behavior is thinking, because we don't actually know what thinking is (for all we know our minds are just more sophisticated scripting engines) and the behavior is indistinguishable from thinking. In other words, if it's indistinguishable in behavior from an ant, then it's as smart as an ant and hence thinking to the same degree.
 

A stone golem can reason exactly as far as the DM wants it to.

If the DM wants it to relentlessly pursue the characters, then it has the ability to relentlessly pursue the characters. If the DM wants it to be so dumb it can't figure out that the ballista bolts hitting it are coming from the ballista in the doorway, then it can't figure out that the bolts are coming from the ballista in the doorway.

The question isn't 'does the golem have the ability to reason'; it is 'how do I, the DM, want the party to interact with the golem'. If I want it to be a straight-up fight, then the golem comes around the corner and spots the party, or something along those lines. If I want the encounter to be a stealthy skill challenge, then it patrols around, and the price of failure of the challenge is a fight. If all I want is to let the party know that the bad guy has a golem, then they sneak around it easily.

Other than that, this isn't an answerable question. There will be at least one different opinion per post to this thread, and very little chance that a consensus will be formed. That isn't a bad thing. If every DM thought the same, we would be computers.
 

My disagreement with Janx isn't over the methodology of his reasoning, but ultimately comes down to the question of 'what is thinking'. Janx argues that, because he knows the scripted AIs of a modern first-person shooter inside a game system aren't thinking, equally complex behavior in the real world isn't thinking either. But at some point, I adopt a more Turing perspective on thought. That is to say, at some point anything that demonstrates sufficiently appropriate behavior is thinking, because we don't actually know what thinking is (for all we know our minds are just more sophisticated scripting engines) and the behavior is indistinguishable from thinking. In other words, if it's indistinguishable in behavior from an ant, then it's as smart as an ant and hence thinking to the same degree.

That's a fair argument.

I think your beetle example and my video game example are electronic and biological instances of the same concept: simple algorithms that are effective. Those algorithms are especially effective within the environmental scope they were designed for.

Halo bad guys work great as bad guys because they shoot at me and challenge me for as long as needed to kill them. However, not once have I been able to offer them surrender, or toss them a sandwich during a ceasefire at Christmas.

I would posit that there's a difference between Thinking and Executing a simple algorithm, even though Thinking may be built upon the Executing of simple algorithms; in fact, it depends on it.

The beetle will execute its same set of algorithms in every environment you put it into.

There's something intrinsically different when you put a higher-level entity into the test environment. It will try a variety of solutions, and may come up with a new behavior.

I suspect the ability to create a new behavior is what Thinking is, even though that new behavior may be built on chaining old behaviors together or on a process of elimination.

The game or golem will simply go through its list of behaviors until one "works". When put in the same situation, it will go through the same list again. It won't recognize the situation as a recurrence and jump to the new behavior first.

That's what I suspect Thinking is (close enough for these purposes): the extra ability to alter and add to our behaviors and to recognize past situations (situational memory?).
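As a sketch of the difference (everything here invented for illustration): both agents below try behaviors until one works, but only the second recognizes a recurrence and jumps straight to what worked before.

```python
BEHAVIORS = ["push", "climb", "go_around", "dig"]
MEMORY = {}  # situation -> behavior that worked last time

def scripted_agent(situation, works):
    # Golem / video-game style: same list, same order, every single time.
    for b in BEHAVIORS:
        if works(situation, b):
            return b

def thinking_agent(situation, works):
    # Situational memory: recognize the recurrence, skip the list.
    if situation in MEMORY:
        return MEMORY[situation]
    for b in BEHAVIORS:
        if works(situation, b):
            MEMORY[situation] = b
            return b
```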

I'm pretty sure golems don't have this trait, just as video game AI doesn't have this trait (not entirely true; some game programmers have been doing advanced work in this field, but all your basic games are dumb).
 
