Making Enemies Less Dumb
It's been a while, hasn't it?
I'm still working on the next update to Golemancer. I haven't posted much about it because I'm not doing anything very interesting. Unless you're an options menu enthusiast who finds options menus really interesting... in which case, here you go!
That changes today, though, because I'm putting the boring UI stuff on the back burner. Instead, I'm going to finally fix something that's been a big problem ever since v0.1. You've probably noticed, but...
Enemies Are Really Stupid
Like the map generation system, which I covered in a previous devlog, the enemy AI was hastily thrown together with duct tape and string so that I could get the first version out. Before I get into improving it, here's an overview of how the current code works.
Each enemy has an AI Type, which determines what it actually wants to do with its turn. The AI types are:
- Aggressive: The most common type by far. Aggressive enemies try to get closer to golems (by taxicab distance) and use offensive skills when near.
- Cowardly: Used by "support" enemies, like the Skull Lantern and Ghoul Raiser. Cowardly enemies try to move away from golems and use offensive skills only if they happen to end up next to a golem anyway.
- Rammer: Used by the X-Cavator boss. It checks all four directions for Ram targets and chooses which direction to go based on that. It prefers to hit golems, falls back on inanimate objects (e.g. boulders), and avoids hitting its teammates.
- Blaster: Used by the Gem Lady boss. Similar to Rammer, but it counts its targets based on units in the same row and column (i.e. the ones that will be hit by Cross Beam.) It may decide not to use Cross Beam if it will hurt its own teammates more than its opponents.
- Follower: Used by the Gemlings that appear in the Gem Lady boss fight. Followers search for a boss enemy and designate it as their leader. They attempt to get closer to their leader (by actual straight-line distance, not taxicab distance), but they avoid standing on any of the four tiles surrounding the leader. This is to help them avoid getting hit by the Gem Lady's Cross Beam. If the leader dies, Followers behave like Cowardly enemies.
The AI will also keep track of skill types, to get an idea of what skills do. Unlike AI types, skill types aren't specifically defined by me - instead, the AI checks the skill's code directly to make a guess. The skill types are move (anything that moves the user), offensive (does damage), support (buffs allies), and unknown (none of the above). Skills that both move and attack (e.g. Ram) fall under the offensive category.
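To make that a bit more concrete, here's a rough sketch of the two type systems. This is Python-flavored pseudocode rather than the game's actual Godot code, and the attribute names (`deals_damage`, `moves_user`, `buffs_allies`) are made up for illustration:

```python
from enum import Enum, auto

class AIType(Enum):
    AGGRESSIVE = auto()
    COWARDLY = auto()
    RAMMER = auto()
    BLASTER = auto()
    FOLLOWER = auto()

class SkillType(Enum):
    MOVE = auto()
    OFFENSIVE = auto()
    SUPPORT = auto()
    UNKNOWN = auto()

def guess_skill_type(skill):
    # The type is guessed from what the skill's effects actually do,
    # not from a hand-written label. A skill that both moves and
    # attacks (like Ram) counts as offensive.
    if skill.deals_damage:
        return SkillType.OFFENSIVE
    if skill.moves_user:
        return SkillType.MOVE
    if skill.buffs_allies:
        return SkillType.SUPPORT
    return SkillType.UNKNOWN
```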
During the enemy turn, enemies act in the order that they're stored in by the game, which is basically random. When an enemy acts, it looks through all of its usable skills and gives each one a score, using both the AI Type and the skill type to decide how much it wants to use that skill right now. Then, it uses the skill with the highest score. If it has to select one or more tiles, it gives each tile a score as well (again using the AI Type and skill type) and picks the highest one. It repeats this process until there are no skills left it can use. (I'll sketch this loop out in code after the notes below.)
Some extra notes on this system:
- When an enemy is choosing between two golems for an offensive skill, it will always pick the one with the least health. (This is about the only smart strategic choice enemies are capable of.)
- Sometimes, a skill will get a negative score if using it would be worse than doing nothing (e.g. using Strike when the only adjacent units are its own teammates.) If all of its remaining skills have negative scores, the enemy will skip using any of them.
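Putting the pieces together, that decision loop looks roughly like this (again, a simplified Python sketch with made-up names like `score_skill` and `score_tile`, not the real code):

```python
def take_enemy_turn(enemy, board):
    while True:
        usable = enemy.usable_skills(board)
        if not usable:
            break
        # Score every usable skill based on the enemy's AI Type and the
        # skill's type, then take the best one.
        scored = [(score_skill(enemy, skill, board), skill) for skill in usable]
        best_score, best_skill = max(scored, key=lambda pair: pair[0])
        # A negative score means the skill is worse than doing nothing;
        # if nothing scores better than that, stop here.
        if best_score < 0:
            break
        if best_skill.needs_target:
            # Score candidate tiles the same way. For offensive skills,
            # the tile holding the lowest-health golem scores highest.
            tiles = best_skill.valid_targets(enemy, board)
            target = max(tiles, key=lambda t: score_tile(enemy, best_skill, t, board))
            enemy.use_skill(best_skill, target)
        else:
            enemy.use_skill(best_skill)
```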
Phew! Like with the map generation, this ended up being more advanced than you'd think for a quickly-thrown-together system. The problem is, code being "advanced" is usually not a good thing. Here are just a few of the problems with the current AI:
- It requires too much hardcoding. The X-Cavator and Gem Lady bosses each required me to add new AI Types just so that they could make semi-smart use of their special abilities. I don't want to have to keep adding more and more exceptions as I add more enemies.
- It looks through each enemy and each skill in a fixed order and asks "do I want to use this or not?" It doesn't consider cases where it might be smarter to use skills, or have enemies act, in a different order.
- The way that it scores which skill is best is a holdover from v0.1, where it could only use one skill per turn. Now, scoring them is kind of pointless, since it gets to use them all anyway. The only exception is the negative scores, where it's smarter to not use the skill at all.
So, how would I solve all of these problems? I don't want to have to specifically tell the game what each skill/passive ability does - that would greatly increase the amount of work needed to add anything new. Instead, I'd like to be able to add a new feature and have the AI immediately figure out how it works and how best to use it. That's where the game tree comes in!
Game Tree
A game tree is a way of mathematically modeling all the different things that could happen in a turn-based game. It's used to make AI for all sorts of games, including Tic-Tac-Toe, Chess, Go, Checkers, and way, way more. You start with an initial board state and consider all of the things that could happen on the first turn (for example, in Tic-Tac-Toe, you consider each of the nine places an X could go.) Then, for each of those possibilities, you consider all of the things that could happen after that (so, the eight places an O could go, depending on where the X went). This keeps going for as long as you want. For some simple games, like Tic-Tac-Toe, a computer can build the entire game tree, which is why Tic-Tac-Toe is a solved game - we know the best possible outcome from any position (with perfect play, it's always a draw). For other games, like Chess, the game tree is so unimaginably huge that no computer could possibly figure out the entire thing.
Game trees are very convenient for AI because you don't need to worry about actually knowing what an action does. Instead, you just try doing it, on an imaginary fake board, and add that board to your game tree. Then, try all of the actions that could happen after that, and make a fake board for all of them too. Keep doing this until your fake board ends up in a state where a player has won or lost the game. Now, you essentially have a map showing you what actions will lead to victory! (The reason it's called a "game tree" is because this "map" looks sort of like a tree, where each board has "branches" representing the other boards that could happen after it.)
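In code, the core idea really is just "copy the board, apply the action, recurse." Here's a bare-bones sketch of that - not the real implementation, and the board methods (`game_over`, `legal_actions`, `apply`) are placeholders:

```python
import copy

def build_game_tree(board, depth):
    # Each node holds the board it represents plus the boards that could
    # follow it (the "branches").
    node = {"board": board, "children": []}
    if depth == 0 or board.game_over():
        return node
    for action in board.legal_actions():
        # Try the action on an imaginary copy so the real board is untouched.
        next_board = copy.deepcopy(board)
        next_board.apply(action)
        child = build_game_tree(next_board, depth - 1)
        child["action"] = action
        node["children"].append(child)
    return node
```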
So, it's time to scrap all that old, cobbled-together AI code and have the AI start searching the game tree to find its best move instead. Let's see how it goes!
---------------------------
I know what you’re probably thinking. “Oh, those dashes mean Brian worked for two hours or so, and now he’s going to keep writing and tell me what happened!” Nope! I’ve been trying to get this new AI code working for over a week! If you’re wondering why this devlog took so long to put out, now you know.
The main issue was that I needed to be able to run a simulated battle, i.e. figure out what would happen when taking a certain action, without actually affecting the board. The game code wasn’t initially designed to do anything like this, so there were all sorts of bugs where the simulation would have side effects on the real board, or different simulations would get their data mixed up, or a skill wouldn’t work properly when simulated. With some help, I was finally able to get most of the kinks sorted out and come up with a working game-tree-based AI. Here it is in action!
How It Works, Sort Of
(Note: there’s a lot of research out there on making AIs using game trees. If you’re interested, you can find more information on Wikipedia or elsewhere on the web.)
You may have guessed this already, but this AI doesn’t make a map of the entire game tree. That would be way, way too big for the computer to figure out (much like with chess), and in some cases it may even be infinite. Instead, I compromise by limiting the AI’s search depth, which is how many moves ahead it will think. Currently, it thinks four moves ahead, and it only considers the actions it might take within its own turn. (That is, it decides where it should use its skills and in which order, for this turn only.)
So if the AI only looks ahead for its turn, how does it know it’ll win the fight? Well, I give it some hints in the form of a board score, which is a rough metric for “am I doing well or badly?” It considers all of the things it could do in the next four actions, and picks the combination that gives it the best score. The board score is calculated by giving each unit its own value, which is 2 times its health plus 2 times its attack plus the number of skills it knows. The final score is the sum of allied piece values minus the sum of opposing piece values. That way, the monsters will attack player units wherever possible, prioritizing units they can kill and targeting stronger units first.
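Written out as code, the scoring is just a weighted head-count of everyone on the board (illustrative Python again, not the actual implementation):

```python
def unit_value(unit):
    # Each unit is worth 2x its health, plus 2x its attack,
    # plus the number of skills it knows.
    return 2 * unit.health + 2 * unit.attack + len(unit.skills)

def board_score(board, my_team):
    allies = sum(unit_value(u) for u in board.units if u.team == my_team)
    opponents = sum(unit_value(u) for u in board.units if u.team != my_team)
    # Higher is better for whoever is doing the evaluating.
    return allies - opponents
```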
You might think that it would be easy for the AI to plan four moves ahead, but it’s still far too slow at times. The problem is that searching the tree is exponential - if there are eight possible actions to choose from, then searching one more move ahead will take eight times longer, and searching two more moves ahead would take 64 times longer, and so on. And eight is a conservative guess - depending on how many enemies there are, the number of possible actions could be 20 or more. So, unless I want to make my players wait three hours every time they hit “End Turn”, I’ll want some way to speed this up a bit.
In real life, to prevent trees from growing too big, you prune them. In computer programming, you do the same thing. (Except, you know, using code instead of shears.) In my case, I adjusted my code so that the board score would be checked at every step (not just the fourth one) and only the boards that tied for the best score would be considered. This cuts off several large “branches” of the tree, and should significantly increase the performance of the algorithm. On top of that, I reduced its search depth from four moves ahead to three. (This should still cover most of the clever tricks I’d want it to find.)
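Here's roughly what that pruned, depth-limited search looks like, reusing `board_score` from the sketch above (same caveats: illustrative Python, not the game's actual code):

```python
import copy

def plan_turn(board, my_team, depth=3):
    # Each entry in the frontier is a (simulated board, list of actions) pair.
    frontier = [(board, [])]
    for _ in range(depth):
        candidates = []
        for state, plan in frontier:
            for action in state.legal_actions():
                next_state = copy.deepcopy(state)
                next_state.apply(action)
                score = board_score(next_state, my_team)
                candidates.append((score, next_state, plan + [action]))
        if not candidates:
            break
        # Prune: only keep the boards that tie for the best score so far.
        best = max(score for score, _, _ in candidates)
        frontier = [(state, plan) for score, state, plan in candidates
                    if score == best]
    # Any surviving plan is as good as any other (by this greedy measure).
    return frontier[0][1] if frontier else []
```

Keeping only the exact ties is an aggressive prune, so it can miss plans that look bad for a step before paying off, but it keeps the enemy turn fast enough to be playable.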
Conclusion
Despite how long it took me to sort this out, I still haven't gotten it completely to my liking. When many enemies are on the board, there's a noticeable pause of a few seconds when you end your turn, which is the sort of thing that gets really annoying after a while. That said, the details of optimizing are boring, so I probably won't cover them in another devlog.
It’ll still be a while before the next major update is ready, but I hope you’ll enjoy it when it finally comes around. As always, thank you for reading!