Pathfinding: A* Search Algorithm

Karl Matthes
10 min read · Jul 5, 2020

A step up from Dijkstra’s algorithm is A* (read: “a star”). In terms of pathfinding, Dijkstra’s algorithm will happily try out every path and every vertex to find the shortest path between its starting point and destination, whereas A* has an extra attribute, a heuristic, that should allow it to find the shortest path without needing to check every path and vertex. Each algorithm has its use cases, but generally speaking, both Dijkstra’s and A* can find the shortest path; A* will just do it faster.

I would recommend knowing a bit about Dijkstra’s algorithm before going into this article, as there are a lot of points of comparison. Or better yet, read my last article on the subject! https://medium.com/swlh/pathfinding-dijkstras-algorithm-65e71c346629

Set-up

I’m going to need to set up a few things for my examples, plus give a definition of what a heuristic is. To start, let me show you the type of graph I want to use for my examples.

Hardly a graph. More of a checkerboard.

This is a little abstract, but I want to use a checkered board as my graph, and I want to allow movement between its squares. If I overlay the equivalent graph with vertices and edges, it should look something like this.

That’s too many lines. Maybe not so helpful…

Let’s take the center square/vertex, E, for example. It has eight neighboring vertices, so we can move up, down, left, and right, and in all four diagonal directions. And that’s really all I was trying to set up here: I’m going to be using a checkerboard, and I will allow movement in eight directions. A little long-winded, but now let’s start looking at movement and the cost of movement.

Starting off simple, I’ll say that moving up, down, left, or right has a movement cost of 1. The board is getting busy, so I’ll extract the graph and add that movement cost on as edge weights.

Because we are pathfinding, we can think of these weights as distance.

If these weights are distances, and if the relationships and neighbors are laid out like this, then geometry can tell us the cost of diagonal movements. Using the Pythagorean Theorem, we should get √(1² + 1²) = √2, which is roughly 1.41421356237… which isn’t a very nice number to work with. So, I’ll steal an idea (and cite my sources!) and scale the costs into whole numbers. For both types of movement, I’ll multiply them by ten and then truncate their decimal places. This will give us final costs of 10 for up, down, left, and right movements, and 14 for diagonal movements.

Nice round numbers for everything. This also means that making a diagonal movement is cheaper than making two non-diagonal movements to reach the same square (10 + 10 > 14).
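
If it helps to see those numbers in code, here’s a minimal sketch of how the two movement costs could be defined, using Python. The constant names are just my own labels, not anything standard:

```python
import math

# A straight move covers 1 square; a diagonal move covers sqrt(2) squares.
# Multiply by ten and truncate to get whole-number edge weights.
STRAIGHT_COST = int(1 * 10)               # 10
DIAGONAL_COST = int(math.sqrt(2) * 10)    # int(14.142...) -> 14

# One diagonal move (14) beats two straight moves (20) to the same square.
print(STRAIGHT_COST + STRAIGHT_COST > DIAGONAL_COST)   # True
```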

Heuristics

I feel like these are difficult for me to define, so I’ll start with Wikipedia’s opening line on the subject:

In computer science, artificial intelligence, and mathematical optimization, a heuristic (from Greek εὑρίσκω “I find, discover”) is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut.

Is that useful? Do you get it? Because I don’t get it. I mean, I kinda get it, and it is an accurate description, but if this were the first time you’d ever heard of a heuristic, I’m not sure you’d get the full picture from this. Maybe if I just tell you about A*’s pathfinding heuristic, you’ll get a better grasp.

With Dijkstra’s algorithm, the focus was on one value: the shortest distance from the starting vertex. Which, when you think about it, is sort of strange. You have this algorithm trying to find new, shorter paths to some destination, but its focus is always on where it started. Imagine someone going out for groceries, but walking backwards the whole way, trying to keep an eye on their home until they got to the store. In real life, this probably wouldn’t go well, but if you gave them enough time to wander all over town, they might eventually get there.

Going off of this goofy analogy, A* is different from Dijkstra’s because it suggests turning around and walking forward. A*, like Dijkstra’s, keeps track of how far from home it is, but its unique feature is that it also keeps track of how far it is from its destination. When making its next step, Dijkstra’s will look at which path is currently the shortest, but A* goes a step further and also considers whether that step is moving it closer to its goal. More simply, A* has a rough estimate of its remaining distance to its goal, and when that estimate gets closer to zero, it knows it’s moving in the right direction. This is the heuristic we’ll be using in our examples.
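
To make that concrete, here’s a rough sketch (in Python) of the “as the crow flies” heuristic I’ll be using in the examples below. The ×10 scaling matches the movement costs from the set-up section; the function name and the (column, row) representation are my own choices:

```python
import math

def heuristic(vertex, goal):
    """Straight-line ("as the crow flies") distance from vertex to goal.

    Both arguments are (column, row) squares on the checkerboard, and the
    result is scaled by 10 to match the edge weights of 10 and 14.
    """
    dx = abs(goal[0] - vertex[0])
    dy = abs(goal[1] - vertex[1])
    return math.sqrt(dx ** 2 + dy ** 2) * 10
```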

Stepping Through the Algorithm

This is going to be a rather simple example, but it should show you the basic flow and repetition of the algorithm. I’ll make sure to provide links in my references to other, more complicated examples, but this should be a good primer for those. I’ll use a set-up similar to how I demonstrated Dijkstra’s.

Let’s go over these components. First, we have a 4x3 board which will be our graph. Our goal will be to find the shortest path between A and L. I’m going to remove the letters once I start showing the steps, but here are their labels for now. Below the board are two sets, Open and Closed, used to keep track of the vertices. Unlike in Dijkstra’s, we don’t start with every vertex in a set (like “Unvisited”); instead, vertices are added to Open as they’re discovered as neighbors of the current vertex. Finally, the table: a lot like Dijkstra’s, but with two more columns, one for the heuristic distance and one for the sum of the two distances.

One more difference between Dijkstra’s and A* is that Dijkstra’s gets to start looping immediately, but A* has to do a little set-up for its starting vertex. So let’s do that now, and get a bit of an idea of how the table is going to work.

First, we’ll add A to the Open set. Second, we’ll figure out A’s distance from the start. Naturally, the distance from A to A is zero, so that will be added to the table, and I’ll have that be the number in the top left corner of A’s square. Third, we’ll determine the heuristic distance. We can do this any number of ways; all that matters is that we measure this distance the same way for every square. I’m going to be using an “as the crow flies” distance for all the heuristic distances, which means I can make use of the Pythagorean Theorem again. In this case, we’ll be calculating √(20² + 30²), and find that A is a distance of 36.0555127546… from L (I’ll round to the nearest tenth to save space). I’ll add this to the table, and place it in the top right corner of A’s square. Finally, the sum. I’ll add the shortest distance from the start to the heuristic distance, add it to the table, and place it in the center of A’s square.
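
Here’s what that set-up step could look like as code, reusing the heuristic function sketched earlier. The names g, h, and f are the conventional shorthand for distance from start, heuristic distance, and their sum; everything else is just my own naming:

```python
# Squares as (column, row) on the 4x3 board: A is (0, 0), L is (3, 2).
start, goal = (0, 0), (3, 2)

open_set = {start}      # vertices discovered but not yet finished
closed_set = set()      # vertices we are completely done with

g = {start: 0}                          # shortest known distance from the start
h = {start: heuristic(start, goal)}     # estimated distance to the goal: ~36.1
f = {start: g[start] + h[start]}        # the sum A* actually compares: ~36.1
previous = {}                           # traces each vertex back toward the start
```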

That’s our starting point, and we can now look at A’s neighbors, and add their values to the table.

First, we’ll add B, E, and F to the Open set. Next, we can find their shortest distances from the start, and because everything is one step from the start, it’ll just be 10s for B and E, and 14 for the diagonal move to F. For the heuristic distance, we’ll use the Pythagorean Theorem from each square to the ending vertex. Finally, we’ll get their sums, and add “A” to each of their previous vertex columns.
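
Continuing the sketch from above, filling in those three neighbors would look roughly like this; the g/h/f values in the final comment are my own calculations using the same rounding as the table:

```python
# B, E, and F as (column, row), paired with the cost of the move from A.
for neighbor, step in (((1, 0), 10), ((0, 1), 10), ((1, 1), 14)):
    g[neighbor] = g[start] + step
    h[neighbor] = heuristic(neighbor, goal)
    f[neighbor] = g[neighbor] + h[neighbor]
    previous[neighbor] = start
    open_set.add(neighbor)

# B: g=10, h≈28.3, f≈38.3    E: g=10, h≈31.6, f≈41.6    F: g=14, h≈22.4, f≈36.4
```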

At this point, we’ve done everything we need to with A, so it will be moved to the Closed set, and we’ll need a new current vertex. To determine which vertex should be examined next, we’ll look at the Open set and find the vertex that has the lowest sum. Right now, that is F with a sum of 36.4. So, we’ll make it the current vertex and start working out the values for its neighbors.
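
Those two moves, retiring A to the Closed set and picking the Open vertex with the lowest sum, are small enough to show directly, continuing the same sketch:

```python
open_set.remove(start)
closed_set.add(start)

# The next vertex to examine is the Open-set member with the lowest sum.
current = min(open_set, key=lambda vertex: f[vertex])   # F in this example
```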

F has eight neighbors, but only five are going to be updated here. Firstly, A is in the Closed set now, so it can’t be changed. For the remaining seven, we’ll add them to the Open set (unless they’re already part of it), and then work on their distances from the start. This is going to work a lot like Dijkstra’s algorithm: we’ll be adding F’s distance from the start to each of its neighbors’ distance from F. On our table, F’s distance from start is 14, so we’ll be filling in 14+10=24 or 14+14=28 for these. However, B and E already have shorter distances to the start, so their table records do not get updated. That leaves five vertices that will be updated, with C, I, and K getting 28, and G and J getting 24 for their distance from start. Next, use the Pythagorean Theorem for each of these vertices’ heuristic distances, then calculate their sums, and finally add “F” to the previous vertex column for the vertices that were updated.
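
In code, that neighbor-update step looks roughly like the loop below, again continuing the sketch. Here `neighbors` and `cost` are stand-ins for the board’s adjacency and the 10/14 edge weights rather than functions defined earlier, so treat them as assumptions:

```python
for neighbor in neighbors(current):        # assumed helper: up to eight surrounding squares
    if neighbor in closed_set:
        continue                           # finished vertices are never revisited
    tentative_g = g[current] + cost(current, neighbor)   # assumed helper: 10 or 14
    if neighbor not in open_set or tentative_g < g[neighbor]:
        g[neighbor] = tentative_g          # found a shorter route from the start
        h[neighbor] = heuristic(neighbor, goal)
        f[neighbor] = g[neighbor] + h[neighbor]
        previous[neighbor] = current       # remember how we got here
        open_set.add(neighbor)
```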

F is complete and will be moved to the Closed set, and a new current vertex will be chosen based on which vertex in the Open set has the lowest sum. It’s very close, but K’s sum is one tenth smaller than G’s (and yes, this is because of a rounding difference, but in this case it turns out to be a good tie breaker). K will be the new vertex to examine, so let’s start by looking at its neighbors.

K has five neighbors. F is already in the Closed set, and G and J are already in the Open set, so the last two, H and L, will be added to the Open set as well. Now, we’ll calculate their distances from start. K’s distance from start is 28, so we’ll be working with 28+10=38 for G, J, and L, and 28+14=42 for H. However, G and J already have smaller distances to start, so they will not be updated. We’ll find the heuristic distances for each, calculate their sums, and add “K” to the previous vertex column of each vertex that was updated.

K is complete and moved to the Closed set, and the next vertex in the Open set with the smallest sum is L, our destination!

Now that L is the current vertex, the algorithm ends, and we can use our table to trace the path back to the start:

  • L’s previous vertex is K
  • K’s previous vertex is F
  • F’s previous vertex is A, the starting vertex.

So, this means the shortest path is A > F > K > L.
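
Tracing the path back like that is easy to express in code as well; here’s a small sketch that walks the previous-vertex table backwards:

```python
def reconstruct_path(previous, start, goal):
    """Follow the previous-vertex table from the goal back to the start."""
    path = [goal]
    while path[-1] != start:
        path.append(previous[path[-1]])
    path.reverse()
    return path

# For this example, the result is the path A > F > K > L.
```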

Analyzing the Results

Compared to Dijkstra’s algorithm, A* has left quite a mess behind it. Let’s look at a few odd points. Starting off, look at vertex C and its distance to the start: 28. That seems weird because A and C are only two horizontal moves away from each other, so the shortest distance should be 20. Right now, C seems to think that its shortest path to A is two diagonal moves through F.

When Dijkstra’s algorithm finishes, it should provide a very complete table of distances from the starting point. Conversely, A*’s table has wrong values, and is completely missing vertex D. But it got the same answer that Dijkstra’s would have, and it did it much faster. Dijkstra’s would’ve wanted to look at all 12 vertices before declaring it had found the shortest path; A* only needed to look at 4 before it found the right path. This change in speed is because of the heuristic we were using: as often as we could, we wanted to be closing the distance to our destination. If we go back to Wikipedia’s definition of a heuristic, A* has traded completeness for speed.

Let’s try another small example, but with an obstruction on the way to the destination.

Starting point is the top left vertex, and A* is trying to find the shortest path to the bottom right vertex. The black squares represent a gap in the graph and cannot be traversed. If A* is following its heuristic of “always get closer to the destination”, those black squares are going to lead it to a dead end.

At this point, A* will look for the smallest sums, which will be the squares just below and to the right of the starting vertex. A* will explore both paths, but the downward path will run into another dead end, and the rightward path will find a way around the black squares. It might play out a little differently than this, but let’s assume this is how A* handled this scenario. Even when it got pulled into dead ends, it was able to go back and find another path, and it didn’t need to look at every vertex (the very top right vertex got skipped over again). This means that when A* is running at its absolute worst, it’s basically as fast as Dijkstra’s normal performance. Each algorithm has perfectly valid use cases, but in terms of pathfinding and finding the shortest paths, A* is the better choice.
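
To close the loop, here’s the whole walkthrough stitched into one self-contained Python sketch. It uses the same 10/14 movement costs and straight-line heuristic as the examples, and it can take a set of blocked squares like the black ones above, but it’s my own rough implementation rather than code from any of the sources below, so treat the names and board representation as assumptions:

```python
import math

STRAIGHT_COST, DIAGONAL_COST = 10, 14

def heuristic(vertex, goal):
    """Straight-line distance to the goal, scaled to match the edge weights."""
    dx, dy = abs(goal[0] - vertex[0]), abs(goal[1] - vertex[1])
    return math.sqrt(dx ** 2 + dy ** 2) * 10

def a_star(start, goal, width, height, blocked=frozenset()):
    """Shortest path on a width x height board with 8-way movement.

    start, goal, and the entries of blocked are (column, row) squares;
    blocked squares cannot be entered, like the black squares above.
    """
    open_set, closed_set = {start}, set()
    g = {start: 0}
    f = {start: heuristic(start, goal)}
    previous = {}

    while open_set:
        # The Open-set vertex with the lowest sum goes next.
        current = min(open_set, key=lambda v: f[v])
        if current == goal:
            # Trace the previous-vertex table back to the start.
            path = [current]
            while path[-1] != start:
                path.append(previous[path[-1]])
            return list(reversed(path))

        open_set.remove(current)
        closed_set.add(current)

        cx, cy = current
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                neighbor = (cx + dx, cy + dy)
                if not (0 <= neighbor[0] < width and 0 <= neighbor[1] < height):
                    continue                    # off the board
                if neighbor in blocked or neighbor in closed_set:
                    continue
                step = DIAGONAL_COST if dx and dy else STRAIGHT_COST
                tentative_g = g[current] + step
                if neighbor not in open_set or tentative_g < g[neighbor]:
                    g[neighbor] = tentative_g
                    f[neighbor] = tentative_g + heuristic(neighbor, goal)
                    previous[neighbor] = current
                    open_set.add(neighbor)

    return None    # the Open set ran dry, so no path exists

# The 4x3 example from the walkthrough: A is (0, 0) and L is (3, 2).
print(a_star((0, 0), (3, 2), width=4, height=3))
# [(0, 0), (1, 1), (2, 2), (3, 2)], i.e. A > F > K > L
```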

References:

These two are really great, and I combined parts of their primary demonstrations to make the example I used. I can’t say one is better than the other, and I highly recommend watching both. Also, demonstrating algorithms works so much better in a video format…

These next three are also good, but I took smaller parts and points out of them. Still, well worth watching them too.

Again, this is a great tool for visualizing and testing out algorithms.
