A Simple A* Pathfinding Algorithm


Pathfinding is a fundamental tool commonly used to move characters around a map intelligently. Here I will be going over the A* algorithm written in C++.

It’s important to visualize how pathfinding algorithms search for the most efficient path. Here is a neat demo using a JavaScript library called Pathfinding.js. By playing with Pathfinding.js, you should notice that the algorithm first tries to go straight for the end point, and only when it hits a blocked node will it look for nodes that follow a less direct path.

The Algorithm

In a nutshell, the A* algorithm uses a graph of nodes, each with an associated weight, which is used to compare paths from the start node to the end node.

What the weight represents is arbitrary, since A* can be applied to different scenarios. For navigating a map of squares, for example, this value could be based on distance and steps taken. The algorithm processes neighboring nodes, adding all neighbors to an “open list” and then moving on to the lowest-cost node in that list.

Given that the open list begins with the start node (as opposed to being empty), the main loop of the algorithm looks like this (a code sketch appears after the structures in the Implementation section):

  1. Get next node from open list (now known as current node)
  2. Skip node if in closed list or out of bounds
  3. Add current node to closed list
  4. Remove current node from open list
  5. Process neighboring nodes (North, West, South, East)
    1. Set current node as parent of neighbor if weight is less

In all, the algorithm uses three lists:

  • Open List:  Keeps track of which nodes to search next and is ordered by weight
  • Closed List: All nodes that have already been searched
  • Path: Linked list that connects the end node to the start node

To clarify the path: multiple paths actually exist, because less efficient paths may be found before the most efficient one. The end node will hold the final path, and that is all we care about.

Implementation

Next we’ll go over the structures and functions involved with this algorithm. One core structure is the node.

Node Structure

As mentioned before, this is implemented in C++, and the graph is composed of nodes.

Here is the Node structure with only its members:

struct Node {
    int x;                 // Grid coordinates
    int y;
    int index = 0;         // Index into the grid vector (for quick lookups)
    int distance;          // Distance component used in the weight
    bool blocked = false;  // True if this node cannot be traversed
    int weight = 0;        // Cost used to order the open list
    Node *parent = NULL;   // Previous node in the path (linked list)
};

I use indices to keep track of nodes. This includes determining which node is the start and end, as well as which nodes have already been processed (closed list) and which nodes should be processed next (open list).

Note: The index  struct member is there for convenience, so a coordinate to index lookup doesn’t need to be run every time.
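
For a row-major grid, that lookup might look something like this sketch (GRID_WIDTH is a hypothetical constant for the number of columns, not something defined in the post):

// Hypothetical coordinate-to-index conversion for a row-major grid.
// Caching the result in Node::index avoids recomputing it on every access.
int coordToIndex(int x, int y) {
    return y * GRID_WIDTH + x;
}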

The path list mentioned before is a linked list created through the parent  struct member, which points to another node.
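
For example, once the search has finished, the final path can be read back by walking those parent pointers from the end node (end_node here is a hypothetical pointer to that node):

// Walk the parent pointers from the end node back to the start node,
// then reverse so the path reads start -> end.
std::vector<Node*> path;
for (Node* n = end_node; n != NULL; n = n->parent)
    path.push_back(n);
std::reverse(path.begin(), path.end()); // requires <algorithm>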

Grid and List Structures

Nodes are stored as pointers in a vector. The open list is implemented as a multiset with a comparison function, NodeComp (a possible definition is sketched below), so sorting happens automatically. Lastly, the closed list is implemented as a vector of ints that stores the indices of already-processed nodes.

typedef std::multiset<Node*, NodeComp> NodeList; // So we don't need to type out the full definition everywhere

NodeList open_list; // Node pointers to process
std::vector<int> closed_list; // Node indices already processed
std::vector<Node*> grid; // All nodes
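
The comparison function and the main loop aren't shown above, so here is a minimal sketch of how they might look with these structures. The cost formula (one step per move, with distance as the heuristic), the start/end indices, and the neighborIndices() helper are assumptions for illustration, not necessarily the post's actual code.

#include <algorithm> // std::find

// Orders the open list so the lowest estimated total cost comes first:
// steps taken so far (weight) plus remaining distance to the end (distance).
// In practice this is declared before the NodeList typedef.
struct NodeComp {
    bool operator()(const Node* a, const Node* b) const {
        return (a->weight + a->distance) < (b->weight + b->distance);
    }
};

// Sketch of the main loop described earlier. neighborIndices() is a
// hypothetical helper that returns the in-bounds N/W/S/E neighbor indices.
void findPath(int start_index, int end_index)
{
    open_list.insert(grid[start_index]); // open list starts with the start node

    while (!open_list.empty()) {
        Node* current = *open_list.begin();     // 1. lowest-cost node
        open_list.erase(open_list.begin());     // 4. remove from open list

        if (std::find(closed_list.begin(), closed_list.end(),
                      current->index) != closed_list.end())
            continue;                           // 2. skip if already searched

        closed_list.push_back(current->index);  // 3. add to closed list
        if (current->index == end_index)
            break;                              // reached the end node

        for (int ni : neighborIndices(current)) { // 5. process neighbors
            Node* neighbor = grid[ni];
            if (neighbor->blocked)
                continue;

            int new_weight = current->weight + 1; // one step per move
            if (neighbor->parent == NULL || new_weight < neighbor->weight) {
                // Erase before mutating so the multiset ordering stays valid.
                auto it = std::find(open_list.begin(), open_list.end(), neighbor);
                if (it != open_list.end())
                    open_list.erase(it);

                neighbor->weight = new_weight;  // 5.1 cheaper path found
                neighbor->parent = current;     // link used to build the path
                open_list.insert(neighbor);
            }
        }
    }
}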

RPG Slots Progress – Cocos2d Review



I started developing a slots game using the Cocos2d C++ SDK. Cocos2d is an open-source game development framework that supports C++, Objective-C, and JavaScript. Eventually, this game will evolve into a slots RPG, like King Cashing and King Cashing 2. For now it uses pretty generic reels and a match-3 mechanic that matches on the same row as well as cells on adjacent rows. For score, it keeps experience points, since this will transition into the RPG slots.

Source for the project can be found here. As of this writing, it’s in early development.

Like many game frameworks, Cocos2d has many helper functions that allow for quick game prototyping. Scenes are easy to construct, and assets, sprites, and audio can be added using built-in Cocos2d objects. It even supports custom shaders.

Extending Sprites

I quickly found that I needed to create custom objects that extend sprites. In this project there are two classes that extend cocos2d::Sprite: the reel and the HUD.

Grouping elements within sprites helped with organizing code and separation of concerns. I did run into strange memory errors when trying to add certain objects, such as cocos2d::Label, directly to the scene while also holding a pointer to them in the scene.

Autorelease

The Cocos2d C++ framework uses a smart-pointer technique to automatically destroy dynamically allocated objects when their internal reference count reaches 0. This relieves the pressure of remembering to destroy objects and worrying about pointer ownership, though cyclical dependencies still need to be avoided.

The built-in Cocos2d objects are automatically added to the autorelease pool, so there is no need to use the new keyword directly. In my project, I have an object that extends the Cocos2d sprite, so there’s some boilerplate code I needed to add for my object to be added to the autorelease pool as well.

ReelSprite* ReelSprite::create(const std::string& filename, std::vector<int> _cells)
{
    ReelSprite* mainSprite = new ReelSprite();

    mainSprite->init(filename);
    mainSprite->autorelease(); // <-- ADD TO AUTORELEASE POOL
    mainSprite->cells = _cells;

    return mainSprite;
}

ReelSprite::create  is a static method that follows the Cocos2d convention of constructing an object and adding it to the autorelease pool. mainSprite->autorelease()  is the line that actually adds the object to the autorelease pool, so that it does not have to be manually destroyed.
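
As a hypothetical usage example (the filename, cell values, and position are placeholders), the scene can then create the reel like any other Cocos2d node:

// Creating a reel inside a scene's init(). addChild() retains the sprite,
// so the autoreleased object stays alive for as long as the scene holds it.
std::vector<int> cells = { 0, 1, 2, 1, 0 };
ReelSprite* reel = ReelSprite::create("reel.png", cells);
reel->setPosition(cocos2d::Vec2(160, 240));
this->addChild(reel);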

Screen View to World Coordinates


I needed a map editor with more features than what I saw included in TILED back in August, so I decided to try my hand at creating my own. It’s just an interactive grid, right? Not quite, at least not in the approach I took.

I started writing the tile editor with C++ and SDL. Implementing drag functionality was pretty easy since that was baked into the SDL API; however, I didn’t want to build the UI widgets from scratch. Unfortunately, the existing UI libraries I found weren’t compatible with SDL, so I had to pivot and use straight OpenGL and matrix math.

Because I was ditching the SDL framework, I had to implement my own drag logic, which is what I will discuss in this post.

Moving Objects with Mouse Picking

I needed the ability to select objects in 3D space, which led me to a technique called mouse picking. This technique utilizes ray casting, which is how you detect whether a line (a ray) intersects with something else.

The article “Mouse Picking with Ray Casting” by Anton Gerdelan helped explain the different planes/spaces and what they represented.

In order to move the objects in 3D space at a distance that matched the mouse movement, I had to transform the coordinates between screen and world spaces. When working with 3D coordinates, there are several spaces or planes that have their own coordinates.

A very simplified list of these spaces is:
Screen Space > Projection (Eye) Space > World Space > Model Space.

(Diagram: the coordinate transformation pipeline, from Anton Gerdelan’s “Mouse Picking with Ray Casting”.)

Fully understanding the transformation formula was a challenge for me. Normalization and calculating the inverse Projection Matrix tripped me up due to a combination of confusion and erroneous input.

The Solution

Here are some code examples of the final working solution.

Initialization of Projection, View, Model

Projection = glm::perspective(1.0f, 4.0f / 3.0f, 1.0f, 1024.1f);
GlobalProjection = Projection;

CameraDistance = 10.0f;
GlobalCameraDistance = CameraDistance;

View = glm::lookAt(
    glm::vec3(0, 0, CameraDistance), // Camera location
    glm::vec3(0, 0, 0),              // Where camera is looking
    glm::vec3(0, 1, 0)               // Camera head is facing up
);
GlobalView = View;

Model = glm::mat4(1.0f);
GlobalModel = Model;

MVP = Projection * View * Model; // Matrix multiplication

View to World Coordinate Transformation

// SCREEN SPACE: mouse_x and mouse_y are screen space
glm::vec3 Application::viewToWorldCoordTransform(int mouse_x, int mouse_y) {
    // NORMALISED DEVICE SPACE
    double x = 2.0 * mouse_x / WINDOW_WIDTH - 1;
    double y = 2.0 * mouse_y / WINDOW_HEIGHT - 1;

    // HOMOGENEOUS SPACE
    glm::vec4 screenPos = glm::vec4(x, -y, -1.0f, 1.0f); // y is flipped because screen y grows downward

    // Projection/Eye Space
    glm::mat4 ProjectView = GlobalProjection * GlobalView;
    glm::mat4 viewProjectionInverse = inverse(ProjectView);

    glm::vec4 worldPos = viewProjectionInverse * screenPos;
    return glm::vec3(worldPos);
}

At first this algorithm felt a bit magical to me. There were things going on that I wasn’t entirely wrapping my head around, and when I stepped through the algorithm I got lost at the inverse matrix multiplication. In addition, the “Mouse Picking” article normalizes the world-space values, which we don’t need.
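
As a rough sketch of how the drag logic might consume this transform (onMouseMove, selected_object, and setWorldPosition are hypothetical names, not the editor’s actual API):

// Hypothetical mouse-motion handler: convert the cursor position into
// world space and move the currently selected object so it follows the mouse.
void Application::onMouseMove(int mouse_x, int mouse_y)
{
    if (selected_object == NULL)
        return;

    glm::vec3 world = viewToWorldCoordTransform(mouse_x, mouse_y);
    selected_object->setWorldPosition(world.x, world.y);
}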

Out of Phase: Genre Considerations


This is part of a series for the Out of Phase game project that reflects on various stages, covering pros and cons of the creative process and implementation of the game. The first post can be found here.

Deciding what type of game you’re creating can be difficult while the concept is in its infancy. There are so many ideas you want to use, and many of those may conflict with each other at first, requiring some compromise in order for them to gel.

In Out of Phase, there was a struggle to combine puzzle and action mechanics. I was trying to integrate elements from the two genres that I knew were successful by themselves but incompatible with each other. To resolve this, I had to consider what experience I wanted to give the player and how the mechanics from each genre would contribute to that experience.

Some ideas I had to let go of, as they were too far from the vision, and other ideas had to be reworked to fit the core game concept. Here’s a reflection of that journey.

Puzzles

First I’ll start off with what kind of puzzler this is not, but what I originally thought it was going to be.

Some of my favorite games are point-and-click puzzlers, such as Myst or Escape the Room games such as Crimson Room. In these types of games, the player can progress at their leisure. While there may be action sequences, they usually don’t require interaction from the player, though there are some exceptions where the player must make a timed decision during the action sequence. The timing is typically pretty lax however.

Games like Myst, Crimson Room, or even The 7th Guest are what I would consider leisure puzzlers. They are typically slower paced compared to an action game and focus more on immersion and an experience. Death in these games is pretty rare, and is based on a decision rather than an action, so the player is given a high level of safety while exploring the worlds. The focus of these games is on immersing the player in a fantasy world, and death or abrupt twitch mechanics tend to draw the player out.

While these games are fun, they weren’t the style I was looking for. Instead, I wanted to go with something more real-time and physical, like Maniac Mansion or the more recent Ib. I wanted to give the player a different kind of tension, the feeling that they may need to react quickly to avoid getting injured or killed, which deviated from leisure puzzlers.

This was a point of conflict in my early design brainstorming, because I liked the pacing and immersion of the leisure puzzlers. However, every time I tried to settle on removing action from the game, it felt incomplete. So I moved on to a different type of puzzler, one that was more physical and time sensitive.

In the first prototype I started with a very basic series of chambers and hallways that contain puzzles. This is what was implemented at the Global Game Jam, and I had the beginnings of what was essentially a Portal clone, with pressure plates and objects that could be pushed onto them.


Where it differed from Portal (besides having no portals!) is that some objects and parts of the map would be different between the players, in some cases requiring the players to communicate and discover the differences in order to complete the puzzle.

There would be furnished rooms with interactive objects, like a record player that would play music, light switches, and paintings. The player would need to interact with certain objects and in some cases complete a sequence in order to progress through the game.

With these ideas, the game was becoming more like Ib, where the core gameplay involved adventuring through the levels and discovery. There would be some action sequences, but the player would have to evade dangers as opposed to attacking them.

The game concept already sounded fun, and there were so many possibilities for puzzles. Yet, there were some things that didn’t feel right. I didn’t want the player to be totally defenseless, I wanted to let them fight back. I also needed something that gave the game some replay value after the puzzles were figured out, so my focus began to shift.

Early March Brain Dump


It’s been a while since I’ve written a post, so I’m forcing myself to start making short updates (although short is difficult). I’ve been caught up in a line of projects since June, and my free time has been slowly consumed by web applications and other things. There was an attempt to just take it easy and run a World of Warcraft guild, but that was short-lived. That fleeting moment was enjoyed while it lasted. Hopefully I can pick it up again sometime, but it seems I need to focus on some other areas of my life and career before I can invest in leisure. I strive to bring the two together, but that is going to take time and more planning.

These past twelve months have been really interesting and have given me some perspective. I got to experience projects as both a supporter and a leader, and while it hasn’t changed my opinions or position, it has made me appreciate how valuable communication, teamwork, and leadership are. This includes leaving the comfort of one’s own mind to understand another’s, as well as relying on trust when that is not possible. Trust doesn’t mean abandoning communication, but rather using a different mode: for example, communicating expectations while leaving the application to the other person. It may sound simple, but it shouldn’t be taken for granted.

The main project we worked on was successfully launched last month, followed by a rapid release of a sub-product a couple of weeks later. It took a little over six months from technical planning to completed implementation. There were some interesting challenges. Firstly, this project had a hard deadline, which we typically don’t have; there was no room to push out milestones. Secondly, we started from scratch and with a completely custom framework (although it was later replaced). And lastly, our project lead fell ill mid-project, at which point I took on the responsibility. Our team was amazing and pulled through when defeat was all but inevitable. There was a sense of perseverance and tenacity held by everyone that made the project a success; without those attitudes we would surely have been behind by a good month, if not more.

Anyway, more updates to come. I’ll be adding some posts about the framework we implemented for this latest endeavour, along with some other stuff I’ve been meaning to post.