RPG Slots Progress – Cocos2d Review


I started developing a slots game using the Cocos2d C++ SDK. Cocos2d is an open-source game development framework that supports C++, Objective-C, and JavaScript. Eventually, this game will evolve into a slots RPG, like King Cashing and King Cashing 2. For now it uses pretty generic reels and a match-3 mechanic that matches cells on the same row as well as cells on adjacent rows. For scoring, it keeps experience points, since this will transition to the RPG slots.

Source for the project can be found here. As of this writing, it’s in early development.

Like many game frameworks, Cocos2d has many helper functions that allow for quick game prototyping. Scenes are easy to construct, and assets, sprites, and audio can be added using built-in Cocos2d objects. It even supports adding custom shaders.

Extending Sprites

I quickly found that I needed to create custom objects that extended sprites. In this project there are two classes that extend cocos2d::Sprite: the reel and the HUD.

Grouping elements within sprites helped with organizing code and separation of concerns. I did run into strange memory errors when trying to add certain objects, such as cocos2d::Label, directly to the scene while also holding a pointer to them in the scene.


The Cocos2d C++ framework uses a smart-pointer technique to automatically destroy dynamically allocated objects when their internal reference count reaches 0. This relieves the pressure of remembering to destroy objects and worrying about pointer ownership, though cyclical dependencies still need to be avoided.

The built-in Cocos2d objects are automatically added to the autorelease pool, so there is no need to use the new keyword. In my project, I have an object that extends the Cocos2d sprite, so there's some boilerplate code I needed to add for my object to be added to the autorelease pool.

ReelSprite::create is a static method that follows the Cocos2d convention of constructing an object and adding it to the autorelease pool. mainSprite->autorelease() is the line that actually adds the object to the autorelease pool, so that it does not have to be manually destroyed.
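A minimal sketch of that boilerplate, assuming the sprite's texture is loaded from a file (the real ReelSprite has more to it than this):

class ReelSprite : public cocos2d::Sprite
{
public:
    static ReelSprite* create(const std::string& filename);
};

ReelSprite* ReelSprite::create(const std::string& filename)
{
    ReelSprite* mainSprite = new (std::nothrow) ReelSprite();
    if (mainSprite && mainSprite->initWithFile(filename))
    {
        // Hand the object to the autorelease pool so it is destroyed
        // automatically once its reference count drops to 0
        mainSprite->autorelease();
        return mainSprite;
    }
    // Initialization failed; clean up and return nothing
    delete mainSprite;
    return nullptr;
}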

Screen View to World Coordinates

I needed a map editor with more features than what I saw included in TILED back in August, so I decided to try my hand at creating a map editor. It's just an interactive grid, right? Not quite. At least not in the approach I took.

I started writing the tile editor with C++ and SDL. Implementing drag functionality was pretty easy since that was baked into the SDL API; however, I didn't want to build the UI widgets from scratch. Unfortunately, the existing UI libraries I found weren't compatible with SDL, so I had to pivot and use straight OpenGL and matrix math.

Because I was ditching the SDL framework, I had to implement my own drag logic, which is what I will discuss in this post.

Moving Objects with Mouse Picking

I needed the ability to select objects in 3D space, which led me to a technique called mouse picking. This technique utilizes ray casting, which is how you detect whether a line (a ray) intersects with something else.

The article “Mouse Picking with Ray Casting” by Anton Gerdelan helped explain the different planes/spaces and what they represented.

In order to move the objects in 3D space at a distance that matched the mouse movement, I had to transform the coordinates between screen and world spaces. When working with 3D coordinates, there are several spaces or planes that have their own coordinates.

A very simplified list of these spaces is:
Screen Space > Projection (Eye) Space > World Space > Model Space.


Anton Gerdelan’s Mouse Picking with Ray Casting

Fully understanding the transformation formula was a challenge for me. Normalization and calculating the inverse projection matrix tripped me up due to a combination of confusion and erroneous input.

The Solution

Here are some code examples of the final working solution.

Initialization of Projection, View, Model
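A sketch of the setup using GLM (the exact values in the project differ; the camera position and screen size here are placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::perspective, glm::lookAt

float screenWidth = 1024.0f, screenHeight = 768.0f; // placeholder window size

// Perspective projection: 45-degree field of view, window aspect ratio,
// near and far clipping planes
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                        screenWidth / screenHeight,
                                        0.1f, 100.0f);

// View matrix: camera at (0, 0, 10), looking at the origin, +Y up
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 10.0f),
                             glm::vec3(0.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));

// Model matrix: identity until the object is transformed
glm::mat4 model = glm::mat4(1.0f);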

View to World Coordinate Transformation
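And a sketch of the screen-to-world transformation itself, building on the matrices above (again GLM, with illustrative names):

// Convert a mouse position in screen space to a ray direction in world space
glm::vec3 screenToWorld(float mouseX, float mouseY,
                        float screenWidth, float screenHeight,
                        const glm::mat4& projection, const glm::mat4& view)
{
    // Screen space -> normalized device coordinates (-1..1, Y flipped)
    float x = (2.0f * mouseX) / screenWidth - 1.0f;
    float y = 1.0f - (2.0f * mouseY) / screenHeight;

    // NDC -> homogeneous clip space, pointing down the -Z axis
    glm::vec4 rayClip(x, y, -1.0f, 1.0f);

    // Clip space -> eye space via the inverse projection matrix;
    // only X/Y matter here, so Z/W are reset to a forward direction
    glm::vec4 rayEye = glm::inverse(projection) * rayClip;
    rayEye = glm::vec4(rayEye.x, rayEye.y, -1.0f, 0.0f);

    // Eye space -> world space via the inverse view matrix
    // (the article's final normalize step is skipped, as noted below)
    return glm::vec3(glm::inverse(view) * rayEye);
}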

At first this algorithm felt a bit magical to me. There were things going on I wasn't entirely wrapping my head around, and when I stepped through the algorithm I got lost at the inverse matrix multiplication. In addition, the "Mouse Picking" article normalizes the world-space values, which we don't need here.

An Introduction to React

I've been working with React a bit lately and wanted to document my experience and findings. Since there are quite a few "hello world"/getting-started guides out there already, I'll provide links to those and cover the key points I took away from articles and experience.

When React was announced around mid-2013, it looked like an interesting concept and seemed like it should be pretty significant, considering it was being maintained by Facebook. However, it fell off my radar soon after, as it was too early to use in any of my projects. Looking at where React stands now, along with its supporting libraries, I can see that the scene has matured and stabilized quite a bit.

What is React?

React is a JavaScript library created by Facebook that fulfills the functionality of the view in MV*C. It stands by itself and aligns with the "single responsibility" principle, the first of the five S.O.L.I.D object-oriented programming and design principles. Other concerns, such as routing, controllers, and models/stores, are separate patterns that exist outside of React.

Because of this separation, React can fit into other frameworks or be pieced together with other libraries to make a complete MV*C framework, such as Flux (see "Which Flux implementation should I use?" for the history/development of Flux), Redux, or Cerebral.

React is designed to be scalable and fast. Some patterns, like its root-level event listener, are intended to make React fast, while other features, such as the virtual DOM, use encapsulation to reduce or eliminate common problems in scaled web applications caused by conflicting class/ID names and DOM-manipulation side effects.

Also worth noting, React was split into react-dom and react-native to independently support browser and mobile apps. For this article, I'll be covering what is referred to as react-dom, which is React for the browser.

Learning Curve

Working with React and its ecosystem has been interesting. Given that you're already familiar with JavaScript, React itself isn't that difficult to comprehend, though it does take some time to understand the concepts React is founded on and the problems it is addressing. For example, the data flow design may take some getting used to.

The challenges with React lie more in the ecosystem, when third-party libraries and the build process come into play. However, the libraries are worth getting familiar with, and the build process should become less confusing as things settle down. For now, there are plenty of boilerplate starter projects on GitHub.

Getting Started

The first step is to take it easy and not get overwhelmed. There is a collection of technologies that makes up a React stack, and they don't have to be learned all at once. I recommend focusing on React and JSX first; understanding the core concepts and working with plain React is a good start.

React’s homepage goes over the fundamentals of building a React app from scratch without diving too far into the tool chain. A starter kit can be downloaded from their getting started guide.

Most examples require compiling the HTML-like markup called JSX into JavaScript. This can be done through a browser version of the Babel library called Babel Standalone. However, I've found this makes debugging difficult, because I can't set accurate breakpoints. Compiling the application outside of the browser as part of the build process is recommended.
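For reference, the in-browser approach looks roughly like this (the CDN paths are illustrative):

<script src="https://unpkg.com/react/dist/react.js"></script>
<script src="https://unpkg.com/react-dom/dist/react-dom.js"></script>
<script src="https://unpkg.com/babel-standalone/babel.min.js"></script>

<!-- type="text/babel" tells Babel Standalone to compile this JSX at runtime -->
<script type="text/babel">
  ReactDOM.render(<h1>Hello World</h1>, document.getElementById('app'));
</script>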

The following is a simple "Hello World" component defined with a pure function. For components that don't hold state (covered further down), pure functions are preferred over the factory/class approaches, such as React.createClass or extending React.Component.

See the Pen "React Hello World" by Justin Osterholt (@hattraz) on CodePen.
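The component in that pen is along these lines (a simplified sketch, assuming react and react-dom are already loaded):

// A stateless component as a pure function: props in, elements out
function HelloWorld(props) {
  return <h1>Hello, {props.name}!</h1>;
}

ReactDOM.render(<HelloWorld name="World" />, document.getElementById('app'));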

Out of Phase: Genre Considerations

This is part of a series for the Out of Phase game project that reflects on various stages, covering pros and cons of the creative process and implementation of the game. The first post can be found here.

Deciding what type of game you're creating can be difficult while the concept is in its infancy. There are so many ideas you want to use, and many of those may conflict with each other at first, requiring some compromise in order for them to gel.

In Out of Phase, there was a struggle to combine puzzle and action mechanics. I was trying to integrate elements from the two genres that I knew were successful by themselves, but incompatible with each other. To resolve this, I had to consider what experience I wanted to give the player and how the mechanics from each genre would contribute to that experience.

Some ideas I had to let go of, as they were too far from the vision, and other ideas had to be reworked to fit the core game concept. Here’s a reflection of that journey.


First, I'll start off with the kind of puzzler this is not, which is what I originally thought it was going to be.

Some of my favorite games are point-and-click puzzlers, such as Myst, or escape-the-room games, such as Crimson Room. In these types of games, the player can progress at their leisure. While there may be action sequences, they usually don't require interaction from the player, though there are some exceptions where the player must make a timed decision during the action sequence. Even then, the timing is typically pretty lax.

Games like Myst, Crimson Room, or even The 7th Guest are what I would consider leisure puzzlers. They are typically slower paced compared to an action game and focus more on immersion and an experience. Death in these games is pretty rare, and is based on a decision rather than an action, so the player is given a high level of safety while exploring the worlds. The focus of these games is on immersing the player in a fantasy world, and death or abrupt twitch mechanics tend to draw the player out.

While these games are fun, they weren't the style I was looking for. Instead, I wanted to go with something more real-time and physical, like Maniac Mansion or the more recent Ib. I wanted to give the player a different feeling of tension, where they may need to react fast to avoid getting injured or killed, which deviated from leisure puzzlers.

This was a point of conflict in my early design brainstorming, because I liked the pacing and immersion of the leisure puzzlers. However, every time I tried to settle on removing action from the game, it felt incomplete. So I moved on to a different type of puzzler, one that was more physical and time-sensitive.

In the first prototype I started with a very basic series of chambers and hallways that contain puzzles. This is what was implemented at the Global Game Jam, and I had the beginnings of what looked like a Portal clone, with pressure plates and objects that could be pushed onto them.


Where it differed from Portal (besides having no portals!) is that some objects and parts of the maps would be different between the players, in some cases requiring the players to communicate and discover the differences in order to complete the puzzle.

There would be furnished rooms with interactive objects, like a record player that would play music, light switches, and paintings. The player would need to interact with certain objects and in some cases complete a sequence in order to progress through the game.

With these ideas, this game was becoming more like Ib, where core gameplay involved adventuring through the levels and discovery. There would be some action sequences, but the player had to evade dangers, as opposed to attacking them.

The game concept already sounded fun, and there were so many possibilities for puzzles. Yet, there were some things that didn’t feel right. I didn’t want the player to be totally defenseless, I wanted to let them fight back. I also needed something that gave the game some replay value after the puzzles were figured out, so my focus began to shift.

Out of Phase: Global Game Jam 2015

This is the first post in a series that will reflect on the project from various stages, covering pros and cons of the creative process and implementation of the game. This review is a long time coming; it was originally started right after the 2015 Global Game Jam.

In January I broke a lull by participating in the annual 48-hour Global Game Jam. Prior to this jam, it had been some time since I had prototyped a game of my own. Game jams in general are a great opportunity to break creator's block and start fresh.

I started a project called Out of Phase. The original idea was to create a two player game that involved a series of puzzles contained within chambers, similar to Portal 2. There was a twist, where the environment was slightly different between the two players, requiring them to communicate between each other to solve the puzzles.

At the end of the 48-hour jam, I produced the first version. While not complete, it still gave an example of the general concept with a couple of puzzle examples. Two players were supported through a local co-op mode, where characters were toggled by hitting the tab key.

For this post, I’ll give a general overview of the tools and design ideas that took place at the Global Game Jam.


It’s a good rule of thumb in game jams not to build your own framework. Your focus is on producing a game, not a toolset. Phaser.io is a real snazzy HTML5 game engine/framework. It comes with a tilemap loader, collision detection, WebGL support, and a plethora of other goodies to make life easier.

Another guideline with jams is to know your toolset, so that you're not wasting time figuring out how to use the tools. I only had a little experience with Phaser.io prior to the jam, so I didn't follow this one completely. However, being very experienced with JavaScript and the concepts Phaser.io is built on, I was able to get started very quickly and iterate through ideas easily.

One problem I ran into early on was selecting the wrong physics engine for collision detection. This set me back a little, but it helped me learn the differences between the P2 and Arcade systems. In Phaser.io, you can actually have more than one system active, so they're not exclusive of each other.
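For example, both can be started side by side (Phaser 2 API; the sprite names are illustrative):

// Arcade for cheap bounding-box collisions, P2 for full rigid-body physics
game.physics.startSystem(Phaser.Physics.ARCADE);
game.physics.startSystem(Phaser.Physics.P2JS);

// Each sprite is then enabled under whichever system fits it
game.physics.arcade.enable(playerSprite);
game.physics.p2.enable(crateSprite);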

Keep your server running with monitoring services

In a world of 99.999% uptime, keeping a service running is a big deal. How do you compete? That is where monitoring and automated server management come into play.

It is a good idea to use both local and remote monitoring solutions, with the remote service acting as a fail-safe that sends out a notification when a website or service is unreachable or has poor latency.

With remote monitoring, there are many options that will scale to different needs. For example, I use 24×7 by Zoho for basic port monitoring. This service will send a notification if an app is no longer reachable from the internet. There are many monitoring services out there, so it would be worthwhile to search around and compare.

The next step is something that runs locally, has more granular monitoring, and will take action to resolve a problem when one is detected. Monit does just this. It is a daemon that runs on the server and monitors resources and processes. It has the ability to restart programs and send notifications under specific conditions, such as when memory or CPU consumption exceeds a given threshold or disk space runs low. It can also detect a continually failing application by tracking its PID.

Here are some configuration examples of Monit for Apache, MySQL, and SOLR. Comments have been added to describe what they do. Each example uses an alert directive, which requires a recipient to be configured. This is done by setting the following in the config file:

set alert example@email.com


## Custom Apache2 setup
check process apache2 with pidfile /var/run/apache2.pid
group www
start program = "/etc/init.d/apache2 start"
stop program = "/etc/init.d/apache2 stop"

# Send alert if Apache isn't listening to specified port
if failed host localhost port 80 then alert

# Restart daemon if child processes > 250
if children > 250 then restart

# Alert if load avg stays high with given criteria
if loadavg(5min) greater than 80 for 8 cycles then alert

# Stop trying to restart daemon if restarts aren't working
if 3 restarts within 5 cycles then timeout


## Custom MySQLD setup
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
group root
start program = "/etc/init.d/mysql start"
stop program = "/etc/init.d/mysql stop"

# Send alert if MYSQLD isn't listening to specified port
if failed host localhost port 3306 then alert


## SOLR Check
check process solr with pidfile /var/run/solr.pid
group root
start program = "/etc/init.d/solr start"
stop program = "/etc/init.d/solr stop"

# Send alert if SOLR isn't listening to specified port
if failed host localhost port 8983 then alert

# Restart daemon if SOLR isn't listening to specified port
if failed host localhost port 8983 then restart

# Stop trying to restart if restarts aren't working
if 5 restarts within 5 cycles then timeout

In each of these, Monit has at least one check that the app is listening on a designated port; if it is not, an alert is sent or a restart of the service is attempted. With Apache, if it is running too many child processes, the service is restarted to fix this. (Note: Apache's own configuration has settings that cap child processes/threads, which should help avoid triggering this check.) In some cases the service will be shut down if it is running hot for too long.


Keeping a daemon running, and gathering information about it before something goes wrong, is crucial to maintaining a quality application or service. Monitoring tools like 24×7 and Monit make this easier and are a must in any IT toolbelt.

Manage Your Daemons With Upstarts

I am finding the need for custom Linux service scripts more and more, in cases where a program I want to run in the background does not already have one for one reason or another. Maybe I'm compiling instead of using apt, or sometimes I am creating my own app.

In the past I've used the traditional init script format that lives in /etc/init.d. This proved to be tedious. Unless I already had the init script on hand, I would need to code it out in a bash script, like so:

#!/bin/sh
# Debug SMTP service init script
case "$1" in
    start)
        if [ -f /var/run/debugsmtp.pid ]; then
            echo "Service already running"
        else
            echo "Starting service..."
            python -m smtpd -n -c DebuggingServer localhost:25 &
            echo "$!" > /var/run/debugsmtp.pid
            echo "Service started"
        fi
        ;;
    stop)
        if [ -f /var/run/debugsmtp.pid ]; then
            PID=$(cat /var/run/debugsmtp.pid)
            kill -9 "$PID"
            rm /var/run/debugsmtp.pid
            echo "Service stopped"
        else
            echo "Service not running"
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0

As you can see, there are sections to handle different commands; in this case it is just start and stop. For other programs there may also be restart, status, and reload.

Start checks if the process is already running by testing whether a PID file exists. If it does, the service is assumed to already be running. If not, the script goes on to start the process and stuffs the process ID into a newly created PID file at the same location.

Stop is similar, in that it works off the PID file. It checks whether the process is running via the PID file, and if so, runs a kill command on the process.


Setting this up for multiple systems is a pain, and it is more technical than I would like to deal with on a routine basis. Specifically, managing processes with PIDs and using the kill command makes me a little nervous.

But there are other problems I could run into that expose shortcomings of init.d. One is that there is no easy way to base the daemon's start on the network interface being up, or the filesystem being ready. This would apply to applications such as web and SMTP servers.

Enter upstart.

Upstart came into play in 2006 (at least in the Ubuntu world). It is a replacement for the previously mentioned init system, where scripts are placed in /etc/init.d and /etc/rc*.d folders. It has a more accurate boot sequence through event-based startup, and takes less effort to implement because tasks like PID management are handled automatically.

Upstarts are configured with stanzas, two of those being start and stop, where you use runlevels or events (network up, filesystem ready) to define when the daemon should be started and shut down. When it's all set, it looks like this:

description "nginx http daemon"

# Start daemon when filestyem and network interface is up
start on (filesystem and net-device-up IFACE=lo)

# Stop daemon when runlevel is not specified levels
stop on runlevel [!2345]

# Daemon binary location
env DAEMON=/usr/sbin/nginx

# Daemon pid location
env PID=/var/run/nginx.pid

# Indicate daemon has child processes
expect fork

# Restart if daemon ends prematurely

# Max respwans
respawn limit 10 5

# Commands to run before daemon starts
pre-start script
if [ $? -ne 0 ]
then exit $?
end script

# Run daemon
exec $DAEMON


So there you have it. Upstarts make life easier by cutting your daemon initialization scripts in half, with more control and less time in the weeds. It's ideal for custom or compiled applications (packages often install their own init scripts) and removes the need to manually start a daemon every time you boot your machine.

A S.O.L.I.D Design

As mentioned in a previous post, I recently led the launch of a search-focused web application. It's time now to reflect a bit on the techniques and technology used.

From the start I was looking for a supporting framework that was conducive to rapid development without sacrificing stability, integrity, or consistency. I wanted a set of tools that not only allowed our developers to quickly build out features, but also avoided getting stuck "in the trenches" building out core functionality. My approach to accomplishing this was leveraging a collection of object-oriented principles, frameworks, and debugging tools. I'm going to break this up between those three, since they are each interesting and important, starting with the principles I used for the project.


For me, commonly accepted design theories are ideas put into practice that have been vetted and adopted by consensus. While it's still important to be innovative and a free thinker, I believe in standing on the shoulders of giants: building from what is known to work. This does not limit the ability to be creative or do something different; instead, it empowers you, since comprehending the principles lays out the benefits and the caveats to avoid.

Object-oriented design can be nebulous and feel vague, but this is the spirit of object-oriented architecture. Concepts are abstract and isolated, which allows them to be independently combined to make a whole. What's important to take away is that they are a means to an end: a solidly structured application that can be efficiently extended and maintained.

With that said, it is time to go over the concepts I went with, which can be placed into three groups: S.O.L.I.D, MVC, and ORM. Both MVCs and ORMs seem to follow the S.O.L.I.D pattern, so there is overlap, but that doesn't mean they are required to follow any of the S.O.L.I.D principles. As said before, object-oriented concepts are intended to be independently applied.


S.O.L.I.D has been around for about a decade. Practice of these concepts is prominent in many areas of application development. The easiest for me to identify are Java frameworks such as Struts and Spring, but I can also see partial application in front-end web development, starting with the separation of HTML, JavaScript, and CSS. More modern JavaScript frameworks have carried the torch, achieving full application, and in turn evolving web pages into full-fledged web applications. While the application I built was server-side oriented, these concepts may still be applied to front-end web applications.

To me, what this all means is that you have a system comprised of objects that each have a unique role. They play nice with each other, and don't get greedy and take over another component's role. This is an awesome design pattern, because it keeps roles encapsulated and extensible, and it reduces the chances of rogue code lying in wait. By separating out roles and keeping objects decoupled, it is much easier to build new features without modifying core code, which would result in retesting and bugs.


Organizing the three key layers of a web application is essential to keeping an orderly and reusable codebase. Logic can be separated into at least three basic "buckets" by role: Model, View, and Controller. This structuring isn't meant to be taken as absolute, and does not directly translate into a specific file structure. There is code that will fall outside the model, controller, and view roles, such as components that handle routing and security. Instead, like other OO principles, it is a set of guidelines to help achieve a better codebase.

A basic web example of this is moving database queries into code that lives under the model section of the application, instead of mixing them in with HTML. Another example of separating elements by role would be what became standard practice for front-end web development: the use of CSS and external JavaScript files.


The interface through which data is accessed can dictate many design factors of a web application. An ORM allows data to be accessed and manipulated as normal objects, with all business logic encapsulated within the data object instead of being strewn about the application. This means that data is accessed consistently throughout the application through a set of centralized objects and tools. While this concept introduces a level of complexity compared to straight queries or a light wrapper, the ability to work with data as a collection of objects is very powerful and clean.


This project was a test of how necessary it is to create a S.O.L.I.D application. The organization of code and separation of roles had enormous benefits that kept me sane. Our course was not without trial and error: there were times these principles weren't followed, and it resulted in fragile and inflexible components that haunted us later in the project.

There is a quote I like that goes, "There is no problem in computer science that cannot be solved by adding another layer of indirection, except having too many layers of indirection." There is a conundrum of simplicity vs. extensibility: adding that extra layer all depends on what the desired endgame for the application is, and it can be a difficult judgment call.

Game Jam #7

Reeeally late in posting this, but I'm determined to post about my game jam back in July, where the theme was time manipulation. The dynamic was a little different this time: I had an "idea person" to help move the creative process along and past blocks. I was paired up with Jacq, a sound engineer and creative mind. After a few iterations of rehashing the fundamentals of our game, we finally came up with a platformer that fit the theme. From this experience I had some takeaways to apply to my next jam.

Idea People are AWESOME
Having a person handle the brainstorming while you're coding has its benefits. It's easier to cut and run when hitting a wall, since the other person has already been thinking through alternatives, as opposed to wasting time on something that just isn't working because you don't have any better ideas.

Skillsets can become dusty
This was a frustrating lesson to learn. After not touching Flixel for a month and a half, working with basics like movement and sprite placement was more difficult than it should have been.

There’s an easier way to prototype
My toolset has been pretty low level. I use ActionScript, and while I utilize the Flixel framework with libraries, it still requires a lot of coding. At the end of the jam I surmised I would have developed my prototype faster with a prototyping framework such as Construct 2 or Stencyl. Both allow the rapid prototyping of platformers, such as this one, without coding.

For those interested, the source and demo can be found here:

Early March Brain Dump

It's been a while since I've written a post, so I'm forcing myself to start making short updates (although short is difficult). I've been caught up in a line of projects since June, and my free time has been slowly consumed whole by web applications and other things. There was an attempt to just take it easy and run a World of Warcraft guild, but that was short-lived. That fleeting moment was enjoyed while it lasted. Hopefully I can pick it up again sometime, but it seems I need to focus on some other areas of my life and career before I can invest in leisure. I strive to bring the two together, but that is going to take time and more planning.

These past twelve months have been really interesting and have given me some perspective. I got to experience projects as both a supporter and a leader, and while it hasn't changed my opinions or position, it has made me appreciate how valuable communication, teamwork, and leadership are. This includes leaving the comfort of one's mind to understand another, as well as relying on trust when that is not possible. Trust doesn't mean abandoning communication, but rather using a different mode: for example, communicating expectations, but leaving their application to the other person. It may sound simple, but it shouldn't be taken for granted.

The main project we worked on was successfully launched last month, followed by a rapid release of a sub-product a couple weeks later. It took a little over six months from technical planning to completed implementation. There were some interesting challenges. Firstly, this project had a hard deadline, which we typically don't have; there was no room to push out milestones. Secondly, we started from scratch with a completely custom framework (although it was later replaced). And lastly, our project lead fell ill mid-project, at which point I took on the responsibility. Our team was amazing and pulled through when defeat was all but inevitable. There was a sense of perseverance and tenacity held by everyone that made the project a success; without those attitudes we would surely have been behind by a good month, if not more.

Anyway, more updates to come. I’ll be adding some posts about the framework we implemented for this latest endeavour, along with some other stuff I’ve been meaning to post.