Posts in Category: Engineering

Out of Phase: Genre Considerations

This is part of a series for the Out of Phase game project that reflects on various stages, covering pros and cons of the creative process and implementation of the game. The first post can be found here.

Deciding what type of game you’re creating can be difficult while the concept is in its infancy. There are so many ideas you want to use, and many of them may conflict with each other at first, requiring some compromise in order for them to gel.

In Out of Phase, there was a struggle to combine puzzle and action mechanics. I was trying to integrate elements from the two genres that I knew were successful by themselves, but incompatible with each other. To resolve this, I had to consider what experience I wanted to give the player and how the mechanics from each genre would contribute to that experience.

Some ideas I had to let go of, as they were too far from the vision, and other ideas had to be reworked to fit the core game concept. Here’s a reflection of that journey.

Puzzles

First, I’ll start with the kind of puzzler this is not, even though it’s what I originally thought it was going to be.

Some of my favorite games are point-and-click puzzlers, such as Myst, or escape-the-room games such as Crimson Room. In these types of games, the player can progress at their leisure. While there may be action sequences, they usually don’t require interaction from the player, though there are some exceptions where the player must make a timed decision during the action sequence. Even then, the timing is typically pretty lax.

Games like Myst, Crimson Room, or even The 7th Guest are what I would consider leisure puzzlers. They are typically slower paced compared to an action game and focus more on immersion and an experience. Death in these games is pretty rare, and is based on a decision rather than action, so the player is given a high level of safety while exploring the worlds. The focus of these games is on immersing the player in a fantasy world, and death or abrupt twitch mechanics tend to draw the player out.

While these games are fun, they weren’t the style I was looking for. Instead, I wanted to go with something more real-time and physical, like Maniac Mansion or the more recent Ib. I wanted to give the player a different feeling of tension, where they may need to react fast to avoid getting injured or killed, which deviated from leisure puzzlers.

This was a point of conflict in my early design brainstorming, because I liked the pacing and immersion of the leisure puzzlers. However, every time I tried to settle on removing action from the game, it felt incomplete. So I moved on to a different type of puzzler, one which was more physical and time sensitive.

In the first prototype I started with a very basic series of chambers and hallways that contain puzzles. This is what was implemented at the Global Game Jam, and I had the beginnings of what was like a Portal clone, with pressure plates and objects that could be pushed onto them.


Where it differed from Portal (besides no portals!) is that some objects and parts of the maps would be different between the players, in some cases requiring the players to communicate and discover the difference in order to complete the puzzle.

There would be furnished rooms with interactive objects, like a record player that would play music, light switches, and paintings. The player would need to interact with certain objects and in some cases complete a sequence in order to progress through the game.

With these ideas, this game was becoming more like Ib, where core gameplay involved adventuring through the levels and discovery. There would be some action sequences, but the player had to evade dangers, as opposed to attacking them.

The game concept already sounded fun, and there were so many possibilities for puzzles. Yet, there were some things that didn’t feel right. I didn’t want the player to be totally defenseless; I wanted to let them fight back. I also needed something that gave the game some replay value after the puzzles were figured out, so my focus began to shift.

Out of Phase: Global Game Jam 2015

This is the first post in a series that will reflect on the project from various stages, covering pros and cons of the creative process and implementation of the game. This review is a long time coming; it was originally started right after the 2015 Global Game Jam.

In January I broke a lull by participating in the annual 48-hour Global Game Jam. Prior to this jam, it had been some time since I had prototyped a game of my own. Game jams in general are a great opportunity to break creator’s block and start fresh.

I started a project called Out of Phase. The original idea was to create a two player game that involved a series of puzzles contained within chambers, similar to Portal 2. There was a twist, where the environment was slightly different between the two players, requiring them to communicate between each other to solve the puzzles.

At the end of the 48-hour jam, I produced the first version. While not complete, it still demonstrated the general concept with a couple of example puzzles. Two players were supported through a local co-op mode, where characters were toggled by hitting the tab key.

For this post, I’ll give a general overview of the tools and design ideas that took place at the Global Game Jam.

Framework

It’s a good rule of thumb in game jams not to build your own framework. Your focus is on producing a game, not a toolset. Phaser.io is a real snazzy HTML5 game engine/framework. It comes with a tilemap loader, collision detection, WebGL support, and a plethora of other goodies to make life easier.

Another guideline with jams is to know your toolset so that you’re not wasting time figuring out how to use the tools. I only had a little experience with Phaser.io prior to the jam, so I didn’t follow this one completely. However, being very experienced with JavaScript and the concepts that Phaser.io is built on, I was able to get started very quickly and iterate through ideas easily.

One problem I ran into early on was selecting the wrong physics engine for collision detection. This set me back a little, but helped me learn the differences between the P2 and Arcade systems. In Phaser.io, you can actually have more than one system active, so they’re not exclusive to each other.

Keep your server running with monitoring services

In a world of 99.999% uptime, keeping a service running is a big deal. How do you compete? That is where monitoring and automated server management come into play.

It is a good idea to use both local and remote monitoring solutions, with the remote service being a fail-safe that will send out a notification when a website or service is unreachable or has poor latency.

With remote monitoring, there are many options that will scale to different needs. For example, I use 24×7 by Zoho for basic port monitoring. This service will send a notification if an app is no longer reachable from the internet. There are many monitoring services out there, so it would be worthwhile to search around and compare.

The next step is something that runs locally, has more granular monitoring, and will take action to resolve a problem when it is detected. Monit will do just this. It is a daemon that runs on the server and monitors resources and processes. It has the ability to restart programs and send notifications under specific conditions, such as when memory or CPU consumption exceeds a given threshold or disk space runs low. It can also detect a continually failing application by tracking its PID.

Here are some configuration examples of Monit for Apache, MySQL, and SOLR. Comments have been added to describe what they do. Each example uses an alert directive, which requires a recipient to be configured. This is done by setting the following in the config file:

set alert example@email.com

APACHE

## Custom Apache2 setup
check process apache2 with pidfile /var/run/apache2.pid
group www
start program = "/etc/init.d/apache2 start"
stop program = "/etc/init.d/apache2 stop"

# Send alert if Apache isn't listening to specified port
if failed host localhost port 80 then alert

# Restart daemon if child processes > 250
if children > 250 then restart

# Alert if load avg stays high with given criteria
if loadavg(5min) greater than 80 for 8 cycles then alert

# Stop trying to restart daemon if restarts aren't working
if 3 restarts within 5 cycles then timeout

MySQL

## Custom MySQLD setup
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
group root
start program = "/etc/init.d/mysql start"
stop program = "/etc/init.d/mysql stop"

# Send alert if MYSQLD isn't listening to specified port
if failed host localhost port 3306 then alert

SOLR

## SOLR Check
check process solr with pidfile /var/run/solr.pid
group root
start program = "/etc/init.d/solr start"
stop program = "/etc/init.d/solr stop"

# Send alert if SOLR isn't listening to specified port
if failed host localhost port 8983 then alert

# Restart daemon if SOLR isn't listening to specified port
if failed host localhost port 8983 then restart

# Stop trying to restart if restarts aren't working
if 5 restarts within 5 cycles then timeout

In each of these, Monit has at least a check that the app is listening on its designated port; if it is not, an alert is sent, and in the SOLR example a restart is also attempted. With Apache, the service is restarted if it is running too many child processes. (Note: Apache has its own setting, MaxClients/MaxRequestWorkers, that caps child processes and should help avoid triggering this check.) Monit will also alert if the load average stays high for too long, and will give up on restarting (timeout) if repeated restarts aren’t working.

Conclusion

Keeping a daemon running and gathering information about it before something goes wrong is crucial to maintaining a quality application or service. Monitoring tools like 24×7 and Monit make this easier and are a must-have in any IT toolbelt.

Manage Your Daemons With Upstarts

I am finding the need for custom Linux service scripts more and more, for cases where a program I want to run in the background does not already come with one for one reason or another. Maybe I’m compiling from source instead of using apt, or sometimes I am creating my own app.

In the past I’ve used the traditional init script format that lives in /etc/init.d. This proved to be tedious. Unless I already had the init script on hand, I would need to code it out in a bash script, like so:

#!/bin/bash
#
# Debug SMTP Service
#
case "$1" in
    start)
        if [ -f "/var/run/debugsmtp.pid" ]
        then
            echo "Service already running"
        else
            echo "Starting service..."
            python -m smtpd -n -c DebuggingServer localhost:25 &
            echo "$!" > /var/run/debugsmtp.pid
            echo "Service started"
        fi
        ;;
    stop)
        if [ -f "/var/run/debugsmtp.pid" ]
        then
            PID=`cat /var/run/debugsmtp.pid`
            kill -9 "$PID"
            rm /var/run/debugsmtp.pid

            if [ ! -f "/var/run/debugsmtp.pid" ]
            then
                echo "Service stopped"
            fi
        else
            echo "Service not running"
        fi
        ;;
    *)
        echo "Debug SMTP Service"
        echo $"Usage: $0 {start|stop}"
        exit 1
esac
exit 0

As you can see, there are sections to handle different commands; in this case it is just start and stop. With other programs there may also be restart, status, and reload.

Start checks if the process is already running by checking if a PID file exists. If it does, then the service is assumed to already be running. If not, the script goes on to start the process and stuffs the process ID into a newly created pid file at the same location.

Stop is similar in that it also works off the PID file: if the file exists, the script kills the recorded process and removes the file; otherwise it reports that the service is not running.

Phew.

Setting this up for multiple systems is a pain, and it is more technical than I would like to be doing on a routine basis. Specifically, managing processes with PIDs and using the kill command makes me a little nervous.

There are also shortcomings of init.d that I could run into. One is the ability to base the daemon’s start on the network interface being up, or the filesystem being ready. This would apply to applications such as web and SMTP servers.

Enter upstart.

Upstart came into play in 2006 (at least in the Ubuntu world). It is a replacement for the previously mentioned init system, where scripts are placed in the /etc/init.d and /etc/rc*.d folders. It has a more accurate boot sequence through event-based startup, and takes less effort to implement because tasks like PID management are handled automatically.

Upstart jobs are configured with stanzas. Two of those are start and stop, where you use runlevels or events (network up, filesystem ready) to define when the daemon should be started and shut down. When it’s all set up, it looks like this:

description "nginx http daemon"

# Start daemon when filesystem and network interface are up
start on (filesystem and net-device-up IFACE=lo)

# Stop daemon when runlevel is not specified levels
stop on runlevel [!2345]

# Daemon binary location
env DAEMON=/usr/sbin/nginx

# Daemon pid location
env PID=/var/run/nginx.pid

# Indicate daemon has child processes
expect fork

# Restart if daemon ends prematurely
respawn

# Max respawns
respawn limit 10 5

# Commands to run before daemon starts
pre-start script
    # Validate the nginx configuration; abort startup if the check fails
    $DAEMON -t
    status=$?
    if [ $status -ne 0 ]; then
        exit $status
    fi
end script

# Run daemon
exec $DAEMON

Conclusion

So there you have it. Upstart jobs make life easier by cutting your daemon initialization scripts in half while giving you more control and keeping you out of the weeds. It’s ideal for custom or compiled applications (packages often install their own init scripts) and removes the need to manually start a daemon every time you boot your machine.

A S.O.L.I.D Design

As mentioned in a previous post, I recently led the launch of a search-focused web application. It’s time now to reflect a bit on the techniques and technology used.

From the start I was looking for a supporting framework that was conducive to rapid development without sacrificing stability, integrity, or consistency. I wanted a set of tools that not only allowed our developers to quickly build out features, but also helped them avoid getting stuck “in the trenches” building out core functionality. My approach to accomplishing this was to leverage a collection of object-oriented principles, frameworks, and debugging tools. I’m going to break this up between those three, since they are each interesting and important, starting with the principles I used for the project.

Principles

For me, commonly accepted design theories are ideas put into practice that have been vetted and adopted by consensus. While it’s still important to be innovative and a free-thinker, I believe in standing on the shoulders of giants: building from what is known to work. This does not limit the ability to be creative or do something different; instead, understanding the principles lays out the benefits to gain and the caveats to avoid.

Object-oriented design can be nebulous and feel vague, but this is the spirit of object-oriented architecture. Concepts are abstract and isolated, which allows them to be independently combined to make a whole. What’s important to take away is that they are a means to an end: a solidly structured application that can efficiently be extended and maintained.

With that said, it is time to go over the concepts I went with, which can be placed into three groups: S.O.L.I.D, MVC, and ORM. Both MVCs and ORMs tend to follow the S.O.L.I.D principles, so there is overlap, but that doesn’t mean they are required to follow any of them. As said before, object-oriented concepts are intended to be independently applied.

S.O.L.I.D

S.O.L.I.D has been around for about a decade. Practice of these concepts is prominent in many areas of application development. The easiest for me to identify are Java frameworks such as Struts and Spring, but I can also see partial application in front-end web development, starting with the separation of HTML, JavaScript, and CSS. More modern JavaScript frameworks have carried the torch, achieving full application, and in turn evolving web pages into full-fledged web applications. While the application I built was server-side oriented, these concepts can still be applied to front-end web applications.

To me, what this all means is you have a system comprised of objects that each have a unique role. They play nice with each other, and don’t get greedy and take over another component’s role. This is an awesome design pattern, because it keeps roles encapsulated and extensible. It reduces the chances of rogue code lying in wait. And by separating out roles and keeping objects decoupled, it is much easier to build new features without modifying core code, which would otherwise mean more testing and more bugs.

MVC

Organizing the three key layers of a web application is essential to keeping an orderly and reusable codebase. Logic can be separated into at least three basic “buckets” by role: Model, View, and Controller. This structuring isn’t meant to be taken as absolute, and does not directly translate into a specific file structure. There is code that will fall outside the model, controller, and view roles, such as components that handle routing and security. Instead, like other OO principles, it is a set of guidelines to help achieve a better codebase.

A basic web example of this is moving database queries into code that lives under the model section of the application, rather than mixing it in with HTML. Another example of separating elements by role is what became standard practice for front-end web development: the use of CSS and external JavaScript files.

ORM

The interface through which data is accessed can dictate many design factors of the web application. An ORM allows data to be accessed and manipulated as normal objects. All business logic is encapsulated within the data object, instead of being strewn about the application. This means that data is accessed consistently throughout the application through a set of centralized objects and tools. While this concept introduces a level of complexity compared to straight queries or a light wrapper, the ability to work with data as a collection of objects is very powerful and clean.
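To make that concrete, here is a hypothetical ActiveRecord-style sketch. The Book class, its find() helper, and the Database::connection() call are all illustrative, not from any particular ORM library:

// Hypothetical ActiveRecord-style model: behavior lives with the data
class Book {

    public $id;
    public $title;
    public $author;

    // Illustrative finder; a real ORM would generate or inherit this
    public static function find($id) {
        $pdo  = Database::connection(); // assumed centralized connection helper
        $stmt = $pdo->prepare('SELECT * FROM books WHERE id = ?');
        $stmt->execute(array($id));
        return $stmt->fetchObject(__CLASS__);
    }

    public function citation() {
        // Business logic stays with the object instead of being strewn about
        return sprintf('%s, "%s"', $this->author, $this->title);
    }

}

// The rest of the application works with objects, not SQL
// $book = Book::find(1);
// echo $book->citation();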

Summary

This project was a test of how necessary it was to create a S.O.L.I.D application. The organization of code, and separation of roles had enormous benefits that kept me sane. Our course was not without trial and error. There were times these principles weren’t followed, and it resulted in fragile and inflexible components that haunted us later in the project.

There is a quote that I like that goes, “There is no problem in computer science that cannot be solved by adding another layer of indirection, except having too many layers of indirection.” There is a conundrum of simplicity vs. extensibility. Adding that extra layer all depends on what the desired endgame is for the application, and it can be a difficult judgement call.

PHPUnit with YAML

After diving into some database integration testing, I found that my data model was incompatible with the XML format. Hash (#) characters are a no-no, as they’re against the XML specification. Well darn, what is a coder to do? YAML to the rescue.

First, what is YAML? Besides it being fun to say, it’s a “human friendly data serialization language”. HFDSL doesn’t sound as cool as YAML, and just like PHP, you shouldn’t look too far into the acronym, lest you are fond of infinite loops.

“YAML is a recursive acronym for “YAML Ain’t Markup Language“. Early in its development, YAML was said to mean “Yet Another Markup Language“,[3] but was retronymed to distinguish its purpose as data-oriented, rather than document markup.” – Wikipedia

Fantastic, moving on…

PHPUnit uses YAML as a supported format for creating datasets. Tables, columns, and rows are split up using colons, dashes and spacing to define your database objects. For example, let’s say we have a dataset of books.

books:              # table name
  -                 # begin a new record, followed by key: value pairs
    id: 1
    title: Moby Dick
    author: Herman Melville
  -
    id: 2
    title: The Hobbit
    author: J. R. R. Tolkien

This will add two rows to the books table with their respective data.
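To pull this dataset into a PHPUnit database test case, it can be loaded from getDataSet() with the YamlDataSet class that ships with the database extension. A minimal sketch, with an illustrative file path:

// Inside a test case extending PHPUnit_Extensions_Database_TestCase
protected function getDataSet() {
    // Load the YAML dataset shown above (path is illustrative)
    return new PHPUnit_Extensions_Database_DataSet_YamlDataSet(
        dirname(__FILE__) . '/_files/books.yml'
    );
}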

YAML can definitely be easier to read in some scenarios, as there is simply less of it. After using it for a couple of months, it has grown on me as well. It is a nice format for config files when XML is overkill.

Unit Testing: Advanced

I decided to hunker down and get familiar with unit and integration testing a while back, and have finally reached a point where I’m using it with production code. I started off with the base understanding that unit testing’s purpose was to test code, but wasn’t sure in what way. In this post I will give a brief overview of what I setup and helpful tidbits I picked up along the way.

Unit tests are commonly written to ensure a unit of code (e.g. a class or function) works as intended. There are also integration tests, which are meant for groups of units or “black box” integration, such as a database where only the interface is being tested, and we don’t care about anything beyond that.

Prerequisites

For database integration testing, there are two extra prerequisites: a MySQL database and the PDO library.

  • PHP
  • PEAR
  • PHPUnit
  • mySQL
  • PDO Library
PHPUnit uses the PDO library to handle setup and teardown (cleanup) of the database, and other PDO-supported databases can be used in place of MySQL. A database is not required if you are not doing database integration.

Setup

File Structure

In addition to the basic file structure described in the first installment, there are a few additional parts to the structure.

Folders that contain PHPUnit setup files are prepended with an underscore to separate those from the test files. As you will see, there are two classes folders, one with the underscore and one without. Folders with underscores are used to configure or extend the framework and do not contain any tests.

In addition to the specially named folders, there are two files used for setup: bootstrap.php and configuration.xml.

Bootstrap

The bootstrap runs before the tests and is intended to set up the global environment. This is not to be confused with setting up test-specific items that are meant to be sanitized for each test; that logic should be placed in the setUp and tearDown functions, which will be discussed later.

For my project, I needed to alter the include path and autoload method. Setting the include path was tricky, as paths are relative to wherever the unit test is being executed from, and this may vary if your tests don’t all live at a single folder level.

Additionally, my PHP classes have a different file naming convention from the norm. For this reason I added logic in the bootstrap to handle the include paths by using spl_autoload_register. The native function file_exists does not automatically check the include path, so I broke support for the normal naming convention by doing this, but have not run into a problem (yet!).

Note: When overriding __autoload, spl_autoload_register should be used instead.
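For reference, a stripped-down bootstrap.php along these lines might look like the following. The naming convention and folder layout here are placeholders for whatever your project actually uses:

// bootstrap.php -- global environment for all tests (paths are illustrative)

// Make the include path independent of where phpunit is invoked from
set_include_path(
    dirname(__FILE__) . '/../classes' . PATH_SEPARATOR . get_include_path()
);

// Custom autoloader for a non-standard file naming convention
spl_autoload_register(function ($class) {
    // Hypothetical convention: class Foo_Bar lives in foo_bar.class.php
    $file = strtolower(str_replace('_', DIRECTORY_SEPARATOR, $class)) . '.class.php';
    foreach (explode(PATH_SEPARATOR, get_include_path()) as $dir) {
        // file_exists does not check the include path, so walk it manually
        if (file_exists($dir . DIRECTORY_SEPARATOR . $file)) {
            require_once $dir . DIRECTORY_SEPARATOR . $file;
            return;
        }
    }
});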

To tell PHPUnit to use your bootstrap, you must use:
--bootstrap /path/to/bootstrap.php

Configuration File

PHPUnit has the ability to load an XML configuration file. Settings in that configuration file are loaded into globals, which can be used to access that data from within the tests.

Example:

<?xml version="1.0" encoding="UTF-8" ?>
<phpunit>
    <php>
        <var name="DB_DSN" value="mysql:dbname=dbname;host=localhost" />
        <var name="DB_USER" value="dbuser" />
        <var name="DB_PASSWD" value="dbpass" />
        <var name="DB_DBNAME" value="dbname" />
    </php>
</phpunit>
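Each var entry ends up in PHP’s $GLOBALS array, so the credentials can be read anywhere in the test code:

// Values from configuration.xml are exposed through $GLOBALS
$dsn  = $GLOBALS['DB_DSN'];    // "mysql:dbname=dbname;host=localhost"
$user = $GLOBALS['DB_USER'];
$pass = $GLOBALS['DB_PASSWD'];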

Database Setup

Tests need a stable database environment to produce predictable results. While there is some setup needed to get PHPUnit to work with your database, there are built-in functions to return the database to the state it was in before each test was run.

I used the configuration file to load my database credentials, as shown in the PHPUnit documentation.

Test Setup and Tear Down

In addition to the standard setup and teardown methods, there are additional methods to handle database connections and data cleanup before each test.

The setup/teardown process with a database is a little more complicated than with plain PHP objects. Each test should begin and end with a clean slate, therefore truncating the database tables runs before anything else. The truncation is done automatically, however the system needs to know what it’s connecting to and what it’s touching.

getSetUpOperation

There is a caveat with the truncation process, at least with MySQL 5.5.16: foreign key constraints. When PHPUnit sends the command to truncate a table, MySQL will have nothing to do with it while the constraints are in place, and the request errors out. Luckily, I was able to find some code that overrides this behavior so that we can move along to the next thing.
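The override I ended up with looked roughly like this (reconstructed as a sketch; the parent class names come from the PHPUnit database extension). It simply switches MySQL’s foreign key checks off around the standard truncate:

// Truncate operation that temporarily disables MySQL foreign key checks
class TruncateOperation extends PHPUnit_Extensions_Database_Operation_Truncate {

    public function execute(PHPUnit_Extensions_Database_DB_IDatabaseConnection $connection,
                            PHPUnit_Extensions_Database_DataSet_IDataSet $dataSet) {
        $connection->getConnection()->query('SET FOREIGN_KEY_CHECKS = 0');
        parent::execute($connection, $dataSet);
        $connection->getConnection()->query('SET FOREIGN_KEY_CHECKS = 1');
    }

}

// In the database test case, use it as part of the setup operation
protected function getSetUpOperation() {
    return new PHPUnit_Extensions_Database_Operation_Composite(array(
        new TruncateOperation(),
        PHPUnit_Extensions_Database_Operation_Factory::INSERT(),
    ));
}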

getConnection
A method for obtaining a database connection, either a new or an existing one. It is here that the settings from configuration.xml are used, as hard-coding credentials gets messy.
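A minimal getConnection() sketch using the values from configuration.xml, assuming the PDO/MySQL setup described above:

// Reuse one PDO instance across tests and hand PHPUnit a wrapped connection
private static $pdo = null;

protected function getConnection() {
    if (self::$pdo === null) {
        self::$pdo = new PDO($GLOBALS['DB_DSN'], $GLOBALS['DB_USER'], $GLOBALS['DB_PASSWD']);
    }
    return $this->createDefaultDBConnection(self::$pdo, $GLOBALS['DB_DBNAME']);
}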

getDataSet
Now that we have our connection established, the database needs some data to test against. getDataSet is intended just for this, and is used to insert data from a source into the database.
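In the simplest case this can point at a flat XML file, though a YAML dataset works just as well. A sketch, with an illustrative file path:

// Seed data loaded into the database before each test (path is illustrative)
protected function getDataSet() {
    return $this->createFlatXMLDataSet(dirname(__FILE__) . '/_files/seed.xml');
}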

Stubs and Mock Objects

Stubs and Mock objects are useful for testing out variations in input/output, and assuring that methods within a class are being called appropriately.

Definitions taken from PHPUnit:

Stubs – The practice of replacing an object with a test double that (optionally) returns configured return values is referred to as stubbing. You can use a stub to “replace a real component on which the SUT depends so that the test has a control point for the indirect inputs of the SUT. This allows the test to force the SUT down paths it might not otherwise execute”.

Mocking – The practice of replacing an object with a test double that verifies expectations, for instance asserting that a method has been called, is referred to as mocking.

Testing a class that takes a parameter in __construct and then calls methods that require mocking took a couple of extra steps. For example, let’s say we have this class:

class Foo {

    public $myVar;

    function __construct($myVar) {
        $this->myVar = $myVar;
        $this->doSomething();
    }

    function doSomething() {
        // Code
    }

}

function testFoo() {
    // Mock only doSomething() and disable the original constructor
    // (5th argument) so expectations can be set before it runs
    $stub = $this->getMock('Foo', array('doSomething'), array(), '', false);
    $stub->expects($this->once())
        ->method('doSomething')
        ->will($this->returnValue('foo'));
    // Invoking the constructor manually should call doSomething() once
    $stub->__construct('bar');
}

First, the original constructor needs to be disabled, then the method in question is mocked, and finally the constructor is invoked manually.

Another scenario I ran into was testing an abstract class. An abstract class cannot be instantiated by itself; it needs another class to extend it and define the abstract functions. PHPUnit allows for the mocking of abstract classes so that their “concrete methods” can be tested.
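A short sketch of what that looks like, using a hypothetical abstract class:

// Hypothetical abstract class with one concrete and one abstract method
abstract class Shape {

    abstract public function area();

    public function describe() {
        return 'Area is ' . $this->area();
    }

}

function testDescribeUsesArea() {
    // getMockForAbstractClass fills in the abstract methods for us
    $shape = $this->getMockForAbstractClass('Shape');
    $shape->expects($this->any())
        ->method('area')
        ->will($this->returnValue(12));

    // The concrete method can now be tested against the stubbed value
    $this->assertEquals('Area is 12', $shape->describe());
}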

One last note:

According to the PHPUnit documentation: “final, private and static methods cannot be stubbed or mocked. They are ignored by PHPUnit’s test double functionality and retain their original behavior.”

However, there is a workaround for private/protected methods and attributes.
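The workaround goes through PHP’s Reflection API, which can lift the visibility restriction for the duration of a test. A sketch, using a hypothetical class:

// Hypothetical class with a private method we want to exercise directly
class Calculator {

    private function add($a, $b) {
        return $a + $b;
    }

}

function testPrivateAdd() {
    // setAccessible() allows the private method to be invoked via reflection
    $method = new ReflectionMethod('Calculator', 'add');
    $method->setAccessible(true);

    $this->assertEquals(5, $method->invoke(new Calculator(), 2, 3));
}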

Conclusion

So there you have it, a brief look at PHPUnit.

Unit Testing: Basics

I decided to get serious with unit/integration testing, and have found the practice to be well worth it. I started off with the base understanding that the purpose of unit testing was to test code, but wasn’t sure in what way. I dove in, and here I am now documenting my findings.

I have broken up this subject into two parts: basic and advanced use. Basic is theory and simple unit testing information, while advanced covers globalized test configuration and setup, and database integration.

In the development life cycle, unit testing can exist before or after coding, depending on which development approach is being taken. Some are in the practice of creating unit tests before they begin writing code for their application, while other models have unit testing as a post-coding task. While the variations of development models are interesting, they fall outside the scope of this post.

Unit tests are commonly written to ensure a unit of code (e.g. a class or function) works as intended. There is also integration testing, which is typically done after unit testing and is meant for grouped units of code or “black box” integration, such as a database where the interface is being tested and not the functions behind that interface.

In addition to ensuring code is working as desired, tests act as documentation, which contributes to the often neglected documentation step of the development life cycle.

Now that we have an understanding as to what these tests are and what they’re meant for, let’s create a test.

Prerequisites

My environment was setup with:

  • PHP
  • PEAR
  • PHPUnit
PHPUnit doesn’t necessarily need to be installed from PEAR, although that is how I did it.

Creating a Test Class

Folder Structure & Naming Conventions

Tests typically reside in a separate folder at the root of the project; this way the files are isolated from the production codebase, and it is easy to exclude test files.

File and class names follow a *Test.php naming convention.

Function names should either follow the test* pattern or use the @test annotation.

Examples:

public function testNullTypeInConstruct() {
    // Code here
}


/**
* @test
*/
public function functionToTestSomethingElse() {
    // Code here
}

Writing out a descriptive name for each test method makes it easier to keep track of what does what at a glance, especially when using features like testdox, which changes method names into a human-readable format.

Skeleton Generator

A “skeleton” test class can be created automatically based on an existing class by using the skeleton generator. This does not generate all test functions in one shot, but it does get you started off on the right foot.

Test Setup and Tear Down

Maintaining a sanitized testing environment is key. PHPUnit includes methods for setting up and removing remnants of previous tests. The two main methods are named setUp and tearDown. The names are self-explanatory: setUp runs prior to each test and is intended for initializing variables, while tearDown is for cleanup.

Non-global variables are wiped out after each test, so PHP cleanup requires little to no effort, as variables are automatically moved into garbage collection and dumped at the end of the script. Only in resource-intensive tests should there be any need to consider extra cleanup steps.

Annotation

Annotations can be used to alter the behavior of a test function, such as indicating that an exception is expected to be raised in a test. For example, an expected exception can be declared with the @expectedException annotation or the setExpectedException() method.

Annotations live inside PHP comments. There was an oddity I came across, which is worth noting. I had to use a double asterisk at the beginning of the comment. So I make sure to use /** instead of /* to begin my comments.
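For example, a test expecting an exception can be written either way (the Account class and the exception type here are placeholders):

/**
 * @expectedException InvalidArgumentException
 */
public function testRejectsNegativeValues() {
    new Account(-5); // hypothetical class that throws on negative input
}

// Equivalent, using the setter instead of the annotation
public function testRejectsNegativeValuesWithSetter() {
    $this->setExpectedException('InvalidArgumentException');
    new Account(-5);
}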

Asserting

Here we have the test itself; everything else was setup. An assertion is a declaration that something should be as defined, or in the case of expected exceptions, that it should not be. There is a wide variety of predefined assertions that can be used on booleans, strings, objects, and arrays. A full list can be found in the PHPUnit documentation.

Putting It All Together

// MyClass.php -- the class under test
class MyClass {
    public $foo = 42;
}

// MyClassTest.php -- the test class
require_once dirname(__FILE__) . '/../../classes/MyClass.php';

class MyClassTest extends PHPUnit_Framework_TestCase {
    protected $object;

    protected function setUp() {
        // Fresh instance before every test
        $this->object = new MyClass;
    }

    protected function tearDown() {
        // Nothing to clean up here; garbage collection handles it
    }

    function testFoo() {
        // assertEquals(expected, actual)
        $this->assertEquals(42, $this->object->foo);
    }

}

What’s next?

Here you have the information to build a very simple unit test. Methods can be tested to assure they behave correctly and return expected data regardless of input. In the next post, we take a look at advanced features and working with integration testing.

Shorthand JavaScript Techniques

(This is for coders familiar with the JavaScript language. Information on more basic JavaScript usage can be found at sites like w3schools.com.)

Summary

Keeping code standardized can be made easier through JavaScript shorthand. Here we will be looking at several techniques that will make code more readable, more flexible, and overall easier to maintain.

Declarations
Conditions
Mathematic Operations
Anonymous Functions

Declarations

Variables

Instead of declaring each variable individually, they can be placed on the same line with a comma separating them.

var foo1, foo2, foo3, foo4;

Arrays:

var foo_arr = ['bar1', 'bar2', 'bar3', 'bar4'];
window.alert(foo_arr[0]); //bar1

Objects:

var foo_obj = {name: 'bar', some_prop: 'test'};
window.alert(foo_obj.name); //bar

Conditions

condition ? true_logic : false_logic;

Shorthand conditional statements can be coupled with an assignment, allowing a single line of code to be used for simple conditional logic.

var foo = bar ? 1 : 0;

Assigning a default value when encountering an empty variable (null, undefined, 0, false, empty string) can also be shortened from this

if(foo) {
    bar = foo;
} else {
    bar = 'Default Value';
}

to this

bar = foo || 'Default Value';

Mathematic Operations

Handling math operations can be shortened by using the following syntax:

foo = 5;

foo++;    // Increase by one
foo--;    // Decrease by one
foo -= 2; // Decrease by two
foo += 2; // Increase by two
foo *= 3; // Multiply by three
foo /= 2; // Divide by two

Anonymous Functions

Assigning a function to a variable:

var my_func = function() { alert('Hello World'); };
my_func();

Inline anonymous functions:

var foo = {
    name: 'bar',
    fnc:  function() {
        alert('Hello World');
    }
};

Common Usage

Combining the use of shorthand results in blocks of code that appear more elegant and easier to read.

var foo = [
    {name: 'apple', prop2: 'test2'},
    {name: 'orange', prop2: 'test2'}
];

Instead of:

function fruit(type, cultivar)
{
    this.type = type;
    this.cultivar = cultivar;
}
var foo = Array(
    new fruit('Apple', 'Fuji'),
    new fruit('Apple', 'Gala')
);

As you can see, using shorthand removes “noise” and reduces the amount of written code, both of which make for easier reading. When rummaging through scripts that contain several hundred lines of code, removing any excess code makes life easier.