Meteor 1.3 Official Testing Support - First Impressions

Meteor 1.3 has dropped, and with it comes the highly anticipated official answer to the testing story. This post outlines a few first impressions of the implementation; it won't rehash the details or steps already covered in the Meteor Testing Guide.

Goals in a testing solution

Before we dig in, it’s important we set a context for what we’re looking for in a test solution. In our use of tests on our projects, there are a few things that are important to our productivity:

  • Ability to write tests at varying levels of isolation (unit, integration, end-to-end)
  • Tests run quickly, milliseconds rather than minutes
  • Tests can run from the command line

This is the perspective that I’ll be coming from when talking about the Meteor 1.3 testing support. We’ll touch on each point in following sections of this post.

Meteor’s Test Modes

Meteor 1.3 supplies us with two application test modes: test mode and full-app mode.

Test Mode

According to the Meteor Guide, this is the primary way that we’ll be testing our application. It loads the app in a special state as follows:

  1. Doesn't eagerly load any of our application code as Meteor normally would
  2. Does eagerly load any file in our application (including in imports/ folders) whose name looks like *.test[s].* or *.spec[s].*
  3. Sets the Meteor.isTest flag to true
  4. Starts up the test driver package

This configuration makes the files whose names match *.test[s].* or *.spec[s].* the primary executables. They are loaded with their dependencies and nothing else. Each test file therefore imports the modules it wants to test, and describes the tests for those modules.

This is a great improvement over the out-of-the-box testing experience of past Meteor versions! Up until this point, the most prominent testing solutions have centered around end-to-end and heavy integration-style tests (or package testing). With the introduction of module imports and a sound application file structure, we gain the ability to pull in only the code we want to test. We can then use standard unit testing methods to replace its usual dependencies with mocks and stubs, and exercise the code in isolation to verify its behaviour. A true unit testing implementation!

Because we are not limited in what our test files import, we also have the ability to build integration tests across multiple pieces of application code. Going too far and including too much of the application in a single test is a subject for testing best practices, and while it's a slippery slope that each project will need to navigate, the fact that integration tests are now as possible as unit tests in Meteor 1.3 is another win for this version.

Full App Mode

Full App mode is the other testing state provided by the latest Meteor release, in which:

  1. It loads test files matching *.app-test[s].* and *.app-spec[s].*
  2. It does eagerly load our application code as Meteor normally would.

This configuration allows your tests to execute in the context of a running app, verifying sets of behaviour that span both the client and server sides of the application.
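For reference, full-app mode is started with the same command plus the --full-app flag (the driver package shown here is just one option):

```shell
meteor test --full-app --driver-package practicalmeteor:mocha
```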

Some things are simply best tested in the context of a running app, but UI-driven end-to-end tests would be too costly for them (in run time and maintenance). This is where tests written for full-app mode will shine. Tom Coleman has written a short article on full-app mode. Some of it is a rehash of the full-app section of the guide, and while the article suggests this is a whole new style of testing (which I would tend to disagree with), it does present an interesting idea: full-app mode also enables 'white box' acceptance tests. Using Cucumber, you can (and should) first drive your tests from a services/API layer as white-box tests as well. That being said, the Meteor 1.3 full-app test mode is going to be a valuable option for ensuring our apps are tested in context.

About Test Driver Packages

Currently, the meteor test command requires a driver package to be specified at run time. This driver package has two responsibilities: it launches a mini application that runs your tests, and it outputs the results to some kind of interface. Test driver packages are grouped by the type of output they provide: web reporters vs. console (command line) reporters.

The test runner recommended by the Meteor Guide is Mocha. The associated driver packages for running tests written in Mocha are practicalmeteor:mocha for web reporting and dispatch:mocha-phantomjs for console reporting. The examples in the guide and the associated Meteor Todos sample application are all written in the Mocha syntax.

For day-to-day development, you're expected to use the practicalmeteor package to provide a web report view of your test suite. The prescribed usage is to run the meteor test command on a second port so that you can continue to work with your app in one browser tab while monitoring the state of your tests in another. While this seems like an ideal workflow, it isn't a viable solution on its own for projects that require continuous integration.
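That workflow boils down to something like the following (the port number is arbitrary, assuming the app itself runs on the default 3000):

```shell
# App keeps running via `meteor` on port 3000; tests report on port 3100.
meteor test --driver-package practicalmeteor:mocha --port 3100
```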

Important note on the recommended packages: if you want the command line feature for CI, you'll have to remove practicalmeteor:mocha in order for dispatch:mocha-phantomjs to work properly. The two Mocha driver packages are mentioned in separate sections of the Guide, outlining separate usages. It read to me like the two packages would play nicely together, letting me use the web reporter in development and the console reporter only on the CI server. Unfortunately, that's not the case, so it's good to be aware.
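The swap itself is just the standard package commands:

```shell
meteor remove practicalmeteor:mocha
meteor add dispatch:mocha-phantomjs
```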

If you're one of the many who have used or prefer Jasmine as the test runner, a recent update announced that sanjo:jasmine is ready for 1.3. Be forewarned, though: the announcement specifies that CI is not yet supported by this package, so it may not be feasible for teams that require continuous integration.

Regarding the speed of tests

Because the driver package is still a mini app, and it still boots up in a Meteor context, there is an up-front cost to kicking off the meteor test command. Once the web reporter package is running and watching the codebase, however, the test results refresh fairly quickly as the code is updated. On some occasions the test driver just ends up in a bad state and the process needs to be killed and restarted. So the tests _do_ run relatively quickly, but each instance of this quit-and-restart adds to the overall run time of tests over the course of a project. For someone used to being able to run individual tests instantly in a console, this may seem fairly long, but as an initial take on testing support it is acceptable.

As it stands, the recommended packages present a good, well supported, and well documented starting point.

Meteor’s Testing Guide

The Meteor Guide has been mentioned multiple times in this writeup for good reason. The Testing section, as with the entire guide, is a reasonable introduction to testing in general and a great Meteor-specific resource. The guide clearly presents different areas of concern around testing and provides specific examples of how to handle them in the context of the Todos sample application.

The sections defining unit, integration, and acceptance tests are concise, and the accompanying examples provide a lot of clarity and boilerplate that can be applied to any Meteor application. Another strong point of the guide is that it also touches on the subject matter needed to properly write and maintain tests: the techniques and tools for isolating tests, for generating test data (and cleaning it up afterwards), and for mocking data are all briefly addressed.

A relative novice to testing could spend a lot of time studying this guide in conjunction with the Todos app, and use it to build a solid basis for testing that they can apply to their own project and build on further.

Continuous integration

The last bullet in the Goals in a testing solution section was "Tests can run from the command line". This requirement, while helpful to some (like me) during development depending on their workflow, exists specifically so that we can run our tests on a CI server.

dispatch:mocha-phantomjs provides us with a driver package that we can trigger from the command line, and setting it up on Semaphore CI (our choice for continuous integration) is a relatively simple process.
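On the CI server, the build step reduces to a single command; --once makes the test run exit with a status code instead of watching for changes:

```shell
meteor test --once --driver-package dispatch:mocha-phantomjs
```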

The ability to run the test suite on a CI server ensures that we are always running tests for every push and merge on the repo. It’s another quality gateway that lets us deploy code to staging and production with confidence.

Final Impressions

Overall, version 1.3 of Meteor provides a huge improvement over what was previously possible in the testing space. Of the three benchmarks we started with at the top of the post, all are served well enough by this release (speed could still be better; we'll explore that in future posts). The support provided for the testing process in this version, the meteor test and full-app modes themselves, as well as the Meteor Guide's section on Testing, will all play important roles in building a strong testing culture in Meteor.

In upcoming posts, I'll start to dig into other aspects of testing. We'll talk about Cucumber and Chimp to support end-to-end tests. We'll review acceptance tests, their role in the workflow, and how to use Cucumber (and maybe full-app mode) to support them. We'll talk about the difference between acceptance tests and end-to-end tests. And we'll take a concrete look at a BDD workflow in Meteor, leveraging the newly available test modes. These things and more; stay tuned!
