Posted on September 17, 2012.

If you are not familiar with XBMC add-ons, they are Python scripts that act as an interface between a website and XBMC. They either interact with a website's API or scrape its HTML pages, then pass the video content to XBMC via its Python bindings.

Since most add-ons are basically website scrapers, they tend to break fairly often. Currently, when an add-on breaks I'm notified in a multitude of ways: email, forum posts and private messages, GitHub issues, and mailing list emails. The problem is that I sometimes don't find out my add-on is broken until a week or two after the fact!

The answer to this problem is automated testing. This post covers basic examples of unit testing and integration testing (IT). Our unit tests will run every time we make a commit, but will not interact with any external APIs or websites. Our integration tests will run on every commit and also on a daily schedule; their purpose is to verify that our code works properly against the live website or API (and therefore that the remote source has not changed).


This post assumes you have a basic knowledge of developing XBMC add-ons. We’ll use my xbmc-vimcasts add-on as an example in this post. This add-on uses my xbmcswift2 framework, as it facilitates easier command line execution and testing. To run our unit and IT tests in an automated fashion, we’re going to use Travis CI since they offer simple github integration.

Writing our unit tests

I like to keep my add-on tests in resources/tests. I also use python -m unittest discover to run the tests, so make sure to create resources/tests/__init__.py as well (discover only finds tests inside packages). Let's create our first test file in resources/tests:

import sys, os
import unittest
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))

from addon import plugin, strip_tags, unescape_html, clean, get_json_feed, index

Since XBMC add-ons aren't installed Python packages, we need to do some path trickery in order to import our add-on as a module. (This is also why xbmcswift2 recommends using the if __name__ == '__main__' guard in your addon.py.)

There are a few basic functions in addon.py that don't go out to the remote API. These are perfect candidates for unit tests.
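These helpers are small string utilities. Here is a plausible sketch of what they might look like (the actual implementations in the add-on may well differ — this is just to show the kind of pure function that makes a good unit-test target):

```python
import re
from xml.sax.saxutils import unescape as _unescape

def strip_tags(html):
    """Remove anything that looks like an HTML tag, keeping only the text."""
    return re.sub(r'<[^>]*>', '', html)

def unescape_html(html):
    """Turn entities such as &gt; and &amp; back into literal characters."""
    return _unescape(html)

def clean(html):
    """Strip tags first, then unescape any remaining entities."""
    return unescape_html(strip_tags(html))
```

Because these functions take a string and return a string with no network access, they can be tested exhaustively with known input/output pairs.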

We’ll create a class to hold all of our unit tests:

class TestNonViews(unittest.TestCase):

    def test_strip_tags(self):
        known_values = [
            ('<b>Hello</b>', 'Hello'),
            ('Hello', 'Hello'),
            ('<b>', ''),
            ('<b><a href="#">Hello</a></b>', 'Hello'),
        ]
        for inp, expected in known_values:
            self.assertEqual(strip_tags(inp), expected)

    def test_unescape_html(self):
        known_values = [
            ('&gt;jon&dave', '>jon&dave'),
        ]
        for inp, expected in known_values:
            self.assertEqual(unescape_html(inp), expected)

    def test_clean(self):
        known_values = [
            ('<b>jon &amp; dave</b>', 'jon & dave'),
        ]
        for inp, expected in known_values:
            self.assertEqual(clean(inp), expected)

At the bottom of our test file, we need to include the call to execute the tests:

if __name__ == '__main__':
    unittest.main()

Now, let’s run our tests using python -m unittest discover. You should see output similar to the following:

(xbmc-vimcasts)jon@lenovo ~/Code/xbmc-vimcasts (master) $ python -m unittest discover
.....
----------------------------------------------------------------------
Ran 5 tests in 2.951s

OK


Now that we have working unit tests, we should make it a habit to always run the test suite before committing any changes.

Writing our IT tests

Our IT tests will look very similar to our unit tests. The main difference is that they cross the network boundary, and the scope of each test is much larger than a single function or “unit”. In order to test that our code handles the API correctly, we basically end up testing the remote API as well!

class ITTests(unittest.TestCase):

    def test_api(self):
        resp = get_json_feed()
        self.assertTrue('episodes' in resp.keys())
        self.assertTrue(len(resp['episodes']) > 35)

    def test_index(self):
        items = index()
        self.assertTrue(len(items) > 35)
        expected = {
            'info': {
                'plot': u'Vim\u2019s list feature can be used to reveal hidden characters, such as tabstops and newlines. In this episode, I demonstrate how to customise the appearance of these characters by tweaking the listchars setting. I go on to show how to make these invisible characters blend in with your colortheme.\n'},
            'is_playable': True,
            'label': u'#1 Show invisibles',
            'path': u'',
            'thumbnail': u''
        }
        self.assertEqual(items[0], expected)

For the assert statements in our IT tests, we have to strike a balance in how specific the tests are. We don't want to update our tests every time the website posts a new video, but we do want to be notified if the format of the site changes. If we are scraping content that changes constantly, it's hard to match exact string values and URLs. Typically, when I expect a list of items in a response, I assert with > or < rather than == when testing the number of items returned. For instance, there are about 36 videos at the time of writing, so I verify that there are at least 35 items in the response, which leaves a little wiggle room: if more videos are added, the test still passes; if a single video is removed for some reason, it also still passes; but if there are 0 or 1 videos, something is obviously wrong and the test fails.
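To make the trade-off concrete, here are the two assertion styles side by side (the episode list is just a stand-in for a live API response):

```python
import unittest

class AssertionGranularityExample(unittest.TestCase):
    """Illustrative only -- 'episodes' stands in for a real feed."""

    def test_too_brittle(self):
        # Fails as soon as the site publishes video #37 -- avoid this style.
        episodes = ['episode %d' % i for i in range(36)]
        self.assertEqual(len(episodes), 36)

    def test_resilient(self):
        # Leaves wiggle room: new videos don't break it, but an empty or
        # drastically shrunken feed (a likely scrape failure) still does.
        episodes = ['episode %d' % i for i in range(36)]
        self.assertTrue(len(episodes) > 35)
```

The brittle version fails on normal growth of the site; the resilient version only fails when something actually looks broken.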

In the above example, I expect the order of the returned items to always be sorted, so I can verify one of the items from the list.

Travis integration

If you host your add-on on GitHub, you can use Travis CI to run your test suite after every commit. See the Travis CI getting started guide for instructions on setting up your repository for testing.

The simplest .travis.yml file for our add-on is:

language: python
python:
  - "2.7"
# command to install dependencies, e.g. pip install -r requirements.txt --use-mirrors
install: pip install --use-mirrors xbmcswift2==0.1 beautifulsoup
# command to run tests, e.g. python test
script: python -m unittest discover

Once you’ve enabled Travis, you can go ahead and commit the .travis.yml to your repository. If everything goes well your tests should execute automatically (and hopefully pass).

Running tests daily

Now we have the ability to automatically run our unit/IT tests after every commit. Our unit tests only need to run on each commit; once the code works, it works until we change it. To be effective, however, our IT tests should run at least once a day.

There currently is not a scheduling feature for Travis CI, but with a little work we can get close. If you go to your repository admin page on github and select the Travis hook, you’ll notice a test hook button. If you click this button, your Travis tests will be run with the current branch’s latest commit. The test hook button can be activated via the github API. So, we’re going to set up a cron job that will activate the hook once a day.
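Under the hood, the test-hook button is just a call to the GitHub v3 API — at the time of writing, an authenticated POST to /repos/:owner/:repo/hooks/:id/tests. A rough sketch in Python (the function names and the owner argument are my own, for illustration):

```python
try:
    from urllib2 import Request, urlopen            # Python 2
except ImportError:
    from urllib.request import Request, urlopen     # Python 3

GITHUB_API = 'https://api.github.com'

def hook_test_url(owner, repo, hook_id):
    # Endpoint that asks GitHub to re-deliver the hook's last payload
    return '%s/repos/%s/%s/hooks/%s/tests' % (GITHUB_API, owner, repo, hook_id)

def trigger_hook(owner, repo, hook_id, token):
    # An empty POST body plus a token Authorization header fires the hook
    req = Request(hook_test_url(owner, repo, hook_id), data=b'')
    req.add_header('Authorization', 'token %s' % token)
    return urlopen(req)
```

Anything that can make this request on a schedule — a cron job being the obvious choice — gives us daily test runs.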

I wrote a simple script, githubhooks, that allows you to list and test your activated github hooks. You can install it with pip install githubhooks. You’ll also need your github OAuth token, which can be created by following these directions.

Once you have your token, you can set it as an environment variable GITHUB_TOKEN, or pass it as an argument to the script. Each repository hook has an id, which we'll use to test the hook. To list our hook ids by repository, we'll run the following:

(github-hooks)jon@lenovo ~/Code/github-hooks $ list
.. Listing Hooks by Repository ..
xam:414859 (travis)
xbmc-vimcasts:416593 (travis)
xbmcswift:78826 (twitter)
xbmcswift2:239826 (readthedocs)

Now, let’s kick off a hook:

(github-hooks)jon@lenovo ~/Code/github-hooks $ --hook xbmc-vimcasts:416593 run
Triggering hook travis for xbmc-vimcasts... OK

You can verify on Travis CI that your tests were executed. The final step is to run the test command via cron every 24 hours. If your tests fail for any reason, Travis will send you an email. Now you can rest easy knowing you'll be notified within 24 hours if your add-on breaks (assuming you've written good tests)!
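The cron entry itself might look like the following (the binary name, token, and paths are placeholders for whatever your setup uses; the hook id is the one from the listing above):

```
# Trigger the Travis hook for xbmc-vimcasts every day at 04:00
0 4 * * * GITHUB_TOKEN=your_token_here github-hooks --hook xbmc-vimcasts:416593 run
```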