Fixing Docker on Linode (Linux v4.18.16)

This week, as I had some downtime after PyCon Canada, I started working on resolving all the issues that I had postponed. One of these involved applying security updates to my Linode server and rebooting it. However, when I did so… I noticed that the Rookeries site went down. When I logged into the server, I quickly found the problem: Docker refused to start after the kernel update.

As this bug report on Docker for Linux says, there is an issue with the latest Linode kernel when it comes to OverlayFS.

This causes the containerd service that docker-ce depends on to fail to start. When looking at the logs (using sudo journalctl -xe), you’ll see an error along the lines of:

modprobe: ERROR: ../libkmod/libkmod.c:514 lookup_builtin_file() could not open builtin file '/lib/modules/4.18.16-x86_64-linode118/modules.builtin.bin'
modprobe: FATAL: Module overlay not found in directory /lib/modules/4.18.16-x86_64-linode118)

Thankfully there is a workaround for this problem. Following the instructions from that report, you need to add an override configuration for the containerd service:

$ mkdir -p /etc/systemd/system/containerd.service.d/
$ cat << EOF > /etc/systemd/system/containerd.service.d/override.conf
[Service]
ExecStartPre=
EOF
$ systemctl daemon-reload
$ systemctl restart docker

Anyways, I hope this helps if you run into the same situation.

Surfacing after PyCon Canada 2018

Uff! What a weekend! For the past few weeks, I did not write any updates because I helped organize PyCon Canada 2018! And present a talk… because apparently life is not exciting enough to do one or the other… Actually, for the next two days I am coordinating the coding sprint portion of the conference, which I am very much looking forward to!

It was totally worth it. Even though I could have used more sleep, the conference went off without any real problems. I met a number of very interesting people, and while I didn’t get to see many of the talks, I thoroughly enjoyed helping out. Big thanks to all the organizers, sponsors and volunteers for making PyCon Canada 2018 a hit!

And I’ll be back to a (semi-)regular schedule of blogging in the next few days.

To Make or Not to Make – Using cargo make for Rookeries v0.12.0

I was pleasantly surprised when my last blog post about migrating to Rust’s integration tests really took off on Twitter. I did not quite expect that much interest. 🙂

Using cargo-make

I recently continued with my exploration of Rust through Rookeries (my attempt at a static site generator/backing API server). This time I worked on switching over from using invoke and GNU make to using a nice build system called cargo-make. Overall I am quite happy with the result. To give you a taste of how cargo-make describes build tasks and environment variables, I included a short snippet from the Rookeries Makefile.toml below:

[env]
SERVER_PORT = "5000"

[tasks.build-js]
description = "Build JS frontend."
command = "npm"
args = ["run", "build"]

[tasks.run-dev-server]
description = "Run a Rookeries dev server."
command = "cargo"
args = ["run", "--", "run", "${SERVER_PORT}"]
dependencies = ["build-js"]

Much like invoke and Make, you can specify dependencies. Unlike Make, there is no weird tab-based syntax, nor any implicit behaviour that requires a lookup in the GNU Make manual. Just a simple TOML file. Environment variables can be specified in standalone .env files, passed in, or specified in the Makefile.toml itself, and variables specified during a run override the defaults. cargo-make also lets you use conditional logic, but I did not need that for my purposes.

For Rookeries, I ended up creating tasks that run locally, plus variants for the Dockerized build setup that I have in CircleCI. I also ended up using the scripting support that cargo-make provides to prepare the database for the integration tests and to run them.
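
To give a rough idea of that scripting support, here is a minimal sketch of what such tasks could look like. The task names, the rookeries_test database and the api test target are hypothetical illustrations, not the actual entries in the Rookeries Makefile.toml:

[tasks.prepare-test-db]
description = "Create the test database in a local CouchDB instance."
script = [
    "curl -X PUT http://localhost:5984/rookeries_test"
]

[tasks.integration-tests]
description = "Run the API integration tests after preparing the database."
command = "cargo"
args = ["test", "--test", "api"]
dependencies = ["prepare-test-db"]

A nice side effect of this setup is that a single cargo make integration-tests invocation handles both the database preparation and the test run, while variables such as SERVER_PORT can still be overridden at invocation time.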

My Thoughts on CouchDB for Rookeries and Rust

I feel I cheated a bit towards the end, especially by using a few curl commands to instantiate the CouchDB database and to run the API integration tests. Maybe I should have written a proper CouchDB library and utility. However, I decided against it, simply because I want to move away from CouchDB for Rookeries, so I may not want to invest that much time in maintaining a proper database client if I end up switching over to PostgreSQL via diesel.rs. Originally, I started using CouchDB because I use it heavily at my work at Points. As time goes on, I find myself more and more reluctant to use CouchDB, just because of all the issues I’ve experienced with it at work. The CouchDB crates I saw on crates.io did not inspire me, so I wrote my own CouchDB module in Rust. But I don’t want to maintain that code just for myself, so if anyone actually needs a proper CouchDB crate or CLI based on the code I have for Rookeries, and wants me to maintain it, please reach out to me.

Update – Sofa Library

When I originally wrote this blog post, I did not know about the nice CouchDB crate called sofa. Had I known about it, I would not have bothered with my own implementation.

New Release of Rookeries – Now 100% Rust

Finally, since this migration was a bit of work, I decided to release a new version of Rookeries. Since the invoke setup was the last remaining Python code, Rookeries is now written completely in Rust (plus the JS frontend, naturally). The Rookeries Docker image is available on Docker Hub. The project is still very much in flux, and what I work on next depends on what I need to do to hook up Rookeries to one of my Gatsby-powered sites.

Writing Integration Tests in Rust + Releasing Rookeries v0.11.0

As part of my overall changeover of Rookeries from Python to Rust, I rewrote the suite of integration tests for the server API. To celebrate the successful transition, I released version 0.11.0 of Rookeries, whose tests are now pure Rust!

I found the rewrite not too cumbersome, thanks to the wonderful guide on integration tests in the Rust book. I did miss pytest’s fixtures, which make testing really easy, especially fixtures that run only once per test suite. That said, a single session-wide setup makes it difficult to run tests in parallel. And while I ran into some inconsistent tests, I had the exact same kind of problems in Python. Basically, my data model for Rookeries doesn’t work well for singleton data like a single site. In the future, I plan on having a single Rookeries instance manage multiple different sites, so the point about not having a single test setup is ultimately moot.

Building a common setup module turned out to be more work than I imagined. I could not use macros because of the package setup. Also, since each integration test binary is compiled individually, some functions in the common module are simply never called, which leads to dead-code warnings.
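
For what it is worth, here is a minimal sketch of how such a shared tests/common/mod.rs module can be laid out. The helper bodies are illustrative placeholders rather than the actual Rookeries code, and the allow attribute is one way to silence the dead-code warnings for test binaries that do not call every helper:

// tests/common/mod.rs
#![allow(dead_code)] // not every test binary calls every helper

use reqwest::Url;

/// Base URL of the API server under test.
pub fn api_base_url() -> Url {
    Url::parse("http://localhost:5000/").unwrap()
}

/// Ensure a test site document exists before exercising the API.
pub fn test_site() {
    // e.g. PUT a fixture site document into the test database
}

Each integration test file then pulls the module in with mod common; and calls common::api_base_url() and friends, as in the test below.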

What made the transition easy was using the reqwest crate, along with heavy use of serde_json’s json! macro. Comparing the Python and Rust versions of the tests side by side, they look remarkably similar.

Rust Version src:

#[test]
fn test_authenticated_user_cannot_modify_site_using_bad_request() {
    test_site();
    let api_base_uri = common::api_base_url();
    let client = Client::new();
    let auth_token = auth_token_headers();
    let response = client
        .put(api_base_uri.join("/api/site").unwrap())
        .body("")
        .header(Authorization(auth_token))
        .send()
        .unwrap();

    assert_bad_request_response(response);
}

fn assert_bad_request_response(mut response: Response) {
    let expected_response_json = json!({
        "error": {
            "status_code": 400,
            "message": "Bad request",
        }
    });

    assert_eq!(response.status(), StatusCode::BadRequest);
    let actual_json_response = response.json::<Value>().unwrap();
    assert_eq!(actual_json_response, expected_response_json);
}

Python Version src:

def test_authenticated_user_cannot_modify_site_using_bad_request(
        auth_token_headers, api_base_uri, test_site):
    response = requests.put(
        url=f'{api_base_uri}/api/site',
        data='',
        headers=auth_token_headers,
    )
    assert_bad_request_response(response)

def assert_bad_request_response(response):
    expected_response_json = {
        'error': {
            'status_code': http.HTTPStatus.BAD_REQUEST.value,
            'message': mock.ANY,
        }
    }

    assert response.json() == expected_response_json
    assert response.status_code == http.HTTPStatus.BAD_REQUEST

I am pretty happy with how everything turned out overall. My next bit of Rookeries work will be migrating away from invoke and make to cargo make.

Rookeries v0.10.0 – Rust Re-write

I just rewrote Rookeries in Rust, and the latest version is now available as a Docker image on Docker Hub. (This is why I have not responded to emails in a bit… I’ve been doing a lot of thinking about what I want to do next.)

So why a rewrite? Ultimately I decided to change the direction of Rookeries as a project (making it more of a static site generation tool + headless CMS). I also wanted to improve my knowledge of Rust. (A programming language that I believe is quietly revolutionizing computing bit by bit. I really need to write about it sometime.) But to show that there is a reason for my madness:

Traditional CMS

I realized that my approach to Rookeries was heavily inspired by WordPress. And WordPress represents the best in class (in terms of ease of use) for traditional CMSs. A traditional CMS is an application that controls all aspects of a website: pulling in data that it organizes inside a database, applying logic to manage the workflows around that data, and creating and managing the user interface that ultimately shows the data. The traditional CMS has to do a lot of things. That usually means the CMS developer needs to bake in a lot of assumptions and constraints to ship said CMS. And when those assumptions no longer hold, it takes a lot of effort to change the codebase.

In the case of WordPress (and similarly styled CMSs), the server has to do a lot of work to build a page. A WordPress site consists of both the site itself and its admin console. While there has been work on updating the theme management and the page/post editor, ultimately there are limits to the kinds of UIs that WordPress can support. WordPress also has a massive plugin marketplace, because the core developers cannot know what every website will need. (Nor does it make sense to build in everything for every possibility.) For a modern WordPress-powered site to stand out, it needs a custom theme and often a half-dozen plugins. Naturally, having so many moving parts from different vendors (some theme and plugin developers are better than others) means there is a greater chance of security issues. The routine upgrade of themes, plugins and the WordPress core is a rite that one has to perform diligently and regularly.

It would be impossible for Rookeries to compete with WordPress head on, just given the number of hours I would need to sink in to make that a reality. Nor do I think that is desirable given the project’s current direction.

The Alternative of Static Site Generation and API Driven Sites

The assumption that the server must handle most of the layout and display work does not hold when dealing with modern browsers. Modern browsers can run rich multimedia experiences and real-time applications. Running intensive applications such as 3D games is now possible, and will become commonplace. Ultimately the web is now a platform that application developers can target directly.

As a result, many web agencies are turning to the JAMstack and having their websites act as full-blown applications of their own. The applications either talk to a headless CMS acting as an API, or forego that entirely with static site generation. Probably the best example of this is Gatsby for generating sites built in React, which I am currently using to build out the Amber Penguin Software site. I also plan on eventually converting my other existing WordPress sites to use Gatsby or a similar tool. These tools, as other web agencies have found, separate the complex logic of a frontend app from the backend and its data. This means more flexibility when building out a site. It also means that the backend CMS can be hardened and simplified to avoid many security issues. In fact, with static site generation the web app can be so removed from its CMS and database that attacking them through the site becomes largely irrelevant. Such a site can also have better performance, and take advantage of caching and CDNs.

The Future of Rookeries

While there are a number of headless CMSs and static site generators out there, I feel that the overall experience of working with them needs some improvement. There are no really good headless CMS systems written in Python or Rust, and I do think there are some really unpleasant rough edges around running a static site. Hence I plan on reworking Rookeries to help me build a CMS that can power my various sites. At the moment, version 0.10.0 has feature parity with the previous version of Rookeries. I still need to rewrite some of the admin and testing tooling in Rust, to avoid the need for Python. I also need to figure out a nice way to expose a flexible API to my sites. Fortunately I have enough examples to keep me busy for a bit.

Qt + QML Experiments

At the end of last year, I took a bit of a hiatus from blogging and my other projects. I found that I needed a break from my current projects, and to work on something totally unrelated. One of the things I try to do each year is to challenge myself to learn a new skill. Early last year, I took to learning Javascript properly. Through my work at Points (especially on the redesign of the SouthWest Rapid Rewards inline Buy app) and on Rookeries, I got significantly better at Javascript and frontend web development. I also briefly played around with writing server-side apps in Node.

So I challenged myself in December to learn to develop apps using C++ and Qt. Using the Game Programming in Qt book and Cherno’s Learning C++ YouTube series, I managed to learn the basics of Qt and C++. In fact I was very pleasantly surprised. Learning C++ can be hard, but I got it. Maybe, as the CppCon 2015 talk about learning C++ points out, learning C++ is hard because learning old K&R-style C first and then getting into C++ is hard. (Pointers are hard, especially if you get into pointer-to-pointer-to-array-of-pointers hell.) But it doesn’t have to be that way. The Qt book, and working on visual projects that I never imagined I’d ever work on, really motivated me to learn. Below I’ve included screenshots of some of the projects I worked on:



While I am far from calling myself an expert in Qt, and C++ can be needlessly complex when you use the old idioms (and the community seems to be full of overly opinionated people), I am so glad I took the effort to learn C++. Lots of the memory management and macros in Rust now make sense. Thanks to the Qt book, I understand why developers prefer embedding JavaScript or Lua over Python in game engines and embedded devices. (Writing a Python-to-C++ two-way translation layer is tedious. And this is without worrying about getting the Python standard library working as well.) Also I am pretty convinced that Qt with QML is the state of the art when it comes to UX development. (React and Kivy are far cries in comparison, not that either should be discounted as important technologies.)

Now I am inspired to delve more into native development. I plan on bringing back justcheckers as a Qt/QML based game. I will see if I can help out with some aspects of Qt, Rust and CPython. I am excited not only because I essentially achieved a New Year’s resolution in the first two months or so, but also because actual native development is now something I can do and understand.

Reflections between PyCons – PyCon US 2018 + PyCon Canada 2018

I had the opportunity to go to PyCon US this year again. And I’ve been meaning to put down my thoughts about it, but I have managed to distract myself in a million different ways. However I feel I really ought to say what I learned:

PyCon US 2018

I enjoyed PyCon US this year, especially the “hallway” track, and I feel I learned a lot from talking with vendors and going to a few of the open spaces. One of the best parts was meeting up with friends whom I only get to see at PyCons. But you’d have to be there to understand and experience that. Also I wish I could have stayed for the coding sprints this year. While I did not watch the talks live, I am compiling a list of the talks that I enjoyed from PyCon US 2018 here. Some of the more interesting tidbits I learned:

LinkedIn is experimenting with an interesting alternative to microservices. The best way I can describe it is a single containerized codebase with multiple entrypoints, which allows different services to run under different operational parameters. The engineering teams get the benefit of code co-location that you get from a monolith, and the operations team can run the codebase as separate services that can be tweaked accordingly.

Mozilla, through their experiments with WebAssembly, is looking at running native applications in the browser. Dan Callahan’s PyCon keynote on getting Python into the browser was amazing. There is a lot of potential for WebAssembly, but there are still a lot of unanswered questions behind it.

I was surprised by the number of vendors specializing in various aspects of e-commerce who were out exhibiting their SaaS offerings. If someone were to start a company like Points today, they could do so with a smaller number of people and outsource a large number of the “non-core” elements of the business. Also, Pythonistas sure love their Postgres databases, because there were quite a number of vendors working in that space too.

I got to meet and talk with the awesome Reuven Lerner, who hosts the amazing Freelancers’ Show. And I learned quite a bit about consulting from the freelancers and consultants who came out to Reuven’s open space. Positioning in the market is crucial, as I expected. What I didn’t expect was that I seem to have gained quite a bit of knowledge from listening to the podcast and reading various books. Alas, it is all still theoretical rather than practical in many cases.

Python’s killer application is data science, and there is a lot of interest and work in that area. This means that while web development in Python will stay around, I believe there will be less and less emphasis on it, both in the community and at PyCon. There are a lot of new and interesting people coming into the field from coding schools, so only time will tell how that works out. But if you are into analysis and data science, Python will help you shine.

PyCon Canada 2018

Speaking of PyCons, Elaine Wong, who is the chair of PyCon Canada 2018, convinced me to help run the sprints for PyCon Canada 2018. This should be interesting, since I’ve never organized an event like that. But I look forward to the challenge. If you are in Toronto, and interested in helping sponsor, volunteer at or host the sprints, please contact me!

Rookeries v0.9.0 out – New UI and Live Editing of Sites + Juggling JSON with jq Book Update

It has been quite a while since my last update. I apologize for the long silence. I wanted to focus on getting Rookeries up and running, to the degree that I can host a website with it. Namely, I was hoping to update the Amber Penguin Software website and the Juggling JSON with jq book companion website to use Rookeries. Unfortunately I had to re-architect the UI and bring the CouchDB management of the site and pages up to spec. However, I am happy with the overall results:

Interactive Editing of Menus

Rookeries v0.9.0 introduces support for adding, removing and reordering menu items. It took a bit to find a good, simple implementation of a drag-and-droppable list.

Version 0.9.0 introduces interactive editing of menus.

Interactive Editing of Site Headers and Footers

Rookeries v0.9.0 also extends support of editing pages to site headers and footers. It uses the same inline editor for Markdown and HTML, that the pages currently enjoy.

You can now edit site headers in Rookeries.

So what else changed?

  • The migration of the UX over to a sliding sidebar is done.
  • Internally, Redux was replaced with MobX, which is far more maintainable for me.
  • The addition of saving sites and the currently selected page.

There is still a lot to do on Rookeries, such as making the CMS more robust and flexible. Adding and removing pages is still outstanding, as is selecting pages via URLs, marking whether changes have been made so that the user can be prompted to save, and checking that the site is not in an invalid state. However, things are starting to look up.

A few people have inquired about the state of the Juggling JSON with jq book, and when a sample chapter and table of contents will be available. I apologize for not replying directly to those emails (I will do so shortly), but I hope to get those out in the next few days, and hopefully before the end of the year. Work on Rookeries has essentially eaten up all the time that I would have used for working on the book. I think going forward I will work on both in parallel.

Using Blender for Cover Design

One of the fun parts of writing Juggling JSON with jq is that I can experiment with various things. On the technical writing side, working with Sphinx has forced me to learn the ins and outs of that technology. I will also very likely need to get into working with LaTeX for the more advanced PDF parts.

The book cover provided me with an excuse to work on my graphic design skills. I decided that I wanted to do something more than just use a stock photo or an old-timey engraving (a.k.a. O’Reilly’s book covers). Instead I decided to use Blender to create a compelling cover image. Yes, it may sound like overkill, but using Blender harkens back to past times. Before I settled on studying computer science at the University of Toronto and working as a programmer, I considered becoming a 3D graphics artist. I played with 3D Studio Max in high school, and learned about drafting and animation. When faced with the reality of being proficient but not amazing at drawing, and the very real possibility of competing with many more talented and experienced artists in the market, I decided against that career path. However, this decision did not dissuade me from taking drawing in university or enthusiastically learning Maya to make an animation for a visual computing course.

So I followed the excellent YouTube video tutorial by Blender Guru. After a few hours of watching the tutorial, I came up with a final image like so:

Blender Doughnut scene

Doughnut and cup scene created in Blender. Created on August 14, 2017.

And boy did I learn a lot from the tutorial: navigating Blender’s confusing UI, modeling, texturing, lighting and rendering a final image.

As for the book cover, I am working on a cel-shaded, low-poly scene involving penguins and juggling. The process has been pretty fun so far, even though it is quite time consuming. So far I have finished my low-poly model of a penguin, and added rigging (an internal skeleton with joints for movement):

Low-poly model of a penguin

Low-poly model of a penguin for the Juggling JSON with jq book. Created on August 23, 2017.

Rendered pose of a juggling penguin

Rendered pose of the juggling penguin for the Juggling JSON with jq book. Created on August 23, 2017.

Juggling JSON with jq is Out Today

Apologies for the late post, but it has been a busy day. As of today you can buy the early edition of Juggling JSON with jq on Gumroad. I am very excited, since this is my first attempt at self-publishing a book. Naturally the journey is still continuing as I work towards the final release of the book. All buyers of the book will get updates, including the final version when it comes out.

I plan on sending out an update on the book and the companion site/API that I am working on.