PyCon US 2016 Talk – Pythons in a Container

At the end of May, I presented a talk at PyCon US 2016 on using Docker with Python microservices. You can imagine the rush I felt getting to present on such a popular topic at such a large and important conference as PyCon! It took me a while to recuperate after PyCon and Portland, both of which were amazing, and I would definitely do another talk at PyCon given the opportunity. Anyway, I hope you enjoy watching the video of the talk! Below the fold, I also write about preparing for the talk, its reception, and a bit of the controversy that it stirred up. 🙂 (And I apologize for the lateness of this post; it's been sitting in my backlog waiting to get finished for a few weeks now. 🙁 )

Video

Links

Abstract

Microservices and Docker are all the rage for developing scalable systems. But what challenges will you face when developing Python apps with Docker and deploying them to production? This talk goes into the real-life lessons learned from creating, deploying, and scaling Dockerized Python applications.

About the talk

Preparation for the Talk

PyCon talks definitely take quite a bit of time and effort to prepare. In my case, the talk went through three major revisions before becoming the talk that I actually presented at PyCon. What started off as an intro to some of the concepts of Docker, with some minor Python points, became more of a lessons-learned talk targeted at intermediate to advanced developers. One of the things I wished I had done (and planned to, but didn't pull off) was to mention and thank my team for helping me prepare my talk. So thank you Kevin Qiu, Biniam Bekele, Yele Bonilla, and Gavin D’Mello for all your support, for sitting through three versions of my talk, and for all the amazing feedback! I'll make sure to include a slide with thanks next time.

I am also very thankful to Jared Kerim from Mozilla, who presented at the local Python Toronto meetup about his team's Docker setup. He was kind enough to let me use his example docker-django-template project. His example also inspired my own docker-compose example using my CMS, Rookeries, which I crafted and tested on the plane trip over to Portland. (After talking with other speakers, it seems that finishing up your presentation, notes and examples on the plane trip over is a proud tradition of PyCon and other conferences. :D)

Reception

Overall the reception of the talk was amazing! The talk drew quite a crowd, in fact filling up most of the room. (I'm not sure of the capacity of the room, but I estimate over 300 people attended.) I was pretty nervous, but with the exception of a few stumbles, I think I pulled off the talk quite well. I really enjoyed some of the questions that were fielded during the Q&A session, and also privately afterwards. I wish I could have answered some of the Docker Machine and Amazon ECS questions better, but I simply have not worked with either technology long enough to give proper advice.

Controversy

The most surprising aspect of the talk was the controversy it stirred up. At the end of the Q&A you can hear some comments from a young lady about where I supposedly went horribly wrong, and how there were tweets flying back and forth about it. (I had turned off the notifications on my phone when I got up on stage, to avoid getting distracted.) She persisted in telling me (or trying to explain) what was wrong at the private gathering afterwards. Unfortunately she did not do a wonderful job of communicating it, and I felt it took away time from others who wanted to ask their questions. It didn't help her case that she admitted to being a novice at Docker. Please don't do that as an attendee; there are better ways to disagree and communicate it.

I was later approached by a gentleman (thank you, whoever you are), who mentioned I should go talk to the OpenShift guys since they had some concerns about my talk. News of the Twitter controversy worried me, because I hated the notion that I had gotten on stage and told people to go and do the wrong thing. Especially since I was apparently telling people the opposite of what Glyph of Twisted fame said to do. After a brief chat with the OpenShift guys (and a nice demo of their cool Kubernetes suite), I found out that Graham Dumpleton, the creator of mod_wsgi who works on OpenShift, had done a live-tweeting commentary during my talk, where he disagreed with a few of my points. Long story short, eventually I was able to chat with Graham. He was a great sport and explained his points. Interestingly enough, I had also talked with the folks at Docker, and they agreed with the points in my talk and the logic behind them. Essentially most of my points were based off the best practices they proposed.

Anyway, I have listed a few of Graham's points below, with links to his blog posts (thanks again Graham!) and some of my quick thoughts on each one. A quick disclaimer about some of my points: the advice I gave worked for us in our datacentre, and it might not work for others in other environments. It might not be perfect, but it worked well for us, and for some of the folks at Mozilla. I gave a similar disclaimer at my other talk, on an Ansible setup for WSGI apps at PyCon Canada, and at the time I thought it was superfluous. But it turns out that being explicit about it is a useful thing.

Errata

So the slide that caused a good portion of the controversy was the base image one. There I had provided an example Dockerfile on half the slide, and discussed base images and good Dockerfile practices with points on the lower half. Now the example was meant as a toy and not necessarily complete. It is difficult, even impossible, to present a well-formatted, perfect Dockerfile in that context. There is only so much room on a slide to fit both an illustrative example and some explanatory points. That is why I included links to some samples that hopefully did a better job of it.

Virtualenvs

Ah yes, the “enfant terrible” of my talk. 🙂 If you want to be controversial in your talk, mentioning something like this will get people's attention. (Ironically, it was not my desire to stir up a controversy.) Graham posted a while back about why you might want to use virtualenvs in your Dockerized app. It is a longish post, so I'll give a shortened version. Basically, when you base your image off some distro (say Ubuntu, Fedora or whatnot), there is a good chance of bringing in more Python packages in your system site packages than you expected. For example, you're building a Flask app, and the package maintainer included a version of Werkzeug in the base Python install; now when you pip install Flask as part of your requirements, you get the wrong version of Werkzeug.

And that is a valid point (with my example)… except if you use something like the official Python 2.7 base image… which installs just Python. I would argue that you would catch and resolve this issue if you are auditing your Docker images. (And you should always be doing your due diligence and checking your base and resulting images.) So yes… you don't really need virtualenvs, but you can also use them if you are concerned that you might be getting conflicting packages.
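To make that concrete, here is a minimal Dockerfile sketch of both options (the file names are illustrative, and the commented-out line shows the virtualenv variant for distro base images):

FROM python:2.7
COPY requirements.txt /app/requirements.txt
# The official base image ships an effectively empty site-packages,
# so pip installs exactly what requirements.txt pins.
RUN pip install -r /app/requirements.txt

# On a distro base image, a virtualenv guards against distro-provided
# packages shadowing your pinned versions:
# RUN virtualenv /venv && /venv/bin/pip install -r /app/requirements.txt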

Volume maps

Graham was right that adding volume mappings in the Dockerfile is problematic. You should not define volume mounts in your Dockerfile, since they create extra files with sudo-like permissions on the host (see /var). In your own datacentre that isn't a problem, but a multi-tenant cloud provider like OpenShift would disallow you from creating those files. The documentation argument I provided is not all that useful either, since you can document the mountpoints in the README that you would provide with the Docker image.
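In other words, skip the VOLUME instruction in the Dockerfile and declare the mapping at run time instead; a hypothetical example with made-up host and container paths:

docker run -v /srv/myapp/uploads:/app/uploads myapp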

Base Images

Base images are hard to get right, and there is a lot of debate about whether or not to use tooling instead of base images. Graham says his warpdrive tool will do that sort of thing. At work we built our own tooling for building “standard” service Dockerfiles, and that just adds another level of abstraction. I prefer base images since, while not ideal, they involve fewer levels of abstraction that can get in the way when you're debugging your Dockerfile setup. But your mileage may vary here.

So yes, good base images are hard. Try not to build your own unless you find it really useful and you have a great base to work from.

Installing GCC/Build Tools

In an ideal world, one ought not to have to include GCC, the Python dev headers, and so on. Yes, one can pip install using wheels, but that doesn't always work out.

Dockerfiles

Formatting of the RUN command: this is not one of Graham's points, but it did come up. Yes, you should format RUN commands with a line for each command, using a \ line continuation for readability. My slide didn't have enough physical space to do so. My Rookeries example does a better job of this.
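For reference, here is the sort of formatting I mean; a minimal sketch with illustrative packages:

RUN apt-get update \
    && apt-get install -y gcc python-dev \
    && rm -rf /var/lib/apt/lists/*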

Running as Root

Graham is right: you should not run containerized apps as root. That is a bad security practice that can lead to an attacker compromising your Docker host via a privileged account on your Docker container. Again, a bad example on my part; I should have added a USER command and dropped the VOLUME line, or maybe rethought the use of an example.
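The fix is only a couple of Dockerfile lines; a sketch, with a hypothetical user name:

RUN useradd --create-home appuser
USER appuser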

uWSGI and the HTTP flag

No, you don't need it, and you should use the uwsgi protocol if you put an NGINX container in front of your WSGI container. I left the flag in to make sure the example Dockerfile was runnable. My bad on trying to get a good illustrative example, but it wouldn't be a good idea in production unless you feel comfortable exposing uWSGI directly to HTTP traffic.
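The difference comes down to a single uWSGI flag; a sketch with an illustrative port and module name:

# Behind an NGINX container, speak the native uwsgi protocol:
uwsgi --socket :8000 --module myapp.wsgi
# Serving HTTP directly, e.g. for a standalone runnable demo like my slide example:
uwsgi --http :8000 --module myapp.wsgi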

Personally I’m not a fan of mod_wsgi + Apache, but Graham did point out he created mod_wsgi-express to simplify your life. If we continue to use Apache + mod_wsgi at work, then I’ll try to get us to use mod_wsgi-express too.

Final Thoughts

Anyway, I hope I got everything right. Thank you for reading all the way to the end! 🙂

See You at PyCon US 2016!

If you’re wondering why I’ve been so quiet these past few weeks, it is because I’ve been busy preparing to go to PyCon US in Portland this year!

I am very excited not only to be attending, but also to be giving a talk at PyCon US this year! I will be talking about Dockerizing Python microservices, and some of the lessons we've learned along the way at work. My talk will be on the first day (Monday May 30th) at 3:15-3:45 PM (PST). Videos of all the PyCon talks should be available a few days after the talk.

Huge thanks to everyone at my workplace, Points, for making this possible for me!

Finally, I will be around in Portland for a few days after the sprints as well. I have never been to Portland, so I want to check out some of the sights around there. Let me know via Twitter or email if you want to meet up with me while I'm there. 🙂

Taking a Break

Dear readers,

I was hoping to have a new entry for you this week. Unfortunately I am swamped with non-blogging work at the moment and need to concentrate on that, so I'm taking a break from blogging for a bit. I should be back to my regular blogging schedule in the next couple of weeks.

Cheers,
Dorian

Through the Javascript Looking Glass Part 1: Finding ES6 + NodeJS Alternatives to the Python Stack

Apologies for missing last week's scheduled post and being late with this week's post as well. I've been putting off writing articles and refilling my queue in favour of other things that have been (or seemed) more important than blogging. Either way, I'll try to fix this so that next week I'll be back to my regular schedule. –Dorian

Introduction

Over these past weeks, right before the start of the new year, I have been experimenting with something new. As part of trying to use server-side rendering for the React client inside Rookeries, I decided to figure out how to achieve this using NodeJS. To kill a couple of birds with one stone, I decided to use this as an opportunity to play around with ES6. In the next few blog posts I will write about some of the lessons I learned along the way.

The Project and Its Architecture

To help me focus my learning, I decided to concentrate on a project that would provide a skeleton for it. I chose to recreate one of my earlier Flask projects, which runs the Amber Penguin Software website. This web application acts as a cross between a static file website and a CMS, serving template pages that render Markdown into the body of each page. The routing is a fairly trivial lookup of flat files, returning a 404 error page when a page is not found. The tech stack was Flask, a simple [Jinja2](http://jinja.pocoo.org/) template, and Markdown as the Markdown rendering engine.

My project consisted of four phases:

  1. Recreate the current setup using a NodeJS tech stack.
  2. Add a simple JSON API and a simple React component that was “renderable” via both the server and the client.
  3. Build out the React app to handle routing and retrieving the content of each page, with a first-time server load and subsequent Ajax calls to the JSON API from the frontend React components.
  4. Host the completed app using my existing Ansible setup.

Finding Alternatives

The first task consisted of figuring out which NodeJS technologies I could use to recreate the Python/Flask app. It turns out that the language-specific communities in the web app world like to borrow heavily from each other. Just as Ruby's Sinatra microframework inspired Python's Flask, so did Node's ExpressJS take notes from Flask. Jinja2 inspired Mozilla's Nunjucks and a bunch of other similar templating libraries. (I ended up using Nunjucks since it is the most mature library.) Marked replaced Markdown. The tricky part was actually replacing Python's io.open() for opening files. With a bit of experimentation I figured out how to use Node's fs (file system) module and its readFile() and readFileSync() methods.
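As a rough sketch of how those pieces fit together (the paths, template names and port are illustrative, and this simplifies the real app):

const express = require('express');
const fs = require('fs');
const marked = require('marked');
const nunjucks = require('nunjucks');

const app = express();
// Wire nunjucks in as the Express view engine.
nunjucks.configure('templates', { express: app });

app.get('/:page', (request, response) => {
    // fs.readFile() stands in for Python's io.open().
    fs.readFile(`pages/${request.params.page}.md`, 'utf-8', (error, contents) => {
        if (error) {
            response.status(404).render('not_found.html');
            return;
        }
        // Render the Markdown into the body of the page template,
        // just like the Flask app did (page.html would use {{ body | safe }}).
        response.render('page.html', { body: marked(contents) });
    });
});

app.listen(3000);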

In short I could translate the tech stack this way:

  • Flask ⇒ ExpressJS
  • Jinja2 ⇒ Nunjucks
  • Markdown ⇒ Marked

Next

Next time, I’ll go into the details of setting up the ExpressJS apps and routes.

DNS Woes

For the past couple of days, instead of working on actual development work related to any of my projects, I've been transferring all of my domains from DreamHost, my old hosting provider, to a new DNS provider. I was looking forward to a gentle switchover to my new Canadian (eh!) DNS provider, easyDNS. Unfortunately, as with many technical problems, I ended up spending more time and effort than I originally expected. (Enough effort that I'm late with posting this blog update today.)

It turned out that DreamHost made enabling my desired setup really easy, and hid a lot of the technical difficulties of setting up DNS records. easyDNS is a lot more flexible, but I'm not a DNS record expert, so getting a similar setup was tricky. Fortunately the fine folks at easyDNS are really responsive by email, and after a few emails back and forth we arrived at a setup that worked nicely. Most of this came down to not understanding the terminology and not checking the right places.

DNS Record Terminology

A

This is the main record that maps a domain name to an actual IP (v4 or v6) address. In my case this would be the IP address of the server hosting all my webapps.

my_domain.com ---> 123.456.789.10 (not a real IP that I own)

CNAME

Canonical Name or Alias, this is used to map a subdomain (e.g. www, app, etc.) to another part of the domain or another top-level address. This retains the subdomain name in the address bar of your browser.

www.my_domain.com ---> my_domain.com
mail.my_domain.com ---> my_email_provider.net

URL Redirect

Redirects (usually via an HTTP 301 redirect) an address to another domain or location. (This turned out to be the option I needed for most of my sites.) A redirect naturally will change the URL in the browser's address bar.

my_domain.ca ---> http://my_domain.com/
my_nifty_sideproject.net ---> http://side_project.github.io/landing/

Stealth Redirect (easyDNS specific)

Redirects using an IFRAME and retains the original address in the browser. Otherwise it works like a URL redirect; it just won't look like one.

A Working Setup

My original setup at DreamHost basically pointed all my additional domains at my main domain, and removed the www subdomain that normally gets tacked onto a URL by browsers.

dorianpula.com -> dorianpula.ca
www.dorianpula.ca -> dorianpula.ca

Now traditionally you are supposed to use a CNAME for the second example. I just ended up using URL redirects everywhere to make things simple, and an A record for the main top-level domain to point to my Linode servers.

Check Your Nameservers!!!

The setup turned out to be really simple, but at first I could not get any of my changes to work. Or rather, some of them worked, others did not. It was very frustrating at first, but then the easyDNS support rep pointed out that I had not updated my nameservers for some of the domain names I transferred over. I was originally pointing to DreamHost's nameservers rather than easyDNS's, and my changes simply were not propagating through. Once I fixed that, everything started updating as expected.

Provider Recommendations

Finally, I just want to make some recommendations for anyone looking for a hosting or domain provider. I started off using DreamHost after migrating away from GoDaddy, and I was happy with them for the longest time. They are convenient, easy to set up (especially for PHP apps) and pretty supportive. I highly recommend them if you have a normal website (like a WordPress blog) and want a one-stop shop.

Personally I outgrew DreamHost, when I needed something more configurable for my Python webapps. I’ve since migrated to Linode, who provide very nice, configurable and affordable VPS (virtual private server) hosting. I love using them and they support a wide variety of different OS platforms and versions.

Finally, I recommend easyDNS. They're great for Canadians, supportive, and they care about your Internet freedoms (their takedown policy requires a real court order or blatantly illegal activity, rather than some flimsy takedown letter from a random legal department). I really recommend them if you want flexible DNS/domain hosting. The problems I encountered were of my own doing and lack of understanding, and the support rep helped me resolve them in a few hours and after a few tries.

PyCon US 2016, Maybe?

Well it looks like 2016 is off to a nice start. This weekend I submitted two proposals to talk at PyCon US 2016! And yes, I have already bought tickets, even though I am not quite sure how I’ll get to Portland, Oregon. 😛

But I thought why not? I had a lot of fun at PyCon last year in Montreal, and I’ve never been to Portland… and talking at PyCon Canada was actually quite fun. Now I don’t know if my proposals will get accepted or not. I will say that writing proposals is not fun. But fingers crossed maybe I’ll see you guys in Portland this spring… and maybe I’ll get a chance to talk as well.

As the Year of 2015 Draws to a Close…

…it makes me want to reflect on this past year and plans for the new year.

Professionally it has been a great year. I feel like an active and important part of the work we do at Points. Over the past year, I've gone from being mediocre in Javascript to feeling confident working with the language. I have given talks both small and large, and people seek out my expertise. (Even though at times I don't feel like an expert in many things.) I have approached languages and frameworks like Erlang, Qt, C and OpenGL that in the past felt unapproachable, and have learned quite a bit about them. In general things are looking up, and hopefully next year I'll continue this trend and expand my knowledge and experience in new directions.

Personally it has been a year of highs and lows. I am happy to feel that I'm on track with some of my personal goals. A large one was giving a talk at a PyCon, and that went pretty well. The steady progress on Rookeries has also been encouraging. I feel my faith has deepened, which is an important aspect of my life. Also, the realization and confirmation that I can be an interesting and fun person, whom others want to be with and seek out, makes me feel better as a person. It has not all been sunshine and roses; having to close down projects (like justCheckers) and give up on certain plans has certainly been disappointing. Also I have the nagging feeling of not having done enough writing. Hopefully I will rectify that this upcoming year.

Maybe it is just a feeling, but seeing myself take on older projects and realize them feels empowering. There is still so much I want to do, but as this year draws to a close, I feel that I'll be able to continue progressing forward. And I know this is a bit premature, but since we are so close to the start of 2016: Happy New Year!

Merry Christmas

Today is Christmas Eve, and I just wanted to wish you dear reader: a very Merry and blessed Christmas to you and your family. And may the One who came into the world to save humanity, may also come into your life.


Switching to systemd

I recently upgraded my version of Ubuntu to 15.04. In the process, I found out that my init system had changed from Upstart to systemd. While having something as fundamental as the init system change was a bit annoying, it isn't as bad as some folks on the Web are making it out to be. Here are some of the things I learned as I worked with systemd while fixing my CouchDB server. Part of this is based on this excellent guide on using systemd for Upstart users. The other part is just experimentation on my part.

Checking the Status of All Services

sudo systemctl status

Hint: feel free to grep through the output to find a particular service.
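For example, to hunt for the CouchDB unit:

sudo systemctl status | grep couchdb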

Sidenote

At first I could not find the CouchDB service. I had to uninstall and purge the package, and then reinstall it:

sudo aptitude purge couchdb
sudo aptitude install couchdb

After that the service showed up in the systemd services, rather than just having an Upstart service.

Checking the Status of a Service

sudo systemctl status $MY_SERVICE

Starting and Stopping a Service

sudo systemctl [start|stop|restart] $MY_SERVICE

Logging Service Activity

This was an interesting thing in systemd. Normally one has to look at the Upstart log at /var/log/upstart/$JOB.log. systemd provides its own logging mechanism, so to see the log of a service you have to use the journalctl utility.

sudo journalctl -u $MY_SERVICE

The output behaves just like one would expect from less.

Overall Impressions

Once I got over my initial head-scratching about how to use systemd, it was not bad. It feels different, but honestly not in a bad way. I might look into this more closely and see if I prefer something like systemd over supervisord for controlling even WSGI apps.
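As a rough idea of what that would involve, here is a minimal, untested unit file sketch for a uWSGI-served app (all paths and names are made up), which would live at something like /etc/systemd/system/myapp.service:

[Unit]
Description=My WSGI app served by uWSGI
After=network.target

[Service]
User=appuser
ExecStart=/srv/myapp/venv/bin/uwsgi --socket :8000 --module myapp.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target

With that in place, sudo systemctl start myapp and sudo journalctl -u myapp would work just like with any other service.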

The one disappointing thing I discovered related to my experiments with systemd, or rather specifically with CouchDB on Ubuntu 15.04: for some reason I can't quite fathom, there doesn't seem to be a way to configure CouchDB to have admin users with passwords. In the meantime, I guess I'll have to stick to running CouchDB from a Docker container until I can resolve the issue natively.

Sending Success and Failures Messages to SauceLabs for Splinter Web Tests

We use SauceLabs at work to handle the maintenance of environments specifically for running web tests. I highly recommend using SauceLabs over trying to maintain your own internal testing lab. (I know, because I helped create our Android web test environments and maintain the iOS Selenium environment. It was a lot of work and not a lot of fun, especially with deadlines breathing down my neck.)

As mentioned in a previous post, my team started using Splinter for writing web tests. Splinter has excellent support for running remote Selenium tests, and hence for running tests in SauceLabs virtual VMs. However, one thing that drove me nuts is that when looking at the SauceLabs dashboard, I had no idea whether a test scenario had succeeded or not.

Adding Success and Failure Messages

It turns out that a SauceLabs VM has no notion of a successful or failed test… or more correctly, a Selenium web driver doesn't. But you can correlate a SauceLabs VM test run (a job) with a successful or failed test.

The only issue is that the example given uses nose and selenium rather than lettuce and splinter. So how did I get it working for splinter and lettuce? By adding the following code after a scenario is run:

import lettuce
import sauceclient
import splinter.driver.webdriver.remote

# Your SauceLabs credentials; placeholders here, define them wherever your test configuration lives.
SAUCELABS_ACCOUNT = 'my-saucelabs-account'
SAUCELABS_ACCESS_KEY = 'my-saucelabs-access-key'


@lettuce.after.each_scenario
def report_to_saucelabs(scenario):
    # scenario.browser is assumed to be the splinter Browser attached during test setup.
    # Remember to quit your webdriver after you run your tests!
    scenario.browser.quit()

    # This was the most surefire way to check if this was a remote web driver vs a local one.
    if not isinstance(scenario.browser, splinter.driver.webdriver.remote.WebDriver):
        return

    # Now gather all the failed steps and figure out what status needs to get sent.
    failed_steps = [step for step in scenario.steps if step.failed]
    test_successful = not failed_steps

    # Finally connect to SauceLabs using the sauceclient Python client (pip install sauceclient)
    # and mark the job, identified by the browser's session ID, as passed or failed.
    sc = sauceclient.SauceClient(SAUCELABS_ACCOUNT, SAUCELABS_ACCESS_KEY)
    sc.jobs.update_job(scenario.browser.driver.session_id, passed=test_successful)