Earlier this week I finally took the plunge and upgraded my VPS to Ubuntu 16.04. Aside from a minor hiccup with supervisord not being enabled at boot (which I could probably avoid by going the systemd route), the upgrade was simple for both my WSGI and Node webapps.
I cannot say the same about my WordPress/PHP installations. (Installations that I hope to transition over to Rookeries once that software becomes more stable.) It took me a few hours to track down and resolve the problems. Hence I am posting this article, to hopefully save someone else some time when they do the same upgrade.
Upgrading to PHP 7.0
Ubuntu 16.04 makes the switch away from PHP 5 to PHP 7. So I had to switch to php7.0, php7.0-fpm, and php7.0-mysql from their PHP 5 equivalents. The location of the running UNIX socket has changed from /var/run/php5-fpm.sock to /var/run/php/php7.0-fpm.sock, as have the PID files.
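For reference, the package switch boils down to something like this (a sketch assuming a stock apt setup; your list of PHP extensions will vary):

```bash
sudo apt-get install php7.0 php7.0-fpm php7.0-mysql
# Optionally clean out the old PHP 5 packages once everything works:
sudo apt-get purge php5-fpm php5-mysql
```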
Updating the PHP-FPM configuration
Running WordPress with FPM (FastCGI Process Manager) and NGINX requires turning off path translation in the php.ini file. This can be done by uncommenting the cgi.fix_pathinfo line in the configuration file /etc/php/7.0/fpm/php.ini and setting it to 0. Again, these files have moved from their old locations. After you have done this, remember to restart the FPM service using the new systemd utilities: sudo systemctl restart php7.0-fpm.
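The relevant bit of the new php.ini looks like this (only that one line needs to change):

```ini
; /etc/php/7.0/fpm/php.ini
cgi.fix_pathinfo=0
```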
Updating the NGINX configuration and Solving the Blank Response
This is the tricky part. After updating my NGINX configurations to the new UNIX socket path, and restarting NGINX, I found that I got blank PHP responses. Everything else worked, except that any PHP page would not render. And by not render, I mean the responses came back with no content in the body at all. That led me down a few rabbit holes, including researching how to re-architect my setup using Docker. Eventually I stumbled across a blog entry with the solution to the blank PHP response issue.
In a nutshell, with the NGINX upgrade one of the parameters needed for FastCGI went missing, namely the fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; bit. Oddly this parameter appears in /etc/nginx/fastcgi.conf and not in the /etc/nginx/fastcgi_params file that I normally include in my NGINX configs. Anyways, after adding this line and restarting NGINX using sudo systemctl restart nginx, everything worked correctly. Below I've included a sample NGINX configuration that should work.
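Here is a minimal sketch of the kind of server block that worked for me; the domain and the WordPress root are placeholders, so adjust them for your own setup:

```nginx
server {
    listen 80;
    server_name example.com;          # placeholder domain
    root /var/www/wordpress;          # placeholder WordPress root
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        # The parameter that went missing with the upgrade:
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # The new PHP 7.0 FPM socket path:
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}
```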
I was going to write about why needing an app isn't a good way to try to get into business. (Hint: it is very expensive if you can't code it yourself.) However something else popped up, namely that I am trying to sell off two domains I own: justcheckers.org and justcheckers.net. These were for the justCheckers project, which I'm essentially shutting down. Anyways, if you are interested in buying them off me, feel free to contact me.
I will say that the experience so far has been interesting. Thankfully there are some good resources for selling a domain or site. I've looked at two companies that handle the auction and transfer of these things: Flippa and Sedo. Both are quite legitimate, but there were some bad reviews and some people claiming to have been scammed. I'm trying out Flippa first since it was founded by the fine folks at SitePoint, who I quite like for other reasons. Anyways, I'm hoping that I can find a good home for the domains.
I’m looking forward to PyCon Canada 2016 that will be happening November 12-13 in Toronto. I submitted two talk proposals and I’m hoping that one of them gets accepted. But regardless I am looking forward to the conference. If you are at the conference, and you want to meet up just message me via Twitter @dorianpula. Also I plan coming out to the sprints that I’m hoping will be happening afterwards. See you there!
I have not had a chance to blog in a while. Aside from the usual busyness of life, and the occasional bouts of illness, I have been distracted by a few new things I've been learning: Rust, Kivy and Electron. I'll write about Kivy and Electron in a future post. A lot of that is centred around the upcoming product launches for Amber Penguin Software. But that is again for another post.
For the longest time, systems-level programming (especially operating systems) has fascinated me. As part of that, I tried to learn the languages used to implement systems, namely C and C++. While today I feel more comfortable with these languages, they still scare me with their complexity (C++), their programming tools (gdb, gcc, autoconf, and minions), and their potential to do horrible things to your system if you are not careful. And bugs can be incredibly difficult to track down and debug. So while I have tried to write more C and C++, I still avoid them for these reasons.
Also I recommend listening to the New Rustacean podcast to learn Rust as well. It is not only informative, but very well executed by host Chris Krycho. So far I've listened to 10 episodes, and between the podcast, the koans and simply playing with Rust, I've learned a lot about Rust. In fact I feel more comfortable with Rust now than I have ever felt with C or C++.
In general, UIs are a weak point for Rust. Then again, UI libraries are not the simplest thing to get off the ground, and it might be easier to rethink how we build them in general. Again, this is something I can get to in a future post.
At the end of May, I presented a talk at PyCon 2016 on using Docker with Python microservices. You can imagine the rush I felt getting to present on such a popular topic at such a large and important conference as PyCon! It took me a while to recuperate after PyCon and Portland, both of which were amazing, but I would definitely do another talk at PyCon given the opportunity. Anyways, I hope you enjoy watching the video of the talk! Below the video I also write about preparing for the talk, its reception, and a bit of the controversy it stirred up. 🙂 (And I apologize for the lateness of this post; it's been sitting in my backlog waiting to get finished for a few weeks now. 🙁)
Microservices and Docker are all the rage for developing scalable systems. But what challenges will you face when developing and deploying Python apps using Docker to production? This talk goes into the real-life lessons learned from creating, deploying and scaling Dockerized Python applications.
About the talk
Preparation for the Talk
PyCon talks definitely take quite a bit of time and effort to prepare. In my case, the talk took 3 major revisions before becoming the talk that I actually presented at PyCon. What started off as an intro to some of the concepts of Docker with some minor Python points became more of a lessons-learned talk targeted at intermediate to advanced developers. One of the things I wished I had done (and planned to but didn't pull off) was to mention and thank my team for helping me prepare my talk. So thank you Kevin Qiu, Biniam Bekele, Yele Bonilla, and Gavin D'Mello for all your support, sitting through three versions of my talk, and all the amazing feedback! I'll make sure to include a slide with thanks next time.
Overall the reception of the talk was amazing! The talk drew quite a crowd, in fact filling up most of the room. (I'm not sure of the capacity of the room, but I estimate over 300 people attended.) I was pretty nervous, but with the exception of a few stumbles, I think I pulled off the talk quite well. I really enjoyed some of the questions that were fielded during the Q&A session, and also privately afterwards. I wish I could have answered some of the Docker Machine and Amazon ECS questions better, but I simply have not worked with either technology long enough to give proper advice.
The most surprising aspect of the talk was the controversy it stirred up. At the end of the Q&A you can hear some comments from a young lady about where I supposedly went horribly wrong, and how there were tweets flying back and forth about it. I had turned off the notifications on my phone when I got up on stage, to avoid getting distracted. She persisted in telling me (or trying to explain) what was wrong at the private gathering afterwards. Unfortunately she did not do a wonderful job of communicating, and I felt it took away time from others who wanted to ask their questions. It didn't help her case that she admitted to being a novice at Docker. Please don't do that as an attendee; there are better ways to disagree and communicate that.
I was later approached by a gentleman (thank you, whoever you are), who mentioned I should go talk to the OpenShift guys since they had some concerns about my talk. News of the Twitter controversy worried me, because I hated the notion that I had gotten on stage and told people to go and do the wrong thing. Especially since apparently I was telling people the opposite of what Glyph from Twisted said to do. After a brief chat (and a nice demo of their cool Kubernetes suite) with the OpenShift guys, I found out that Graham Dumpleton, the creator of mod_wsgi who works on OpenShift, had done a live-tweeted commentary during my talk, where he disagreed with a few of my points. Long story short, eventually I was able to chat with Graham. He was a great sport and explained his points. Interestingly enough, I had also talked with the folks at Docker, and they agreed with the points in my talk and the logic behind them. Essentially most of my points were based on the best practices they themselves propose.
Anyways, I have listed a few of Graham's points below, with links to his blog posts (thanks again Graham!) and some of my quick thoughts on each one. A quick disclaimer about some of my points: the advice I gave worked for us in our datacentre, but it might not work for others in other environments. It should work well; it might not be perfect, but it worked for us and for some of the folks at Mozilla. I gave a similar disclaimer at my other talk, on an Ansible setup for WSGI apps at PyCon Canada, and I thought it was superfluous. But it turns out it is a useful thing to mention, and to be explicit about.
So the slide that caused a good portion of the controversy was the base image one. There I had provided an example Dockerfile on one half of the slide, and discussed base images and good Dockerfile practices in points on the other half. Now the example was meant as a toy and not necessarily complete. It is difficult, even impossible, to present a well-formatted, perfect Dockerfile in that context. There is only so much room on a slide to fit both an illustrative example and some explanatory points. That is why I included links to some samples that hopefully did a better job of it.
Virtualenvs in Docker Containers
Ah yes, the "enfant terrible" of my talk. 🙂 If you want to be controversial in your talk, mentioning something like this will get people's attention. (Ironically, it was not my desire to stir up a controversy.) Graham posted a while back about why you might want to use virtualenvs in your Dockerized app. It is a longish post, so I'll give a shortened version. Basically, when you base your image off some distro (say Ubuntu, Fedora or what not), there is a good chance of bringing in more Python packages in your system site packages than you expected. For example, you're building a Flask app, and the package maintainer included a version of Werkzeug in the base Python install, so now when you pip install Flask as part of your requirements you get the wrong version of Werkzeug.
And that is a valid point (with my example)… except if you use something like the official Python 2.7 base image… which installs just Python. I would argue that you would catch and resolve this issue if you are auditing your Docker images. (And you should always be doing your due diligence and checking both your base and resulting images.) So yes… you don't really need virtualenvs, but you can also use them if you are concerned that you might be getting conflicting packages.
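As a rough illustration of what I mean (this is not the exact Dockerfile from my slide, and the app files are placeholders), a toy Flask image built off the official Python 2.7 image needs no virtualenv at all:

```dockerfile
# Toy example: assumes a Flask app with app.py and requirements.txt in the build context.
FROM python:2.7

WORKDIR /app

# Install dependencies first, so Docker can cache this layer between builds.
COPY requirements.txt /app/
RUN pip install -r requirements.txt

# Copy in the rest of the application code.
COPY . /app

CMD ["python", "app.py"]
```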
Graham was right about the volume mapping in the Dockerfile being problematic. You should not define volume mounts in your Dockerfile, since they create extra files with sudo-like permissions on the host (see /var). In your own datacentre that isn't a problem. A multi-tenant cloud provider like OpenShift, however, would disallow you from creating those files. The documentation argument I provided is not all that useful, since you can document the mountpoints in the README that you would provide with the Docker image.
Base images are hard to get right. And there is a lot of debate about whether or not to use tooling instead of base images. Graham says his warpdrive tool will do that sort of thing. At work we build out our own tooling for building "standard" service Dockerfiles, and that just adds another level of abstraction. I prefer base images since, while not ideal, they provide fewer levels of abstraction that can get in the way when you're debugging your Dockerfile setup. But your mileage may vary here.
So yes, good base images are hard. Try not to build your own unless you find it really useful and you have a great base to work from.
Installing GCC/Build Tools
In an ideal world one ought not have to include GCC, Python dev headers and so on. Yes, one can pip install using wheels, but that doesn’t always work out.
Formatting of the RUN command
This is not one of Graham's points, but it did come up. Yes, you should format your RUN commands with a line for each command, using a \ line continuation for readability. My slide didn't have enough physical space to do so. My Rookeries example does a better job of this.
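For what it's worth, here is roughly what that looks like, reusing the build-tools example from above (the package names are just illustrative):

```dockerfile
# One logical step per RUN, with \ line continuations for readability,
# and the apt cache cleaned up in the same layer.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        gcc \
        python-dev \
    && rm -rf /var/lib/apt/lists/*
```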
Running as Root
Graham is right: you should not run containerized apps as root. That is a bad security practice that can lead to an attacker compromising your Docker host via a privileged account on your Docker container. Again, a bad example on my part. I should have added a USER command and dropped the VOLUME line, or maybe rethought the use of an example.
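Something along these lines would have been better (just a sketch; the username is arbitrary):

```dockerfile
# Create an unprivileged user and run the containerized app as that user instead of root.
RUN useradd --create-home appuser
USER appuser
```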
uWSGI and the HTTP flag
No, you don't need it, and you should use the uwsgi protocol if you put an NGINX container in front of your WSGI container. I left the flag in to make sure the example Dockerfile was runnable. My bad on trying to get a good illustrative example, but it wouldn't be a good idea in production unless you feel comfortable exposing uWSGI directly to HTTP traffic.
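In other words, in production you would drop the HTTP flag and have the app container speak the native uwsgi protocol for NGINX to proxy to (via uwsgi_pass), along these lines (the module name and port are placeholders):

```dockerfile
# Serve over the uwsgi protocol for an NGINX container to proxy to,
# instead of exposing uWSGI directly over HTTP.
CMD ["uwsgi", "--socket", ":3031", "--module", "myapp:app"]
```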
If you’re wondering why I’ve been so quiet these past few weeks, it is because I’ve been busy preparing to go to PyCon US in Portland this year!
I am very excited not only to be attending, but also to be giving a talk at PyCon US this year! I will be talking about Dockerizing Python microservices, and some of the lessons we've learned along the way at work. My talk will be on the first day (Monday May 30th) at 3:15-3:45 PM (PST). Videos of all the PyCon talks should be available a few days afterwards.
Finally, I will be around in Portland for a few days after the sprints as well. I have never been to Portland, so I want to check out some of the sights around there. Let me know via Twitter or email if you want to meet up with me while I'm there. 🙂
I was hoping to have a new entry for you this week. Unfortunately I am swamped with non-blogging work at the moment, and I need to concentrate on this for the next couple of weeks. So I’m taking a break from blogging for a bit. I should be back to my regular blogging schedule in the next couple of weeks.
Apologies for missing last week's scheduled post and being late with this week's post as well. I've been putting off writing articles and refilling my queue in favour of other things that have been (or seemed) more important than blogging. Either way, I'll try to fix this so that next week I'll be back to my regular schedule. –Dorian
These past few weeks, right before the start of the new year, I have been experimenting with something new. As part of trying to use server-side rendering for the React client inside Rookeries, I decided to figure out how to achieve this using NodeJS. To kill a couple of birds with the same stone, I decided to use this as an opportunity to play around with ES6. In the next few blog posts I will write about some of the lessons I learned along the way.
The Project and Its Architecture
To help focus my learning, I decided to concentrate on a project that would provide a skeleton for it. I chose to recreate one of my earlier Flask projects, which runs the Amber Penguin Software website. This web application acts as a cross between a static file website and a CMS, by serving template pages that render Markdown into the body of each page. The routing is a fairly trivial lookup of flat files, returning a 404 error page when a page is not found. The tech stack is Flask, a simple [Jinja2](http://jinja.pocoo.org/) template, and the Markdown package as the Markdown rendering engine.
My project consisted of four phases:
Recreate the current setup using a NodeJS tech stack,
Add a simple JSON API, and a simple React component that was "renderable" via both the server and the client.
Build out the React app to handle routing and retrieving the content of each page, with a first-time server-side load and subsequent Ajax calls to the JSON API from the frontend React components.
Host the completed app using my existing Ansible setup.
The first task consisted of figuring out which NodeJS technologies I could use to recreate the Python/Flask app. It turns out that the language-specific communities in the web app world like to borrow heavily from each other. Just as Ruby's Sinatra microframework inspired Python's Flask, so did Node's ExpressJS take notes from Flask. Jinja2 inspired Mozilla's Nunjucks and a bunch of other similar templating libraries. (I ended up using Nunjucks since it is the most mature library.) Marked replaced Markdown. The tricky part was actually replacing Python's io.open() for opening files. With a bit of experimentation I figured out how to use Node's fs (file system) module and its readFile() and readFileSync() methods.
In short I could translate the tech stack this way:
Flask ⇒ ExpressJS
Jinja2 ⇒ Nunjucks
Markdown ⇒ Marked
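Putting those pieces together, here is a minimal sketch of the first phase (not the actual code; the templates/ and pages/ directories and the template names are placeholders):

```javascript
// Assumes: npm install express nunjucks marked
// plus a pages/ directory of Markdown files and Nunjucks templates in templates/.
const express = require('express');
const nunjucks = require('nunjucks');
const marked = require('marked');
const fs = require('fs');

const app = express();
nunjucks.configure('templates', { express: app });

app.get('/:page', (req, res) => {
  // Roughly the Node equivalent of Python's io.open() plus Markdown rendering.
  fs.readFile(`pages/${req.params.page}.md`, 'utf8', (err, markdown) => {
    if (err) {
      return res.status(404).render('404.html');
    }
    res.render('page.html', { body: marked(markdown) });
  });
});

app.listen(3000);
```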
Next time, I’ll go into the details of setting up the ExpressJS apps and routes.
For the past couple of days, instead of working on actual development work related to any of my projects, I've been transferring all of my domains from DreamHost, my old hosting provider, to a new DNS provider. I was looking forward to a gentle switchover to my new Canadian (eh!) DNS provider, easyDNS. Unfortunately, as with many technical problems, I ended up spending more time and effort than I originally expected. (Enough effort that I'm late with posting this blog update today.)
It turned out that DreamHost made enabling my desired setup really easy, and hid a lot of the technical difficulties of setting up DNS records. easyDNS is a lot more flexible, but then I'm not a DNS record expert, so getting a similar setup was tricky. Fortunately the fine folks at easyDNS are really responsive by email, and after a few emails back and forth we arrived at a setup that worked nicely. Most of this came down to not understanding the terminology and not checking the right places.
DNS Record Terminology
A Record
This is the main record that maps a domain name to an actual IP (v4 and v6) address. In my case this would be the IP address of the server hosting all my webapps.
my_domain.com ---> 123.456.789.10 (not a real IP that I own)
CNAME Record
Canonical Name or Alias: this is used to map a subdomain (e.g. www, app, etc.) to another part of the domain or to another top-level address. This retains the subdomain name in the address bar of your browser.
URL Redirect
This redirects (usually via an HTTP 301 Redirect) an address to another domain or location. (This turned out to be the option I needed for most of my sites.) A redirect naturally changes the URL in the browser's address bar.
Now traditionally you are supposed to use a CNAME for the second case. I just ended up using URL redirects everywhere to keep things simple, plus an A record for the main top-level domain pointing to my Linode servers.
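To make the terminology concrete, the first two look roughly like this in zone-file form (using the placeholder domain from above and a documentation-range IP):

```
my_domain.com.       3600  IN  A      203.0.113.10     ; A record: the bare domain points at the server's IP
www.my_domain.com.   3600  IN  CNAME  my_domain.com.   ; CNAME: the www subdomain aliases the bare domain
```

A URL redirect, on the other hand, is not really a DNS record at all; the provider typically points the name at a small web server of theirs that answers with the HTTP 301.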
Check Your Nameservers!!!
The setup turned out to be really simple, but at first I could not get any of my changes to work. Or rather, some of them worked, others did not. It was very frustrating at first, but then the easyDNS support rep pointed out that I had not updated my nameservers for some of the domain names I transferred over. I was originally pointing to DreamHost's nameservers rather than easyDNS's, so my changes simply were not propagating through. Once I fixed that, everything started updating as expected.
Finally, I just want to make some recommendations for anyone looking for a hosting or domain provider. I started off using DreamHost, after migrating away from GoDaddy, and I was happy with them for the longest time. They are convenient, easy to set up (especially for PHP apps), and pretty supportive. I highly recommend them if you have a normal website (like a WordPress blog) and want a one-stop shop.
Personally I outgrew DreamHost, when I needed something more configurable for my Python webapps. I’ve since migrated to Linode, who provide very nice, configurable and affordable VPS (virtual private server) hosting. I love using them and they support a wide variety of different OS platforms and versions.
Finally, I recommend easyDNS. They're great for Canadians, supportive, and they care about your Internet freedoms (their takedown policy requires a real court order, or you being caught doing something blatantly illegal, rather than some flimsy takedown letter from some random legal department). I really recommend them if you want flexible DNS/domain hosting. The problems I encountered were of my own doing and due to my lack of understanding, and the support rep helped me resolve them within a few hours and after a few tries.
Well it looks like 2016 is off to a nice start. This weekend I submitted two proposals to talk at PyCon US 2016! And yes, I have already bought tickets, even though I am not quite sure how I’ll get to Portland, Oregon. 😛
But I thought why not? I had a lot of fun at PyCon last year in Montreal, and I’ve never been to Portland… and talking at PyCon Canada was actually quite fun. Now I don’t know if my proposals will get accepted or not. I will say that writing proposals is not fun. But fingers crossed maybe I’ll see you guys in Portland this spring… and maybe I’ll get a chance to talk as well.