Markdown Documentation with Sphinx

Let's take a break from setting up CouchDB in Rookeries and discuss
documentation.

I recently made the switch to using Markdown for the majority of the
prose-style documentation for Rookeries. Originally I wanted to support both
reStructuredText and Markdown. However, for reasons I'll write about below, I
will concentrate on supporting Markdown in Rookeries.

Requirements

What do I expect from documentation for Rookeries? I want an automated setup
that makes it easy to write prose documentation and API-level documentation,
and that lets the documentation double as a test fixture. There is no need to
duplicate effort maintaining two sets of documentation: one for the code and
one as a test sample.

Avoiding Duplication – Unifying Documentation and Test Fixtures

Some of the tests for Rookeries require actual content living inside a
database. This is an excellent way to dogfood Rookeries, by forcing it to
handle some of the content it will have to support. Currently the test
fixtures live separately from the documentation. However, the actual fixture
refers to the path of the sample files as part of its setup. Whether this
path points to the test fixture folder or any other folder in the Rookeries
source tree is arbitrary. So why not have the same documents serve as both
project documentation and sample test data?
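As a rough sketch of the idea (the paths and names here are hypothetical, not the actual Rookeries layout), a test helper could load sample pages straight out of the documentation tree:

```python
import io
import os


def load_sample_page(docs_dir, name):
    """Read a Markdown page out of the documentation tree for use as test data.

    The same file doubles as project documentation and as a test fixture.
    """
    path = os.path.join(docs_dir, name + '.md')
    with io.open(path, encoding='utf-8') as src:
        return src.read()
```

Pointing docs_dir at the documentation folder instead of a dedicated fixtures folder is all it takes, since the fixture only cares about a path.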

Keeping API Documentation

However, I still want API documentation, not only for my sake but to allow
future contributors to extend Rookeries or build plugins for it. Prose
documentation alone is not enough. So I wanted to keep my current Sphinx
autodoc setup, or have something similar parse the docstrings in my code and
generate gorgeous API documentation.
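For reference, this is roughly what the autodoc setup boils down to: a directive in an .rst file that pulls in the docstrings from a module (the module name below is just a placeholder, not an actual Rookeries module):

```rst
.. automodule:: rookeries.views
   :members:
   :undoc-members:
```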

Result

After some trials, I hit upon a way to support both Markdown and
reStructuredText in my Sphinx-powered documentation. The Markdown files can
still be referenced in the tests, so all my requirements were met. Huge thanks
to Eric Holscher (a dev on the Read the Docs site) for figuring this out
originally.

Alternatives

The alternatives were either to use a Markdown-only documentation generator
(mkdocs) or to support reStructuredText in Rookeries.

Why not Use mkdocs?

mkdocs is an awesome project that uses Markdown to generate documentation.
It works remarkably well, has a few nice themes, abstracts away lots of the configuration that Sphinx would require, and has a nice workflow for writing documentation. However, its API documentation story is lacking.

State of Autodoc API Documentation

The initial impression I got from mkdocs was excellent. However, when I
tried to use mkdocs for generating API documentation I ran into problems. Judging by the roadmap of mkdocs, there is not much desire to support API documentation. There is an experimental project to hook mkdocs up to Sphinx autodoc, but it did not work for me.

Why not Support reStructuredText in Rookeries?

Alternatively, I considered going to the other extreme and only supporting
reStructuredText. Aside from the fact that RST syntax is not always the
easiest to remember, I ran into some more technical challenges.

State of RST Frontend Clients

The first major issue is that there are no reStructuredText frontend clients.
There are lots of Markdown renderers for Javascript, but none for RST. While
I do plan on rendering content on the server side, I would prefer to have the
option of doing some of the rendering on the client side.

Working with Docutils in Python

Less of an issue, but more of an encumbrance, is working with reStructuredText
in Python. The docutils library is the standard way to convert RST into a number of formats. The documentation for docutils is horrible. I wish I could be more charitable, but it took me a good 30-45 minutes of poking around the docs to figure
out how to programmatically render reStructuredText into HTML:

import io

from docutils import core as docutils_core

with io.open('my_rst_sample.rst') as src:
    beta = src.read()

# Render the RST source into a complete HTML page.
docutils_core.publish_string(beta, writer_name='html')

Mind you, this only gets you as far as a full HTML document that you still
need to muck around with. I did get partial HTML rendering working earlier in
Rookeries' history, but it was not obvious or simple to get to. I don't need
docutils' grand, book-publishing-ready setup nor its command-line tools. I
just need something to render marked-up text into HTML for a blog.
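For completeness, docutils can render just a fragment via publish_parts, which returns a dictionary of pieces of the rendered page. This is a sketch based on my reading of the docs, so double-check the part names against your docutils version:

```python
from docutils import core as docutils_core


def render_rst_fragment(text):
    """Render reStructuredText into an HTML fragment, skipping the
    boilerplate of a complete HTML page."""
    parts = docutils_core.publish_parts(source=text, writer_name='html')
    # 'html_body' holds just the rendered document body.
    return parts['html_body']
```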

In the future I might wade into dealing with reStructuredText, but I will pass
on that for now. There are more important open issues with Rookeries than
which exact format to use.

Why not Use pandoc?

When the topic of rendering between markup formats comes up, so does pandoc.
Pandoc is a great tool for converting between various markup and document formats. And it does a decent job of translating RST to Markdown and back again:

pandoc -f rst -t markdown sample.rst -o sample.md

Now I don’t plan on relying on Pandoc for Rookeries. But I might require it
when I need to import and export data from non-Rookeries blogs and sources.

Markdown in Sphinx

The solution I finally settled on was making Sphinx render Markdown. I found a great article on configuring Sphinx to handle both reStructuredText and Markdown.

Setting up Sphinx to Render Markdown and reStructuredText

The setup is fairly straightforward. Install the recommonmark library, which
adds CommonMark (Markdown) parsing support that Sphinx can use:

pip install recommonmark

Next add the following configuration to the Sphinx conf.py:

from recommonmark.parser import CommonMarkParser

# The suffix of source filenames.
source_suffix = ['.rst', '.md']

source_parsers = {
    '.md': CommonMarkParser,
}

Voilà! You can now mix and match Markdown and reStructuredText in your
Sphinx documentation. I would stick to RST when dealing with more
complicated macros for Sphinx (like the releases changelog add-on I use).
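For instance, a single toctree can then pull in both kinds of source files; Sphinx resolves each entry to whichever suffix exists (file names below are hypothetical):

```rst
.. toctree::
   :maxdepth: 2

   introduction
   api_reference
```

Here introduction.md and api_reference.rst would sit side by side in the same source directory.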

Markdown in WordPress

A final note: I want to work toward transitioning this site to Rookeries. So
I started playing around with writing all my blog posts in Markdown. I am
using Sublime Text as my editor. On the WordPress side, I found that
WP-Markdown is a nice WordPress plugin for writing content in Markdown and
then rendering it to HTML.

Using CouchDB in Rookeries – Part 2 – Setting Up a Remote CouchDB Server

Overview

In the second instalment of my series on adding CouchDB support to
Rookeries, I’ll be talking about how I provisioned CouchDB on my remote
server.

It may sound counter-intuitive that I would talk about creating and
populating CouchDB databases before writing about installing CouchDB. The
reason for this backwards step is that I already have CouchDB installed
locally. At my daytime job at Points we use CouchDB extensively, and I have
also worked with the Operations team there to provision CouchDB servers.
However, it is a different story when trying to provision and configure
CouchDB yourself on your own servers. This blog post details some of the
things I learned along the way.

Since the setup of Couch is a bit involved, I will divide this up over two blog
posts.

Provisioning Rookeries with Ansible

One of the stated goals of Rookeries is to create a developer-friendly
blogging platform that is easier to install and set up than WordPress.
That is a tall order for a Python WSGI app, since there is more setup
involved than just installing Apache and mod_php and unzipping WordPress
into a folder. (Even with WordPress there is more involved when doing a
proper and maintainable setup.)

So while putting up a production-ready Python WSGI app is more involved
technically, this does not mean the end-user needs to experience this.
That is where the Rookeries Ansible role comes into play. I created that
Ansible role to encapsulate the complexity of installing Rookeries. (This
role uses the nginx-uwsgi-supervisor Ansible role, which I wrote to handle
the actual setup of a WSGI app on a bare-bones Ubuntu server:
https://bitbucket.org/dorianpula/ansible-nginx-uwsgi-supervisor.) All of the
details concerning the setup and configuration of a CouchDB server for a
Rookeries installation are included in the Rookeries Ansible role.

Installing Latest CouchDB on Ubuntu Linux

I use the latest Ubuntu LTS (14.04) for both my development and
deployment environments. Having the same environment reduces the effort for
me to take Rookeries from development to production. However, the latest
version of CouchDB packaged for Ubuntu 14.04 is 1.5.0, and I wanted to use
the latest stable version of CouchDB. While upgrading between CouchDB
versions is straightforward, I know that I am less likely to upgrade to the
latest version once Rookeries stabilizes. And there is no point in starting
off with an older version of your database right from the start of a project.

Fortunately the CouchDB devs distribute the latest stable version of
CouchDB via a convenient PPA. The instructions on how to install CouchDB via
the PPA are right on the Launchpad page.

Installing via Console

# Add the PPA.
sudo add-apt-repository ppa:couchdb/stable -y
# Update the cached list of packages.
sudo aptitude update -y
# Remove any existing CouchDB binaries.
sudo aptitude remove couchdb couchdb-bin couchdb-common -yf
# Install the latest CouchDB.
sudo aptitude install couchdb

Provisioning via Ansible

The Rookeries Ansible role translates those instructions (minus the
removal of existing packages) to:

- name: add the couchdb ppa repository
  apt_repository: repo="ppa:couchdb/stable" state=present

- name: install couchdb
  apt: pkg={{ item }} state=present
  with_items:
    - couchdb
    - couchdb-bin
    - couchdb-common

Running CouchDB

Now that we have CouchDB installed, we need to control it like we would any
other service on a Linux server. Surprisingly enough, when I tried to find
the packaged CouchDB service scripts (using the service command), I did not
find anything!

> sudo service --status-all
# ... A lot of entries but no couchdb ...

It turns out that the CouchDB package comes with an Upstart script rather
than a traditional System V init script. (That in itself is probably not a
bad thing.)

> sudo status couchdb
couchdb start/running, process 5311
# There it is.

Starting and stopping services through Upstart is done via the ‘start’ and
‘stop’ commands. There are also ‘reload’ and ‘restart’ commands.

> sudo restart couchdb
couchdb start/running, process 15987

Side Note About Upstart vs Services vs Systemd

Update: I found an article that explains the evolution and the current situation of Linux service management. It explains things much better than I do and in much more detail. I learned quite a bit from it.

If you follow Linux developments and news, you might have heard about the development of and controversy around new init systems. I will try to explain these developments briefly here, since we are on the topic of service scripts.

The old System V style for service scripts (in /etc/init.d/ or /etc/rc.d/) is not flexible when it comes to managing dependencies and running outside of the prescribed run-levels that happen during boot and shutdown.
However, there is disagreement about what would be a better alternative. Upstart was Canonical/Ubuntu’s attempt to create a more flexible system for managing services. However, Debian and many other Linux distributions have recently switched over to another such system called systemd. Part of the controversy about systemd stems from its architectural design (which seems monolithic at first glance, as it tries to solve service management, logging and a few other seemingly unrelated system-level issues).

Another part of the controversy stems from how the project lead handled his previous project: PulseAudio. I will admit that my first experiences with PulseAudio were pretty rocky, and I missed how well plain old ALSA worked. However, these issues have since gone away, and I cannot think of any PulseAudio or other audio issues I’ve encountered in Linux recently. (Ironically, Windows 7 gives me more grief with sound issues than Linux nowadays.)

I personally don’t know enough about systemd to form an opinion. Sure, I am a bit anxious to see how this all plays out, but this is a case of wait and see. In the meantime, be aware that the exact semantics of how you interact with services will change in the near future.

Update #2: An interview with Lennart Poettering about systemd, its design and intentions

Provisioning with Ansible

Fortunately Ansible does not care which underlying service script setup is
used. The Ansible service module works with System V init scripts, Upstart
and systemd services without complaint.

In the Rookeries Ansible role, restarting the CouchDB service becomes a
single task:

- name: restart couchdb server
  service: name=couchdb state=restarted

Next Up

In the next blog post I’ll write up about configuring and securing
CouchDB.

Using CouchDB in Rookeries – Part 1 – Creating CouchDB Test Fixtures Using Bulk Updates

Back Story

I’ve been working on adding database persistence support to Rookeries. Instead of scribbling down my findings and losing them somewhere, I plan on documenting them and my thoughts in a series of blog posts.

In the case of Rookeries that means connecting to and storing all of the journal, blog and page content as CouchDB documents. Since I want to implement this properly, I intend on adding tests to make sure I can manage CouchDB documents and databases properly. Rather than writing a number of tests that mock out CouchDB, I want to use a test database along with known test data fixtures for my tests.

Python CouchDB Integration for Rookeries

When looking at different CouchDB Python binding libraries for Rookeries, I settled on py-couchdb. Manipulating CouchDB essentially means communicating with its REST API, so it is important that a Python binding library takes a sane approach to communicating with an HTTP REST API. Unfortunately, the more popular CouchDB-Python library uses only the Python standard library and implements its HTTP mechanism using the standard library’s unintuitive modules. In contrast, py-couchdb uses requests for querying the CouchDB server, making it a much more maintainable library.

Also, py-couchdb offers Python query views, which I very much enjoy using at work. I still need to verify how well the library’s Python query server works in practice, but I will write a future blog post about my findings. py-couchdb lacks CouchDB-Python’s mapping functionality, which behaves similarly to sqlalchemy’s ORM. However, I am still debating how I want to map between CouchDB documents and Pythonic domain objects.

Creating and Deleting CouchDB Databases

Creating and deleting a database in a CouchDB server amounts to issuing an HTTP PUT or DELETE request against the server. This REST API provides no safety net nor confirmation when deleting a database, so one needs to be careful. py-couchdb provides a nice and simple API to create or delete a database as well.

Using cURL

# Create a CouchDB database
curl -X PUT http://admin:password@localhost:5984/my_database/

# DELETE a CouchDB database
curl -X DELETE http://admin:password@localhost:5984/my_database/

Using py-couchdb

import pycouchdb

# Create a CouchDB database
server = pycouchdb.client.Server('http://admin:password@localhost:5984')
server.create('my_database')

# Delete a CouchDB database
server.delete('my_database')

Inserting Fixture Data

Now that I can create a temporary test database, I need to populate it with some test data. Fortunately, it turns out that CouchDB has a neat and fast way to insert data in bulk using its _bulk_docs API. With this API I can easily come up with a number of documents that I want to load as test data.

Fixture Data Format

The format for inserting a mass of documents is:

{
  "docs": [
    {"_id": "1", "a_key": "a_value", "b_key": [1, 2, 3]},
    {"_id": "2", "a_key": "_random", "b_key": [5, 6, 7]},
    {"_id": "5", "a_key": "__etc__", "b_key": [1, 5, 5]}
  ]
}

Note that adding an _id specifies the CouchDB ID for the document.
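Since the envelope is so simple, a tiny helper (my own sketch, not part of any library) can wrap a list of document dictionaries into the JSON body that _bulk_docs expects:

```python
import json


def make_bulk_payload(docs):
    """Wrap a list of document dicts in the {"docs": [...]} envelope
    that the _bulk_docs endpoint expects, serialized as JSON."""
    return json.dumps({'docs': docs})
```

The resulting string can then be POSTed to .../my_database/_bulk_docs with a Content-Type of application/json.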

Using cURL

# Bulk doc insert/update using the JSON data file.  One can also do this manually with a string.
curl -d @sample_data.json -X POST -H 'Content-Type: application/json' \
   http://admin:password@localhost:5984/my_database/_bulk_docs

Using py-couchdb

UPDATED: 2015-Aug-22 I was totally wrong about the format for doing bulk updates with py-couchdb. Rather than the JSON format needed for cURL, a simple list of Python dictionaries works with the save_bulk() method. I’ve updated the code example.

import io
import json

import pycouchdb

# Best practice for writing unified Python 2 and 3 compatible code is
# to use io.open as a context manager.
with io.open('sample_data.json') as json_file:
    my_docs = json.load(json_file)

server = pycouchdb.client.Server('http://admin:password@localhost:5984')
database = server.database('my_database')
# See my update note above about the format save_bulk expects.
database.save_bulk(my_docs['docs'])

Conclusion

And with that, I have what I need to have repeatable tests. Hopefully this will land in Rookeries in the next couple of days.

Revived

…and we’re back!  Or rather the site is, thanks to Eric who helps admin the VPS that this site runs on.

So much has happened in the meantime: PyCon Montreal, furthering my experience working on Python microservices + Docker + Ansible, and my dabbling in the startup and JS worlds.  And life in general moving forward, with friends getting married.

One of the lessons learned from this outage is to keep better backups and use automated configuration management when administering a site.  I’d love to talk about my Ansible playbooks, which are just now approaching the point where I have almost completely automated backups and deployments.  But I’ll do so at another time.

Ansible Role for NGINX, UWSGI and Supervisor Released!

What better way to start 2015 than to release new software?

As part of my efforts to create Rookeries, a modern Python-based CMS to replace my WordPress sites, I am releasing an Ansible role to make it easier to set up WSGI apps on a private server.

The nginx-uwsgi-supervisor role is available on Ansible Galaxy.  This role sets up the NGINX, uWSGI (WSGI app server) and supervisord infrastructure to make installing Rookeries or another WSGI app a breeze.  The goal is to make a Rookeries site as easy or easier to install and maintain than a WordPress site.

All the code for the role is hosted on Bitbucket, and mirrored on Github.

I am especially excited since this is my first ever fully functional open source release.  I hope people enjoy using it, and that it makes their lives easier when building webapps in Python.

…And We’re Back!

Or rather I am back.  As in I am going back to blogging.  I apologize for the months of silence.  Moving houses and migrating web hosting providers will do that to a person.  Migrating the web hosting to a completely self-managed environment was quite a learning process, and took quite a bit of time.  I did not realize at the time that my websites would be down for months.  Fortunately everything is back to normal now.

I won’t commit to posting on a regular schedule, since that is simply not realistic.  However, I missed quite a few excellent opportunities to blog in a timely manner, especially everything surrounding PyCon and all the new things I’ve learned since that time.  I will try to make up for that by writing articles about events, knowledge and ideas.

It is good to be back.

Now a Professional Pythonista at Points!

I have been working for the past month as a Software Development Engineer at Points International.  While my role is not officially that of a Python developer, a large portion of my work is building Python applications, services and libraries.  I also get to develop in Java and maintain some very well engineered systems, so I get to deal with both worlds.  Even after a month, I am super excited to work at such a cool company and with awesome people.  It really feels like a bit of a dream job, in terms of the technology I get to use (Python, Linux desktops and distributed version control systems, w00t!) and the processes (yes, Agile and proper software engineering totally work when done right).

But it is the people within the company that really make it shine.  I get to be surrounded by smart, savvy, and welcoming coworkers, including a number of important and active Pythonistas whom I look up to.  My team is just amazing and supportive, and I feel that in this short time span I’ve become a much better developer thanks to them.  Even on stressful days I feel motivated and excited to come to work and give it my all.  I feel incredibly lucky and fortunate to be at Points. 🙂

Distro Hopping

Sorry for the much delayed update; this year has been a hectic and busy one. (New job, new house, lots of random unexpected events along the way, like two funerals and two weddings in a single month, etc. Long story.) Plus I really hoped to change blog platforms, but that is a story for another time.

Explaining the Journey

With so many things changing in my life, I decided to change up the Linux distribution I’m running. I have a large set of requirements, being both a developer and a gamer. I need a distribution that can handle Python, Java, Android and Qt Linux development. I also want my distro to run Steam, and to handle the Nvidia Optimus graphics card in my laptop properly.

(Sidenote: a word to the wise, avoid Optimus cards as they are a pain to set up under Linux. I got mine because I naively assumed that all Nvidia cards are easily and nicely supported under Linux. Recently I heard that Nvidia promised to help the Nouveau devs make the Optimus experience under Linux nicer. But I would not hold my breath waiting for things to get better soon.)

Long Story Short

The shortest version of the story: after doing a fair bit of distro hopping, including using some uncommon distros, I am back to using Kubuntu.

Specifically the path I took was:
Kubuntu → openSUSE → Mageia → Debian → Linux Mint → Sabayon → Kubuntu

The rationale behind all this? Well read on. 🙂

Kubuntu → openSUSE

After hearing about Canonical’s plans to use their own display server “Mir” instead of “Wayland”, and experiencing random breakage with Kubuntu, I decided to change distros. When I heard that the main dev behind Kubuntu was not going to be funded by Canonical, I decided it was time to jump ship.

I decided to retrace my steps, and try new versions of distros that I used in the past. Technically before I started using Kubuntu I ran on Gentoo Linux. But I was not about to go back to compiling and configuring everything on my system. So my first stop was openSUSE.

SuSE, and now its community-driven variant openSUSE, has always been a very slick distro in terms of supporting KDE.  The version I was running was no different. I was also encouraged by the large number of packages available, including a nice setup for both Steam and bumblebee (the program that adds decent support for Nvidia Optimus under Linux).

openSUSE is a gorgeous distro overall, except for one very important issue: openSUSE feels like it was built for a corporate desktop. The number of PolicyKit warnings that I received whenever I tried to suspend and resume was surreal. While I am familiar with the lingo and ideas behind SELinux, AppArmor, etc., I could not for the life of me figure out how to get my laptop to suspend and resume without some silly PolicyKit message blocking me. openSUSE was not meeting my needs.

openSUSE → Mageia

With openSUSE failing me, I decided to go further back in time to my original distro, Mandrake/Mandriva. I found out that a Russian firm had bought out the French-made Mandriva and, as part of a general restructuring effort, laid off some of the maintainers. These maintainers started their own version of Mandriva called Mageia. While the distro and its infrastructure are still fairly young, I was encouraged by the fact that some experienced maintainers were behind the project.

I was amazed by the amount of polish put into a budding community-driven distro. I ran up against some rough edges with Python support, but those were resolved with some help and new updates. I was impressed, and I took my first steps toward becoming a maintainer myself. The community was very receptive and welcoming. While I ended up using Mageia for weeks, I did not stay with the distro.

Why didn’t I stay with Mageia? I could not get bumblebee running on my machine. I could have fought some more, learned how to maintain a package, and helped build out the distro. But after some introspection, I realized that I simply do not have the time to contribute as a maintainer to a distro. There is a lot of work involved, and considering everything going on in my life right now, I needed a distro I could rely on and work with right now.

Mageia → Debian

Debian seemed like the logical choice for a stable Linux. The distro is entirely community-driven, and has been around forever. So after a bit of haggling with the network installer, I managed to get a KDE desktop running on Debian. Debian definitely runs on mature, stable software, which is perfect for someone running a server or managing a desktop configuration that has been around for years. Unfortunately, the Linux desktop has only become very stable and usable in the past while. Also, the Debian community are sticklers when it comes to open source licenses and how legally distributable the software is. Unfortunately again, closed source firmware and other software makes things much more difficult. Getting my Broadcom wireless network card and my Nvidia graphics chip working was just not happening.

Also, I assumed that since Ubuntu worked so well, Debian would be just as well set up from the get-go. I realize now how much work Canonical put into configuring their Debian base and smoothing out all the wrinkles. I was not up for doing all that work myself just to stay with Debian.

Debian → Linux Mint

Debian stayed installed on my laptop for a mere two days before I got fed up with it. The next logical choice, to avoid Ubuntu but get some of the niceties of the platform, was to try out Linux Mint. One of my good friends runs it and she enjoys using it thoroughly. I also watched and read some good reviews about the latest stable release, Linux Mint 15, and how much polish the devs put into the KDE desktop. I was intrigued, so I tried it out.

Linux Mint 15 definitely has a lot of polish, though nothing that spectacular beyond what comes standard with KDE. Except for the extra System Settings panel to handle PPAs (private Ubuntu repos), which is pretty darn cool. I did run into issues trying to run packages originally meant for Ubuntu. There were slight and subtle incompatibilities, and I eventually gave up trying to fix things.

Linux Mint → Sabayon

By now I had run into a moment of madness. No good easy-to-use RPM-based distros remained to try out. Fedora sounded too experimental for my liking. The Debian universe had been pretty much a letdown. I debated using Netrunner, a KDE distro by Blue Systems. (Blue Systems being that weird German company that somehow funds KDE development on Ubuntu, Linux Mint KDE and Netrunner. But no one has an idea how they fund themselves. Maybe by European Union funds, which seems to be the popular way to fund nebulous entities and projects in Europe.)

So I had a moment of madness and despair, brought on by no new leads while looking at potential distros on DistroWatch (http://www.distrowatch.com/). In that moment I decided to try a system not based on the traditional package systems. That left systems in the Arch or Gentoo families. Arch itself fell into the too-much-maintenance category. Gentoo did as well. Manjaro looks promising, but I’ll wait until it matures or fades away due to its small team. So I tried Sabayon Linux, something I did not expect to do.

Sabayon Linux is definitely much nicer than Gentoo to maintain. Everything worked out of the box too. Except Sabayon felt very much like an early adopter’s hobbyist distro. An update or a new package installation downloaded half the universe. My laptop ran faster… and ate its battery so quickly that it would just shut down randomly while running on battery. I could run Steam and my development environments, just never without worrying about my laptop suddenly powering off.

I realized I could not continue on like this…

Return to Kubuntu

Now I am back to running Kubuntu, and everything just works well enough. I could have gone back to Mageia and hoped that the upcoming release of Mageia 4 would have resolved most of my issues. Ultimately I went back to Kubuntu, since for right now it works well enough and meets my needs.

I work with Ubuntu at my new workplace, plus I support a couple of other Kubuntu machines running at home. I no longer use the tools that caused me grief when some libraries changed in Ubuntu. For better or worse, support for new applications or hardware is targeted at Ubuntu. It is also a bit of a relief that Blue Systems stepped in and now funds development of Kubuntu. Canonical’s plans for transitioning to Mir still do not affect me, at least on my current version. This might change in the upcoming release, and I may be stuck on this version of Kubuntu for a while.

Or maybe things will change: maybe Canonical will change its mind and work with the Wayland community. Maybe Nvidia will fix up their terrible driver support due to market pressures. Or maybe I will have to move off to Mageia or Manjaro eventually. In the meantime I can be productive, and once things calm down again, maybe I’ll go on another round of distro hopping.

Update (2013 October 18): Just upgraded to Kubuntu 13.10 yesterday!  I am encouraged by the news that the Kubuntu devs will push forward on using Wayland and support Kubuntu into the future.  So it looks like I will continue using and enjoying Kubuntu well into the future.  Now I’ll just need to learn how to package DEBs, and I’ll be able to help out occasionally too. 🙂

Spring Cleaning for 2013

With Easter just around the corner and possibly spring coming shortly after (Canadians have to wait a bit longer for spring to properly arrive and winter to make her final exit), it would make sense to update my blog.  Many things have changed in the past few weeks.  Like we have a new pope, Pope Francis, just in time for Easter.  (I’m not going to weigh in with my opinions on the decision of the Conclave, other than that I have mixed feelings.  And each passing day does not ease my general feeling of unease.)  Some things have not changed.  Like most things in the world, I guess.

With the slow coming of warmer weather, I have a good excuse for a bit of spring cleaning and growing myself.  In terms of spring cleaning, I have been meaning to really organize my activities and my surroundings.  Unfortunately, since I had to make do without my laptop for a few weeks, that has not helped me get more things done.  Especially when it comes to dealing with my overflowing inbox.  Apologies to everyone expecting me to get back to them.  I’m getting there slowly.

I did get to play around with setting up Python in my hosting environment and with Clojure.  Clojure, while definitely useful, still feels more like an exercise in academics than industrial programming.  (Still, one can write a full implementation of Snake/Nibbles in Clojure in under 100 lines of code?  Madness!)  Python, on the other hand, is too much fun to feel like work.  I considered using a static website generator like Nikola or benjen to port some of my websites.  But I think for kicks, I will go the route of using Flask and craft my own mini-site, just because working with Python is such a joy.

One unfortunately necessary bit of spring cleaning will be changing Linux distros again.  It seems that Canonical is doing a fair bit of wild experimentation nowadays.  Too wild, and it smells like they are suffering from NIH (not-invented-here) syndrome.  The idea to chuck out everyone’s hard work on replacing X with Wayland, in favour of their own thing, was just too much.  So it looks like I’m going back to openSUSE for good.  It is just a matter of when I get around to migrating all my systems over.  I have no real issue with Canonical doing what they want with their own distro, Ubuntu.  I just don’t agree with the philosophy and the needless experimentation, especially since I am quite happy using a relatively standard KDE 4 desktop.

Hopefully once I finish all the spring cleaning, I’ll get to finish up and show off some of the projects I’ve been working on.