Django toolchain on Debian

Although Django is well packaged for Debian, I've recently come to the conclusion that the packages are really not what I want. The problem is that my server runs Debian stable, while my development laptop runs unstable, and Django revisions definitely fall into the "unstable" category. There is really no way to use a system Django 1.1 on one side, and a system Django 1.0 on the other.

After a bit of work, I think I've got something together that works, and I post it here in the hope it is useful for someone else. This info has been gleaned from a number of similar references.

This is aimed at running a server using Debian stable (5.0) for production and an unstable environment for development; you actually need both to get this running. It is based on a project called "project" that lives in /var/www.

  1. First step is to install python-virtualenv on both.

  2. Create a virtualenv on both, using the --no-site-packages option to make it a stand-alone environment. This is like a chroot for Python.

    $ virtualenv --no-site-packages project
    New python executable in project/bin/python
    Installing setuptools............done.
  3. The unstable environment has a file you'll need to copy into the stable environment: bin/activate_this.py. The stable version of python-virtualenv isn't recent enough to include this file, but you need it to essentially switch the system Python into the chrooted environment. This will come in handy later when setting up the webserver.

  4. There are probably better ways to keep the two environments in sync, but I simply take a manual approach of doing everything twice, once in each. So from now on, do the following in both environments.

  5. Activate the environment

    /var/www$ cd project
    /var/www/project$ . bin/activate
    (project) /var/www/project$
  6. Use easy_install to install pip

    (project) /var/www/project$ easy_install pip
    Searching for pip
    Best match: pip 0.4
    Processing pip-0.4.tar.gz
    Running pip-0.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Wu9O-U/pip-0.4/egg-dist-tmp-xjSdxq
    warning: no previously-included files matching '*.txt' found under directory 'docs/_build'
    no previously-included directories found matching 'docs/_build/_sources'
    zip_safe flag not set; analyzing archive contents...
    pip: module references __file__
    Adding pip 0.4 to easy-install.pth file
    Installing pip script to /var/www/project/bin
    Installed /var/www/project/lib/python2.5/site-packages/pip-0.4-py2.5.egg
    Processing dependencies for pip
    Finished processing dependencies for pip
  7. Install setuptools, also using easy_install (for some reason, pip can't install it). There is a trick here: you need to specify at least version 0.6c9, or there will be issues with the Subversion checkout on Debian stable when you try to get Django in the next step.

    (project) /var/www/project$ easy_install setuptools==0.6c9
    Searching for setuptools==0.6c9
    Best match: setuptools 0.6c9
    Processing setuptools-0.6c9-py2.5.egg
    Moving setuptools-0.6c9-py2.5.egg to /var/www/project/lib/python2.5/site-packages
    Removing setuptools 0.6c8 from easy-install.pth file
    Adding setuptools 0.6c9 to easy-install.pth file
    Installing easy_install script to /var/www/project/bin
    Installing easy_install-2.5 script to /var/www/project/bin
    Installed /var/www/project/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg
    Processing dependencies for setuptools==0.6c9
    Finished processing dependencies for setuptools==0.6c9
  8. Create a requirements.txt with the path to the Django Subversion repository for pip to install, and then install it.

    (project) /var/www/project$ cat requirements.txt
    -e svn+
    (project) /var/www/project$ pip install -r requirements.txt
    Obtaining Django from svn+ (from -r requirements.txt (line 1))
      Checking out to ./src/django
    ... so on ...
  9. Almost there! You can keep installing more Python requirements with pip if you need to, but we've got enough here to start.

  10. Create a file in /var/www/project called project-python.py. This will be the Python interpreter the webserver uses, and it basically exec's itself into the virtualenv. The file should contain the following:

    activate_this = "/var/www/project/bin/activate_this.py"
    execfile(activate_this, dict(__file__=activate_this))
    from django.core.handlers.modpython import handler
  11. Now it's time to start the Django project. I like to create a new directory called project, which will be the parent directory kept in the SCM with all the code, media, templates, database (if using SQLite), etc. To keep the two environments up-to-date I then simply svn ci on one side, and svn co on the other.

    (project) /var/www/project$ mkdir project
    (project) /var/www/project$ cd project
    (project) /var/www/project/project$ mkdir db django media www
    (project) /var/www/project/project$ cd django/
    (project) /var/www/project/project/django$ django-admin startproject myproject
  12. The last step is to wire up Apache to serve it all. The magic is making sure you specify the correct PythonHandler that you made before, so the virtualenv is used, and include the right paths so it can find the handler and the required Django settings.

    DocumentRoot /var/www/project
    <Location "/">
        SetHandler python-program
        PythonHandler project-python
        PythonPath "['/var/www/project/','/var/www/project/project/django/'] + sys.path"
        SetEnv DJANGO_SETTINGS_MODULE myproject.settings
        PythonDebug On
    </Location>
    Alias /media /var/www/project/project/media
    <Location "/media">
        SetHandler none
    </Location>
    <Directory "/var/www/project/project/media">
        AllowOverride none
        Order allow,deny
        Allow from all
        Options FollowSymLinks Indexes
    </Directory>

With all this, you should be up and running in a basic but stable environment. It's easy enough to update packages for security fixes, etc via pip after activating your virtualenv.
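As an aside, the activation trick the handler file relies on can be sketched in plain Python. This is a simplified illustration of what virtualenv's activate_this.py does, using this project's paths; the real script also fixes up sys.prefix and processes .pth files:

```python
import sys

def activate(env_root):
    """Simplified sketch of virtualenv's activate_this.py: put the
    environment's site-packages directory at the front of sys.path
    so its modules shadow the system-wide ones."""
    site_packages = env_root + "/lib/python2.5/site-packages"
    if site_packages not in sys.path:
        sys.path.insert(0, site_packages)
    return site_packages

# The webserver handler runs something equivalent before importing Django.
activate("/var/www/project")
```

Once the path is prepended, `from django.core.handlers.modpython import handler` resolves to the virtualenv's Django rather than any system copy.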

CDBS and .install files

Last night, after dropping a package from a control file, I was wondering why my package.install file for the remaining package seemed to be ignored (upstream installs a bunch of stuff that I don't want in the Debian package; the symptom was that all that junk was making its way into the package). It turns out it comes down to the following logic in CDBS:

ifeq ($(words $(DEB_ALL_PACKAGES)),1)
        DEB_DESTDIR = $(CURDIR)/debian/$(strip $(DEB_ALL_PACKAGES))/
else
        DEB_DESTDIR = $(CURDIR)/debian/tmp/
endif

i.e. if there is only one package, then by default install into debian/packagename, so whatever make install does is what you end up with in your package. Although I can see the reasoning behind this, it wasn't what I wanted, since I need the package installed into a temporary location (i.e. debian/tmp) from which a .install file then pulls out only those files I want. The solution is simple: override DEB_DESTDIR to $(CURDIR)/debian/tmp/ in rules.
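For the record, the override amounts to one line in debian/rules; the includes below are just the usual CDBS boilerplate and will differ depending on your build class:

```make
# debian/rules (fragment)
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/autotools.mk

# Even with a single binary package, install into debian/tmp so that
# debhelper picks out only the files listed in package.install.
DEB_DESTDIR = $(CURDIR)/debian/tmp/
```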

I hope this saves someone the half hour or so I spent investigating why my install file was "corrupt"!

Iceweasel to Firefox User-Agent

If you use Iceweasel and a website complains you don't have Firefox® (such as the Google Toolbar installation page), download the User Agent Switcher plugin and use it to import a Firefox user-agent specification. You can then "fake" being a plain old Firefox® running on Linux; the Iceweasel site provides one that fakes Firefox® on Windows XP should you need that.
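Alternatively (a different approach than the plugin, so treat this as a sketch), you can override the preference directly in about:config or user.js; the UA string below is only an example, adjust it to the Firefox version you want to claim:

```js
// ~/.mozilla/firefox/<profile>/user.js
// general.useragent.override replaces the reported UA string wholesale.
user_pref("general.useragent.override",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.4) Gecko/20061201 Firefox/2.0.0.4");
```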

CDBS + Autotools + Python

There is a bit of an art to getting CDBS to build packages with Python extensions made with autotools, so hopefully this will help someone. This might be appropriate where you have a library which ships with some Python modules you wish to package.

The basic idea is to re-build separate trees for the various Python versions you wish to support, and then install these separate builds to the same place before creating the final package.

  1. Firstly, add an XS-Python-Version line to the top of your control file with the Python versions to support, such as XS-Python-Version: >= 2.3. Then get this into a variable in your rules file with pyversions:

    PY_VERSIONS = $(shell pyversions --requested debian/control)

    You'll also want build dependencies on python-all-dev and python-central (>= 0.5).

  2. Your Python package stanza also needs to be set up with the correct substitution variables so it gets the right dependencies, etc, something like

    Package: python-package
    Architecture: any
    Section: python
    Depends: ${shlibs:Depends}, ${misc:Depends}, ${python:Depends}
    Provides: ${python:Provides}
    XB-Python-Version: ${python:Versions}
    Description: Python bindings for blah
     Description here
  3. Now the magic is to get multiple builds, one for each requested version of Python. The first thing to do is add a configure/python-packagename target, which will get called early in the package creation phase. In standard make we can have this target depend on "stamp" files for each version of Python we need to support. For example

    configure/python-packagename:: $(addprefix configure-stamp-, $(PY_VERSIONS))

    which expands to something like

    configure/python-packagename:: configure-stamp-2.3 configure-stamp-2.4 configure-stamp-2.5
  4. Now we make an implicit rule for anything called configure-stamp-%, which will set up a separate build directory for each Python version.

    configure-stamp-%:
            mkdir build-$*
            cd build-$* && PYTHON=`which $*` $(DEB_CONFIGURE_SCRIPT_ENV) \
                $(DEB_CONFIGURE_SCRIPT) \
                    $(DEB_CONFIGURE_NORMAL_ARGS) \
                    --disable-maintainer-mode \
                    $(cdbs_configure_flags) \
                    $(DEB_CONFIGURE_EXTRA_FLAGS)
            touch $@

    You may want to tweak this, and it may be a bit fragile if CDBS decides to change the names of things.

  5. Now that we have our extra versions configured, we have to organise actually building them. Do the same thing to create some build stamps, and in the implicit build-stamp rule change to the appropriate build directory.

    build-stamp-%: configure-stamp-%
            make -C build-$*
            touch $@
    build/python-package:: $(addprefix build-stamp-, $(PY_VERSIONS))
  6. Do the same thing for the install step; hopefully what happens in this stage is that each build puts its shared libraries in /usr/lib/python2.*/site-packages/. Then it is a matter of listing them in python-package.install to be copied.

    install-stamp-%: build-stamp-%
            make -C build-$* install DESTDIR=$(CURDIR)/debian/tmp
            touch $@
    install/python-package:: $(addprefix install-stamp-, $(PY_VERSIONS))
  7. To set up the right variables, etc, you need to call dh_pycentral for your package at binary-install time. See the Python Policy for details on what this will do, especially if you are shipping Python files which need to be byte-compiled.

  8. Finally, clean up by extending the clean rule.

    clean::
            -rm -rf $(addprefix build-, $(PY_VERSIONS))
            -rm -rf $(addprefix configure-stamp-, $(PY_VERSIONS))
            -rm -rf $(addprefix build-stamp-, $(PY_VERSIONS))
            -rm -rf $(addprefix install-stamp-, $(PY_VERSIONS))
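Putting step 7 into rules form, the hook can be as small as this fragment (the package name here is the placeholder used throughout):

```make
# debian/rules (fragment): run dh_pycentral for the Python package at
# binary-install time so ${python:Depends}, ${python:Provides} and
# ${python:Versions} in the control file get filled in.
binary-install/python-package::
        dh_pycentral -ppython-package
```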

Thanks to Sebastien Bacher, who I think came up with this scheme originally for gnome-menus.

Debian usage by Architecture

There are some attempts to show Debian usage by architecture via mirror statistics. Dirk Eddelbuettel writes

Lastly, some concerns were raised about various biases from local mirrors, web caches, multiple installs and what have you. These are fair questions as they all affect how Debian is obtained, installed and updated. But for as long as we don't know why that should be different across architectures, this is not a concern for the question at hand. The concerns reflect uncertainty about the absolute level of users, but barring additional information (or hypotheses), they do not affect the distribution of users across architectures, which is what this exercise is about in the first place.

I don't see how this can be right. For example, I use an IA64 desktop, so does one other person in my group, and we run two IA64 production servers (this isn't counting the many development boxes). These have all been pulled from a caching apt-proxy on one machine. So my updates at home on my PowerPC would be seen, but only one of the IA64 updates was (the first one to hit the apt-proxy cache), even though relatively there are more IA64 users. So how can mirror statistics really show you anything?

I'd think the types of people running IA64 boxes will be at large institutions, generally updating via local mirrors somehow. They're also likely to have many boxes, rather than just one or two. So I'd seriously doubt the stats for IA64 mean much.

Debian release checklist

  1. Scan through the code to ensure that it does not pull in code from elsewhere (e.g. an embedded library) that may be under a different license
  2. lintian
  3. linda
  4. pbuilder