Sunday, September 6, 2015

Upgrading Fedora Linux 20 (Heisenbug) to 21 & 22

Although I've recently switched to Ubuntu for my work, I'm still using Fedora (dual-boot with Windows 7) on my home workstation. I've been running Fedora 20 ("Heisenbug", horrible name) since mid-2014, so my installation was pretty aged - and with the release of Fedora 22 in May, F20 has been end-of-life since June this year. So it was definitely time for an update.

When I moved previously to F20 I did a fresh install (although I was able to keep my /home from the previous F16 installation). However this time I decided to go with an upgrade using FedUp (FEDora UPgrader), which was especially attractive since it promised to also update the installed packages - meaning that I wouldn't need to start over again with installing and configuring all the applications and tools that I use.

The upgrade procedure is covered pretty comprehensively in the FedUp documentation at https://fedoraproject.org/wiki/FedUp. As I was two releases adrift I decided to perform the upgrade twice (i.e. going from F20->F21->F22) as recommended. This seemed the safest route, especially given the significant changes introduced with each of these releases, specifically:
  • Fedora 21 introduced the idea of "products", which can be thought of as sub-releases specifically targeted for particular user needs. When upgrading from F20, you need to specify which product sub-release you're moving to: the available options are "workstation" (which looks like the best choice for desktop use), "server" or "cloud".
  • Fedora 22 switched from the old yum package manager to a newer version called dnf ("dandified yum") - see https://fedoraproject.org/wiki/Changes/ReplaceYumWithDNF
(As an aside, another change is that from F21 the releases will no longer have names - so F21 is just "Twenty One", and F22 is "Twenty Two".)

The upgrade process for each revision was then:

1. Install the fedup package:

sudo yum install fedup

2. Perform all updates & back up your system (my preference is still for CloneZilla Live)

3. Perform a network-based upgrade (the recommended method over e.g. upgrading from an ISO image):

sudo yum update fedup fedora-release
sudo fedup --network 21 --product=workstation

(As noted earlier, for F20-to-F21, you need to specify which product line you are moving to - hence the --product option above. For F21-to-F22 the second line can be reduced to simply sudo fedup --network 22)

4. Reboot and select "System Upgrade (fedup)" from the GRUB menu, which will boot into an upgrader environment and then churn through all the packages. (Note that this can take some time: for F20-to-F21 the upgrade has a textual display, and for me the screen would go black periodically during this process; however, moving the mouse seemed to restore the display. For F21-to-F22 there is a nice graphical progress bar; pressing F1 toggled between this and the textual display.)

5. If the previous step completes successfully then the system will reboot automatically and there should be an option for the new Fedora version in the boot menu. Select this to boot into the new version and check that it looks as you expect.
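Once you've booted into the new release, a quick sanity check is to look at the release file (the exact string will obviously depend on which version you've just upgraded to):

cat /etc/fedora-release

which after the final upgrade should report something like "Fedora release 22 (Twenty Two)".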

At this point the documentation outlines some post-upgrade clean-up actions that need to be performed manually:

1. Rebuild the RPM database:

sudo rpm --rebuilddb
sudo yum distro-sync --setopt=deltarpm=0
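(I'd assume that after the final move to F22 the equivalent command uses dnf rather than yum, i.e. sudo dnf distro-sync --setopt=deltarpm=0 - though I haven't verified this myself.)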

2. Install and run rpmconf to find configuration files where the installed version differs from the one shipped with the updated package (e.g. .rpmnew/.rpmsave files), and interactively decide what to do for each:

sudo yum install rpmconf
sudo rpmconf -a

(After upgrading to F22, replace the first line with:
sudo dnf install rpmconf
However, for me rpmconf crashed out with an error about a missing file when running the subsequent step, so I wasn't able to complete it.)

Overall the process was pretty painless, and after a week or so I haven't noticed any issues (other than my continued inability to get a working version of Google Chrome - but that problem predates this upgrade), with everything appearing to work at least as well as before.

Also, while Fedora 22 looks very much like Fedora 20, there have been some visual tweaks which I feel make it a bit nicer to look at than before (though this is perhaps a personal thing - see Chris Duckett's "A month with Fedora 22 leaves me hungry for 23" for a counter-view). For now I'm enjoying my "refreshed" Fedora experience.

Wednesday, August 26, 2015

Checking the RAID level with MegaCLI on Ubuntu 14.04

I recently installed Ubuntu 14.04 on a new PC (Dell Precision Tower 7910) and wanted to check the RAID configuration that the system had been set up to use. Unfortunately none of the BIOS or system configuration menus seemed to give me access to this information.

Luckily a colleague pointed me towards MegaCLI, a utility for obtaining information about (and troubleshooting) LSI RAID controllers. Instructions on how to install and use MegaCLI are given in this Nerdy Notes blog post: http://blog.nold.ca/2012/10/installing-megacli.html. However as this dates from 2012, doesn't cover use on Ubuntu, and doesn't include information on how to interpret the outputs from the utility, I've written up what I did below.

0. Check you're using a MegaRAID controller

Directly from Nerdy Notes: do

lspci -nn | grep RAID

and check that the output mentions a MegaRAID controller, e.g.

02:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID SAS-3 3008 [Fury] [1000:005f] (rev 02)

1. Obtain and install MegaCLI

The first problem is actually obtaining a copy of MegaCLI. According to the Nerdy Notes post, it should be available for download by searching on the LSI website, however when I tried this the download link was non-functional. Subsequently I've discovered that it can be located by searching on the Avago site (go to http://www.avagotech.com/ and enter "megacli" into the search box).

Opening the downloaded zip file reveals a set of subdirectories with versions of the utility for different OSes, with the Linux directory only holding a "noarch" RPM file. This should be straightforward to install using yum on Red Hat-based systems (such as Fedora), but for Ubuntu it's necessary to extract the MegaCLI executables using the rpm2cpio utility:

rpm2cpio MegaCli-8.07.14-1.noarch.rpm | cpio -dimv

(The rpm2cpio utility can be installed via the rpm2cpio package using Synaptic or apt-get.)

This should pull out the MegaCli64 executable. Note that it doesn't need to be installed in any special location - you can run it from wherever it was extracted to - but you do need to run it with superuser privileges, otherwise the output is blank.
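If you're not sure where the executable has ended up after the extraction, a quick find from the directory where you ran rpm2cpio should locate it (the exact path depends on how the RPM lays out its contents):

find . -name "MegaCli*" -type f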

2. Run MegaCli64 to probe the RAID information

To get information on the adapter and see which RAID levels the system supports, do:

sudo MegaCli64 -AdpAllInfo -aAll

and grep the output for "RAID"; this should give you output of the following form:

RAID Level Supported            : RAID0, RAID1, RAID5, RAID00, RAID10, RAID50, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span

Then to get information on which RAID level is actually being used:

sudo MegaCli64 -LDInfo -Lall -aAll

and again grep for "RAID":

RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3

This doesn't explicitly name a "standard" RAID level, but I found this ServerFault post which gives details on how to interpret the output: http://serverfault.com/questions/385796/how-to-interpret-this-output-from-megacli

In this case "Primary-5, Secondary-0, RAID Level Qualifier-3" turns out to be equivalent to RAID 5
(https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5), which is suitable for my needs - so I was able to rest a bit easier.
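For reference, the two probes above can be piped straight through grep to pull out just the relevant lines (assuming MegaCli64 is run from wherever you extracted it):

sudo MegaCli64 -AdpAllInfo -aAll | grep "RAID Level Supported"
sudo MegaCli64 -LDInfo -Lall -aAll | grep "RAID Level"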

I'd like to acknowledge my colleague Ian Donaldson for tipping me off about MegaCLI.

Monday, August 24, 2015

Enabling correct monitor drivers on Ubuntu 14.04 LTS

Having recently installed Ubuntu 14.04 LTS on a new PC, I had some trouble configuring the display to operate correctly with a Dell 24" monitor: specifically, the monitor's native resolution of 1920x1200 (16:10 aspect ratio) didn't seem to be supported by the default install from the Live CD. In fact it was more than a little unstable (to say the least: for example, attempting to change the display resolution rendered the display unusable several times, necessitating a complete reinstall to recover).

This appeared to be a missing driver issue, but it was difficult to find out which drivers were needed. Searching on the web for the specific problem was rather fruitless, and the many suggestions to look at xrandr (for example https://wiki.ubuntu.com/X/Config/Resolution) didn't work for me either. In the end I stumbled across a post which suggested using the command:

ubuntu-drivers devices

from a terminal, which lists all the devices that need drivers along with the packages that provide them. For example, for my system the output looks like:

== /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0 ==
vendor  : NVIDIA Corporation
modalias : pci:v000010DEd000013BAsv000010DEsd00001097bc03sc00i00
driver  : nvidia-346-updates - distro non-free
driver  : nvidia-346 - third-party free
driver  : nvidia-340 - third-party free
driver  : nvidia-352 - third-party free
driver  : xserver-xorg-video-nouveau - distro free builtin
driver  : nvidia-349 - third-party non-free
driver  : nvidia-355 - third-party free recommended
driver  : nvidia-340-updates - distro non-free

When I ran this it marked 'nvidia-349' as the recommended driver (nb the output above is from a subsequent run of ubuntu-drivers and doesn't show the mark). This was easily installed using:

sudo apt-get install nvidia-349

and appeared to solve my issues. Unfortunately I haven't been able to find the post again that originally suggested this, however if you are experiencing similar resolution issues then it might be worth a try before grappling with xrandr.
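As an aside, ubuntu-drivers can also install whichever driver it marks as recommended in a single step, which might be a quicker route if you don't care about choosing a specific version (I haven't tried this myself, so treat it as a suggestion rather than a tested recipe):

sudo ubuntu-drivers autoinstall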

Sunday, August 23, 2015

Experiences installing Ubuntu 14.04 LTS on UEFI system with Boot Repair

I recently set up a new desktop PC (Dell Precision Tower 7910) for my work computer. For this new machine I've decided to move away from Fedora Linux (which I've used for the past four years) and go with the latest Ubuntu LTS release (14.04 "Trusty Tahr"). (While I've enjoyed using Fedora, the relatively high turnover of releases has been a nuisance - so the thought that I'd only need to do one install over the lifespan of this machine was an attractive one.)

Although this was a new machine with Windows pre-installed, I decided to trash this and install Ubuntu over it as a single-boot setup (my preference is to run Windows inside a virtual machine on the Linux box, rather than dual booting). The system was also already set up to use UEFI (Unified Extensible Firmware Interface, a replacement for traditional BIOS firmware); however, according to the official Ubuntu documentation for UEFI (https://help.ubuntu.com/community/UEFI), for a single-boot system either UEFI or "legacy" boot modes can be used.

So if you don't really care which mode is used, the section on Installing Ubuntu for Single Boot with a Random Boot Mode suggests that the easiest approach is basically to just try installing from the Live media and see what happens. This was the approach I took, installing from the Live CD and setting up my own partitioning scheme with /home separated from the rest of the system.

Initially, attempting to reboot post-installation failed with no bootable devices being found. However it's possible to repair this quite easily and complete the installation using the Boot Repair utility. The basic recipe for obtaining and running it is summarised in the post at http://askubuntu.com/a/604623 - but unfortunately the version from the repo given there (yannubuntu/boot-repair) didn't seem to work for Ubuntu 14.04; instead it's necessary to get it from a different location (kranich/cubuntu; see http://ubuntuforums.org/showthread.php?t=2277484).

Based on the above posts, I restarted from the Live CD and selected "Try Ubuntu" to boot from the CD. Once the system was ready I opened a terminal window and did:

sudo add-apt-repository ppa:kranich/cubuntu
sudo apt-get update
sudo apt-get install -y boot-repair
sudo boot-repair

which brings up the Boot Repair GUI. I selected the "Recommended Repair" option, and once this had completed I restarted the system again without the Live CD. This time Ubuntu booted okay directly from the hard drive, and the installation was complete.

Addendum: it looks like the system ended up being installed in UEFI mode, which according to the Ubuntu docs is indicated by the existence of the /sys/firmware/efi directory on the installed system.
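A quick way to confirm this from a terminal is to test for that directory (a simple one-liner; the directory is only present when the system was booted via UEFI):

[ -d /sys/firmware/efi ] && echo "Booted in UEFI mode" || echo "Booted in legacy BIOS mode"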

Saturday, August 15, 2015

Keeping a GitHub fork up-to-date with the original repo

Forking is a standard way to make changes to a third party repository on GitHub. Typically a developer makes a fork (essentially a clone on the GitHub server side) of someone else's repo in order to make their own changes to the project. These changes might be for private use, or they could subsequently be submitted back to the original repo for evaluation and possible inclusion via the pull request mechanism.

However: once the fork is created, any further commits or other changes made to the original repo will not automatically be reflected in the fork. This post describes how to keep the fork up-to-date with the original repository (which is normally referred to as the upstream repo) by pulling in the changes manually via a clone of the fork, using the process described below.

The update procedure is:

0. Make a clone of your fork onto your local machine (or use one you've already made), and move into the cloned directory.

1. Add a new remote repository to the clone which points to the upstream repo. The general syntax to do this is:

git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git

A remote is simply a version of the project which is hosted on the Internet or elsewhere on the network, and a repo can have multiple remotes (use git remote -v to list all the ones that are defined; you should see that origin is also a remote). Conventionally the remote used for this syncing is given the name upstream, however it could be called anything you like.

For example:

git remote add upstream https://github.com/fls-bioinformatics-core/genomics

This step only needs to be done once for any given clone.
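After adding the remote, git remote -v should list both remotes, something along these lines (a sketch: the upstream is the example repo above, while the origin URL for your fork is hypothetical):

git remote -v
origin    https://github.com/YOUR_USERNAME/genomics (fetch)
origin    https://github.com/YOUR_USERNAME/genomics (push)
upstream  https://github.com/fls-bioinformatics-core/genomics (fetch)
upstream  https://github.com/fls-bioinformatics-core/genomics (push)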

2. Update the local clone by fetching and merging in any changes from the upstream repo, using the procedure:

git fetch upstream
git checkout master
git merge upstream/master


If you have made your changes in your own dedicated branches and avoided making commits to the fork's master branch then the upstream changes should merge cleanly without any conflicts. (It's considered best practice that changes made to the fork should always be done in a purpose-made branch; working directly in master makes it harder to make clean pull requests and can cause conflicts when trying to merge the upstream changes into your fork.)
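For example, before starting a new piece of work you would create and switch to a dedicated branch (the name my-new-feature below is just a placeholder):

git checkout master
git checkout -b my-new-feature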

3. Push the updated master branch back to your fork on GitHub using:

git push origin master

Once again, if you have avoided committing local changes to master then the push should be drama-free.

Finally, you will need to repeat steps 2 and 3 in order to stay up to date whenever new changes are made to the upstream repo.

Monday, December 9, 2013

Book Review: "Instant Flask Web Development" by Ron DuPlain

"Instant Flask Web Development" by Ron DuPlain (Packt Publishing http://www.packtpub.com/flask-web-development/book) is intended to be an introduction to Flask, a lightweight web application framework written in Python and based on the Werkzeug WSGI toolkit and Jinja2 template engine.

The book takes a tutorial-style approach, building up an example appointment-management web application using Flask and introducing various features of the framework along the way. As the example application becomes more complicated, additional Python packages are covered which are not part of the Flask framework (for example SQLAlchemy for managing interactions with a database backend, and WTForms for handling form generation and validation), along with various Flask extensions that can be used for more complicated tasks (for example managing user logins and sessions). The final section of the book gives an overview of how to deploy the application in a production environment, using gunicorn (a Python WSGI server) and nginx.

Given its length (just short of 70 pages) the book is quite ambitious in the amount of ground that it attempts to cover, and it's quite impressive how much the author has managed to pack in whilst maintaining a light touch with the material. So while inevitably there is a limit to the level of detail that can be fitted in, there are some excellent and concise overviews of many of the topics which could act as good starting points for more experienced developers (for me the section on SQLAlchemy is a particular highlight). Overall the pacing of the book is also quite sprightly and conveys a sense of how quickly and easily Flask could be used to build a web application from scratch.

The flipside of the book's brevity is that it cannot possibly contain everything that a developer needs to know (although this is mitigated to some extent by extensive references to online documentation and resources). In this regard it is really more a showcase for Flask, and is best viewed as a good starting point for someone wishing to quickly get up to speed with the framework's potential. I'd also question how suitable this is for newcomers to either Python, or to web programming in general - I felt that some of the concepts and example code (for example the sudden appearance of a feature implemented using Ajax) might be a bit of a stretch for a novice. Also there are some occasional frustrating glitches in the text and example code which meant it took a bit of additional reading and debugging in places to get the example application working in practice.

In summary then: I'd recommend this book as a good starting point for developers who already have some familiarity with web application development, and who are interested in a quick introduction to the key components of Flask and how they're used - with the caveat that you will most likely have to refer to other resources to get the most out of it.

Disclosure: a free e-copy of this book was received from the publisher for review purposes; this review has also been submitted to Amazon. The opinions expressed here are entirely my own.

Friday, April 19, 2013

Software Carpentry Bootcamp Manchester

I've spent the last two days as a helper at a Software Carpentry Bootcamp held at the University of Manchester, and it's been really interesting and fun. Software Carpentry is a volunteer organisation which runs the bootcamps with the aim of helping postgraduate students and scientists become more productive by teaching them basic computing skills like program design, version control, testing, and task automation. Many of the materials are freely available online via the bootcamp's GitHub page: along with transcripts of some of the tutorials there are some excellent supporting materials, including hints and tips on common Bash and editor commands (there's even more on the main Software Carpentry website).

The bootcamp format consisted of short tutorials alternating with hands-on practical exercises, and as a helper the main task was to support the instructors by offering assistance to participants if they found themselves stuck for some reason in the exercises. I'll admit I felt some trepidation beforehand about being a helper, as being put on the spot to debug something is very different to doing it from the relaxed privacy of my desk. However it turned out to be both a very enjoyable and very educational experience; even though I consider myself to be quite a proficient and experienced shell and Python programmer, I learned some new things from helping the participants both with understanding some of the concepts and with getting their examples to work.

There were certainly lots of fresh insights and I learned some new things from the taught sessions too, including:
  • Bash/shell scripting: using $(...) instead of "backtick" notation to execute a command or pipeline within a shell script (there's a short example after this list);
  • Version control: learning that Bitbucket now offers free private repositories (and a reminder that git push doesn't automatically push tags to the origin, for that you also need to explicitly use git push --tags);
  • Python: a reminder that slice notation [i:j] is inclusive of the first index i but exclusive of the second index j, and independently that string methods often don't play well with Unicode;
  • Testing: a reminder that writing and running tests doesn't have to impose a big overhead - good test functions can be implemented just with assert statements, and by observing a simple naming convention (i.e. put tests in a test_<module>.py file, and name test functions test_<name>), Python nose can run them automatically without any additional infrastructure.
  • Make: good to finally have an introduction to the basic mechanics of Makefiles (including targets, dependencies, automatic variables, wildcards and macros), after all these years!
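As a quick illustration of the $(...) notation mentioned above (a trivial sketch - the backtick form would give the same results, but $(...) is easier to read and can be nested):

today=$(date +%Y-%m-%d)
nfiles=$(ls | wc -l)
echo "Found $nfiles item(s) in $(pwd) on $today"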
As a helper I really enjoyed the bootcamp, and from the very positive comments made by the participants both during and at the end it sounded like everyone got something valuable from the two days - largely due to the efforts of instructors Mike Jackson, David Jones and Aleksandra Pawlik, who worked extremely hard to deliver excellent tutorials and thoroughly deserved the applause they received at the end. (Kudos should also go to Casey Bergman and Carole Goble for acting as local organisers and bringing the bootcamp to the university in the first place.) Ultimately the workshop isn't about turning researchers into software engineers but rather getting them started with practices and tools that will support their research efforts, in the same way that good laboratory practices support experimental research. (This isn't an abstract issue: there can be very real consequences, as demonstrated by the cases of Geoffrey Chang, McKitrick and Michaels, and the Ariane 5 rocket failure - the latter resulting in a very real "crash".)

If any of this sounds interesting to you then the Software Carpentry bootcamp calendar shows future events planned in both Europe and the US, so it's worth a look to see if there's one coming up near your location. Otherwise you could consider hosting or running your own bootcamp. Either way I'd very much recommend taking part to any researchers who want to make a positive impact on their work with software.