Wednesday, August 26, 2015

Checking the RAID level with MegaCLI on Ubuntu 14.04

I recently installed Ubuntu 14.04 on a new PC (Dell Precision Tower 7910) and wanted to check the RAID configuration that the system had been set up to use. Unfortunately none of the BIOS or system configuration menus seemed to give me access to this information.

Luckily a colleague pointed me towards MegaCLI, a utility for obtaining information about (and troubleshooting) LSI RAID controllers. Instructions on how to install and use MegaCLI are given in this Nerdy Notes blog post: http://blog.nold.ca/2012/10/installing-megacli.html. However, as that post dates from 2012, doesn't cover Ubuntu, and doesn't explain how to interpret the utility's output, I've written up what I did below.

0. Check you're using a MegaRAID controller

Directly from Nerdy Notes: do

lspci -nn | grep RAID

and check that the output mentions a MegaRAID controller, e.g.

02:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID SAS-3 3008 [Fury] [1000:005f] (rev 02)

1. Obtain and install MegaCLI

The first problem is actually obtaining a copy of MegaCLI. According to the Nerdy Notes post it should be available for download by searching on the LSI website; however, when I tried this the download link was non-functional. I've since discovered that it can be located by searching on the Avago site (go to http://www.avagotech.com/ and enter "megacli" into the search box).

Opening the downloaded zip file reveals a set of subdirectories with versions of the utility for different OSes, with the Linux directory holding only a "noarch" RPM file. This should be straightforward to install using yum on Red Hat-based systems (such as Fedora), but on Ubuntu it's necessary to extract the MegaCLI executables using the rpm2cpio utility:

rpm2cpio MegaCli-8.07.14-1.noarch.rpm | cpio -dimv

(The rpm2cpio utility can be installed via the rpm2cpio package using Synaptic or apt-get.)
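On Ubuntu that's simply:

sudo apt-get install rpm2cpio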

This should pull out the MegaCli64 executable. Note that it doesn't need to be installed in any special location: you can run it in place from wherever it was extracted. However, it must be run with superuser privileges, otherwise the output is blank.
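For example, assuming the RPM unpacked its contents into opt/MegaRAID/MegaCli under the extraction directory (check where MegaCli64 actually ended up on your system and adjust the path accordingly):

cd opt/MegaRAID/MegaCli
sudo ./MegaCli64 -AdpAllInfo -aAll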

2. Run MegaCli64 to probe the RAID information

To get information on the adapter and see which RAID levels the system supports, do:

sudo MegaCli64 -AdpAllInfo -aAll

and grep the output for "RAID"; this should give you output of the following form:

RAID Level Supported            : RAID0, RAID1, RAID5, RAID00, RAID10, RAID50, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
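For convenience the grep can be done directly as part of the same pipeline, e.g.:

sudo MegaCli64 -AdpAllInfo -aAll | grep "RAID"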

Then to get information on which RAID level is actually being used:

sudo MegaCli64 -LDInfo -Lall -aAll

and again grep for "RAID":

RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3

This doesn't explicitly name a "standard" RAID level, but I found this ServerFault post which gives details on how to interpret the output: http://serverfault.com/questions/385796/how-to-interpret-this-output-from-megacli

In this case "Primary-5, Secondary-0, RAID Level Qualifier-3" turns out to be equivalent to RAID 5
(https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5), which is suitable for my needs - so I was able to rest a bit easier.

I'd like to acknowledge my colleague Ian Donaldson for tipping me off about MegaCLI.

Monday, August 24, 2015

Enabling correct monitor drivers on Ubuntu 14.04 LTS

Having recently installed Ubuntu 14.04 LTS on a new PC, I had some trouble configuring the display to operate correctly with a Dell 24" monitor: specifically, the monitor's native resolution of 1920x1200 (a 16:10 aspect ratio) didn't seem to be supported by the default install from the Live CD. In fact the setup was more than a little unstable (to say the least: for example, attempting to change the display resolution rendered the display unusable several times, necessitating a complete reinstall to recover).

This appeared to be a missing driver issue, but it was difficult to find out which drivers were needed. Searching the web for the specific problem was rather fruitless, and the many suggestions to look at xrandr (for example https://wiki.ubuntu.com/X/Config/Resolution) didn't work for me either. In the end I stumbled across a post which suggested using the command:

ubuntu-drivers devices

from a terminal, which lists all the devices that need drivers, together with the packages that apply to them. For example, for my system the output looks like:

== /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0 ==
vendor  : NVIDIA Corporation
modalias : pci:v000010DEd000013BAsv000010DEsd00001097bc03sc00i00
driver  : nvidia-346-updates - distro non-free
driver  : nvidia-346 - third-party free
driver  : nvidia-340 - third-party free
driver  : nvidia-352 - third-party free
driver  : xserver-xorg-video-nouveau - distro free builtin
driver  : nvidia-349 - third-party non-free
driver  : nvidia-355 - third-party free recommended
driver  : nvidia-340-updates - distro non-free

When I ran this it marked 'nvidia-349' as the recommended driver (n.b. the output above is from a subsequent run of ubuntu-drivers and doesn't show that mark). This was easily installed using:

sudo apt-get install nvidia-349

and this appeared to solve my issues. Unfortunately I haven't been able to find the original post that suggested this; however, if you are experiencing similar resolution issues then it might be worth a try before grappling with xrandr.

Sunday, August 23, 2015

Experiences installing Ubuntu 14.04 LTS on UEFI system with Boot Repair

I recently set up a new desktop PC (Dell Precision Tower 7910) for my work computer. For this new machine I've decided to move away from Fedora Linux (which I've used for the past four years) and go with the latest Ubuntu LTS release (14.04 "Trusty Tahr"). (While I've enjoyed using Fedora, the relatively high turnover of releases has been a nuisance - so the thought that I'd only need to do one install over the lifespan of this machine was an attractive one.)

Although this was a new machine with Windows pre-installed, I decided to trash this and install Ubuntu over it as a single boot setup (my preference is to run Windows inside a virtual machine on the Linux box, rather than dual booting). The system was also already set up to use UEFI (Unified Extensible Firmware Interface, a replacement for traditional BIOS firmware); however, according to the official Ubuntu documentation for UEFI (https://help.ubuntu.com/community/UEFI), for a single boot system either UEFI or "legacy" boot mode can be used.

So if you don't really care which mode is used, the section on Installing Ubuntu for Single Boot with a Random Boot Mode suggests that the easiest approach is basically to just try installing from the Live media and see what happens. This was the approach that I took, installing from the Live CD and setting up my own partitioning scheme with /home separated from the rest of the system.

Initially, attempting to reboot post-installation failed with "no bootable devices". However it's possible to repair this quite easily and complete the installation using the Boot Repair utility. The basic recipe for obtaining and running it is summarised in the post at http://askubuntu.com/a/604623 - but unfortunately the version from the repo given there (yannubuntu/boot-repair) didn't seem to work for Ubuntu 14.04; instead it's necessary to get it from a different location (kranich/cubuntu; see http://ubuntuforums.org/showthread.php?t=2277484).

Based on the above posts, I restarted from the Live CD and selected "Try Ubuntu" to boot from the CD. Once the system was ready I opened a terminal window and did:

sudo add-apt-repository ppa:kranich/cubuntu
sudo apt-get update
sudo apt-get install -y boot-repair
sudo boot-repair

which brings up the Boot Repair GUI. I selected the "Recommended Repair" option, and once this had completed I restarted the system again without the Live CD. This time Ubuntu booted okay directly from the hard drive, and the installation was complete.

Addendum: it looks like the system ended up being installed in UEFI mode, which according to the Ubuntu docs is indicated by the existence of the /sys/firmware/efi/ directory on the installed system.
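A quick way to make this check from a terminal is to test for that directory directly:

[ -d /sys/firmware/efi ] && echo "UEFI mode" || echo "legacy BIOS mode"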

Saturday, August 15, 2015

Keeping a GitHub fork up-to-date with the original repo

Forking is a standard way to make changes to a third party repository on GitHub. Typically a developer makes a fork (essentially a clone on the GitHub server side) of someone else's repo in order to make their own changes to the project. These changes might be for private use, or they could subsequently be submitted back to the original repo for evaluation and possible inclusion via the pull request mechanism.

However: once the fork is created, any further commits or other changes made to the original repo will not automatically be reflected in the fork. This post describes how to keep the fork up-to-date with the original repository (normally referred to as the upstream repo) by pulling in the changes manually via a clone of the fork.

The update procedure is:

0. Make a clone of your fork onto your local machine (or use one you've already made), and move into the cloned directory.
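For example, using YOUR_USERNAME and FORKED_REPO as placeholders for your own GitHub account and fork:

git clone https://github.com/YOUR_USERNAME/FORKED_REPO.git
cd FORKED_REPO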

1. Add a new remote repository to the clone which points to the upstream repo. The general syntax to do this is:

git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git

A remote is simply a version of the project which is hosted on the Internet or elsewhere on the network, and a repo can have multiple remotes (use git remote -v to list all the ones that are defined; you should see that origin is also a remote). Conventionally the remote used for this syncing is given the name upstream; however, it could be called anything you like.

For example:

git remote add upstream https://github.com/fls-bioinformatics-core/genomics.git

This step only needs to be done once for any given clone.
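As an illustration, once the upstream remote has been added, git remote -v should list something like the following (with YOUR_USERNAME standing in for your own GitHub account):

origin    https://github.com/YOUR_USERNAME/genomics.git (fetch)
origin    https://github.com/YOUR_USERNAME/genomics.git (push)
upstream  https://github.com/fls-bioinformatics-core/genomics.git (fetch)
upstream  https://github.com/fls-bioinformatics-core/genomics.git (push)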

2. Update the local clone by fetching and merging in any changes from the upstream repo, using the procedure:

git fetch upstream
git checkout master
git merge upstream/master


If you have made your changes in your own dedicated branches and avoided making commits to the fork's master branch then the upstream changes should merge cleanly without any conflicts. (It's considered best practice that changes made to the fork should always be done in a purpose-made branch; working directly in master makes it harder to make clean pull requests and can cause conflicts when trying to merge the upstream changes into your fork.)

3. Push the updated master branch back to your fork on GitHub using:

git push origin master

Once again, if you have avoided committing local changes to master then the push should be drama-free.

Finally, you will need to repeat steps 2 and 3 in order to stay up to date whenever new changes are made to the upstream repo.

Monday, December 9, 2013

Book Review: "Instant Flask Web Development" by Ron DuPlain

"Instant Flask Web Development" by Ron DuPlain (Packt Publishing http://www.packtpub.com/flask-web-development/book) is intended to be an introduction to Flask, a lightweight web application framework written in Python and based on the Werkzeug WSGI toolkit and Jinja2 template engine.

The book takes a tutorial-style approach, building up an example appointment-management web application using Flask and introducing various features of the framework on the way. As the example application becomes more complicated, additional Python packages are covered which are not part of the Flask framework (for example SQLAlchemy for managing interactions with a database backend, and WTForms for handling form generation and validation), along with various Flask extensions that can be used for more complicated tasks (for example managing user logins and sessions). The final section of the book gives an overview of how to deploy the application in a production environment, using gunicorn (a Python WSGI server) and nginx.

Given its length (just short of 70 pages) the book is quite ambitious in the amount of ground it attempts to cover, and it's impressive how much the author has managed to pack in whilst maintaining a light touch with the material. So while inevitably there is a limit to the level of detail that can be fitted in, there are some excellent and concise overviews of many of the topics, which could act as good starting points for more experienced developers (for me the section on SQLAlchemy is a particular highlight). The pacing of the book is also quite sprightly, conveying a sense of how quickly and easily Flask can be used to build a web application from scratch.

The flipside of the book's brevity is that it cannot possibly contain everything that a developer needs to know (although this is mitigated to some extent by extensive references to online documentation and resources). In this regard it is really more a showcase for Flask, and is best viewed as a good starting point for someone wishing to quickly get up to speed with the framework's potential. I'd also question how suitable it is for newcomers to Python or to web programming in general - I felt that some of the concepts and example code (for example the sudden appearance of a feature implemented using Ajax) might be a bit of a stretch for a novice. There are also occasional frustrating glitches in the text and example code, which meant it took a bit of additional reading and debugging in places to get the example application working in practice.

In summary then: I'd recommend this book as a good starting point for developers who already have some familiarity with web application development, and who are interested in a quick introduction to the key components of Flask and how they're used - with the caveat that you will most likely have to refer to other resources to get the most out of it.

Disclosure: a free e-copy of this book was received from the publisher for review purposes; this review has also been submitted to Amazon. The opinions expressed here are entirely my own.

Friday, April 19, 2013

Software Carpentry Bootcamp Manchester

I've spent the last two days as a helper at a Software Carpentry Bootcamp held at the University of Manchester, and it's been really interesting and fun. Software Carpentry is a volunteer organisation which runs the bootcamps with the aim of helping postgraduate students and scientists become more productive by teaching them basic computing skills like program design, version control, testing, and task automation. Many of the materials are freely available online via the bootcamp's GitHub page: along with transcripts of some of the tutorials there are some excellent supporting materials, including hints and tips on common Bash and editor commands (there's even more on the main Software Carpentry website).

The bootcamp format consisted of short tutorials alternating with hands-on practical exercises, and as a helper the main task was to support the instructors by offering assistance to participants who found themselves stuck in the exercises. I'll admit I felt some trepidation beforehand about being a helper, as being put on the spot to debug something is very different to doing it from the relaxed privacy of my desk. However it turned out to be both a very enjoyable and very educational experience; even though I consider myself to be quite a proficient and experienced shell and Python programmer, I learned some new things from helping the participants both with understanding some of the concepts and with getting their examples to work.

There were certainly lots of fresh insights and I learned some new things from the taught sessions too, including:
  • Bash/shell scripting: using $(...) instead of "backtick" notation to execute a command or pipeline within a shell script;
  • Version control: learning that Bitbucket now offers free private repositories (and a reminder that git push doesn't automatically push tags to the origin, for that you also need to explicitly use git push --tags);
  • Python: a reminder that slice notation [i:j] is inclusive of the first index i but exclusive of the second index j, and independently that string methods often don't play well with Unicode;
  • Testing: a reminder that writing and running tests doesn't have to impose a big overhead - good test functions can be implemented just with assert statements, and by observing a simple naming convention (i.e. put tests in a test_<module>.py file, and name test functions test_<name>), Python nose can run them automatically without any additional infrastructure (see the sketch just after this list).
  • Make: good to finally have an introduction to the basic mechanics of Makefiles (including targets, dependencies, automatic variables, wildcards and macros), after all these years!
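To give a minimal sketch of the testing point above (the area function and file name here are invented for the example): put the following in a file called test_area.py, and running nosetests in the same directory will discover and run the test functions automatically, with no extra infrastructure:

# test_area.py: nose picks this file up via its test_ prefix,
# and runs every function whose name starts with test_

def area(width, height):
    # the (trivial) function under test
    return width * height

def test_area_unit_square():
    assert area(1, 1) == 1

def test_area_rectangle():
    assert area(2, 3) == 6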
As a helper I really enjoyed the bootcamp, and from the very positive comments made by the participants both during and at the end it sounded like everyone got something valuable from the two days - largely due to the efforts of instructors Mike Jackson, David Jones and Aleksandra Pawlik, who worked extremely hard to deliver excellent tutorials and thoroughly deserved the applause they received at the end. (Kudos should also go to Casey Bergman and Carole Goble for acting as local organisers and bringing the bootcamp to the university in the first place.) Ultimately the workshop isn't about turning researchers into software engineers but rather getting them started with practices and tools that will support their research efforts, in the same way that good laboratory practices support experimental research. (This isn't an abstract issue: there can be very real consequences, as demonstrated by the cases of Geoffrey Chang, McKitrick and Michaels, and the Ariane 5 rocket failure - the latter resulting in a very real "crash".)

If any of this sounds interesting to you then the Software Carpentry bootcamp calendar shows future events planned in both Europe and the US, so it's worth a look to see if there's one coming up near your location. Otherwise you could consider hosting or running your own bootcamp. Either way I'd very much recommend taking part to any researchers who want to make a positive impact on their work with software.

Sunday, August 19, 2012

Inline images in HTML tags with Python

I recently discovered a neat trick for embedding images within HTML documents, which is really useful if you've got an application where you would like the HTML files to be portable (in the sense of being moved from one location to another) without also having to move a bunch of related image files.

Essentially the inlining is achieved by base64 encoding the data from the image file into an ASCII string, which can then be copied into the src attribute of an <img> tag with the following general syntax:

<img src="data:image/image_type;base64,base64_encoded_string" />

For example to embed a PNG image:

<img src="data:image/png;base64,iVBORw...." />

i.e. image_type is png and iVBORw... is the base64 encoded string (truncated here for readability).

If you're familiar with Python then it's straightforward to encode any file using the base64 module, e.g. (for a PNG image):

>>> import base64
>>> pngdata = base64.b64encode(open("cog.png",'rb').read())
>>> print "<img src='data:image/png;base64,%s' />" % pngdata

And here's an example inlined PNG generated using this method:

[inlined example PNG image]

In fact this inlining is an example of the general data URI scheme for embedding data directly in HTML documents, and can also be used for example in CSS to set an inline background image - the Wikipedia entry on the "Data URI scheme" is a good place to start for more detailed information.
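For instance, a CSS rule using a data URI to set a background image might look like the following sketch (the selector is invented for illustration, and the encoded string is truncated as before):

.logo {
    background-image: url("data:image/png;base64,iVBORw....");
}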

There are some caveats, particularly if you're interested in cross-browser compatibility: Internet Explorer 7 and earlier don't support data URIs at all, while version 8 only supports encoded strings up to 32KB. More generally, the encoded strings can be around 1/3 larger than the original images and are implicitly downloaded again each time the document is refreshed; these are definite considerations if you're concerned about bandwidth. However, if these aren't issues for your application then this can be a handy trick to have in your toolbox.

Update 24/10/2012: another caveat I've discovered since is that at least some command line HTML-to-PDF converters (for example wkhtmltopdf) aren't able to convert the encoded images, so it's worth bearing this in mind if you plan to use them. (On the other hand the PDF conversion in Firefox - via "Print to file" - works fine but can't be run from the command line AFAIK.)