Monday, December 9, 2013

Book Review: "Instant Flask Web Development" by Ron DuPlain

"Instant Flask Web Development" by Ron DuPlain (Packt Publishing http://www.packtpub.com/flask-web-development/book) is intended to be an introduction to Flask, a lightweight web application framework written in Python and based on the Werkzeug WSGI toolkit and Jinja2 template engine.

The book takes a tutorial-style approach, building up an example appointment-management web application using Flask and introducing various features of the framework along the way. As the example application becomes more complicated, additional Python packages which are not part of the Flask framework are covered (for example SQLAlchemy for managing interactions with a database backend, and WTForms for handling form generation and validation), along with various Flask extensions that can be used for more complicated tasks (for example managing user logins and sessions). The final section of the book gives an overview of how to deploy the application in a production environment, using gunicorn (a Python WSGI server) and nginx.

Given its length (just short of 70 pages) the book is quite ambitious in the amount of ground that it attempts to cover, and it's quite impressive how much the author has managed to pack in whilst maintaining a light touch with the material. So while inevitably there is a limit to the level of detail that can be fitted in, there are some excellent and concise overviews of many of the topics, which could act as useful starting points for more experienced developers (for me the section on SQLAlchemy is a particular highlight). The pacing of the book is also quite sprightly, and conveys a sense of how quickly and easily Flask could be used to build a web application from scratch.

The flipside of the book's brevity is that it cannot possibly contain everything that a developer needs to know (although this is mitigated to some extent by extensive references to online documentation and resources). In this regard it is really more a showcase for Flask, and is best viewed as a good starting point for someone wishing to quickly get up to speed with the framework's potential. I'd also question how suitable it is for newcomers to either Python or web programming in general - I felt that some of the concepts and example code (for example the sudden appearance of a feature implemented using Ajax) might be a bit of a stretch for a novice. There are also occasional frustrating glitches in the text and example code, which meant that a bit of additional reading and debugging was needed in places to get the example application working in practice.

In summary then: I'd recommend this book as a good starting point for developers who already have some familiarity with web application development, and who are interested in a quick introduction to the key components of Flask and how they're used - with the caveat that you will most likely have to refer to other resources to get the most out of it.

Disclosure: a free e-copy of this book was received from the publisher for review purposes; this review has also been submitted to Amazon. The opinions expressed here are entirely my own.

Friday, April 19, 2013

Software Carpentry Bootcamp Manchester

I've spent the last two days as a helper at a Software Carpentry Bootcamp held at the University of Manchester, and it's been really interesting and fun. Software Carpentry is a volunteer organisation which runs the bootcamps with the aim of helping postgraduate students and scientists become more productive, by teaching them basic computing skills like program design, version control, testing, and task automation. Many of the materials are freely available online via the bootcamp's Github page: along with transcripts of some of the tutorials there are some excellent supporting materials, including hints and tips on common Bash and editor commands (there's even more on the main Software Carpentry website).

The bootcamp format consisted of short tutorials alternating with hands-on practical exercises, and as a helper my main task was to support the instructors by offering assistance to participants who found themselves stuck at some point in the exercises. I'll admit I felt some trepidation beforehand about being a helper, as being put on the spot to debug something is very different to doing it from the relaxed privacy of my desk. However it turned out to be both a very enjoyable and very educational experience; even though I consider myself to be quite a proficient and experienced shell and Python programmer, I learned some new things from helping the participants, both with understanding some of the concepts and with getting their examples to work.

There were certainly lots of fresh insights and I learned some new things from the taught sessions too, including:
  • Bash/shell scripting: using $(...) instead of "backtick" notation to execute a command or pipeline within a shell script;
  • Version control: learning that Bitbucket now offers free private repositories (and a reminder that git push doesn't automatically push tags to the origin; for that you also need to explicitly use git push --tags);
  • Python: a reminder that slice notation [i:j] is inclusive of the first index i but exclusive of the second index j, and independently that string methods often don't play well with Unicode;
  • Testing: a reminder that writing and running tests doesn't have to impose a big overhead - good test functions can be implemented just with assert statements, and by observing a simple naming convention (i.e. put tests in a test_<module>.py file, and name test functions test_<name>), Python nose can find and run them automatically without any additional infrastructure (there's a small sketch illustrating this, and the slice behaviour, after this list).
  • Make: good to finally have an introduction to the basic mechanics of Makefiles (including targets, dependencies, automatic variables, wildcards and macros), after all these years!
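
To illustrate the slicing and testing points above, here's a minimal sketch (the middle function and the file name are my own inventions for the example):

# test_middle.py: run "nosetests" in this directory and nose will find
# and execute the test_* functions below - no extra infrastructure needed.

def middle(values):
    """Return a copy of values with the first and last items removed."""
    # The slice [1:-1] includes the start index (1) but excludes the
    # stop index (-1), so it drops exactly the first and last elements.
    return values[1:-1]

def test_middle():
    assert middle([0, 1, 2, 3]) == [1, 2]

def test_middle_short_lists():
    # Slicing never raises IndexError: lists that are too short give []
    assert middle([42]) == []
    assert middle([]) == []
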
As a helper I really enjoyed the bootcamp, and from the very positive comments made by the participants both during and at the end it sounded like everyone got something valuable from the two days - largely due to the efforts of instructors Mike Jackson, David Jones and Aleksandra Pawlik, who worked extremely hard to deliver excellent tutorials and thoroughly deserved the applause they received at the end. (Kudos should also go to Casey Bergman and Carole Goble for acting as local organisers and bringing the bootcamp to the university in the first place.) Ultimately the workshop isn't about turning researchers into software engineers, but rather about getting them started with practices and tools that will support their research efforts, in the same way that good laboratory practices support experimental research. (This isn't an abstract issue: there can be very real consequences, as demonstrated by the cases of Geoffrey Chang, McKitrick and Michaels, and the Ariane 5 rocket failure - the latter resulting in a very real "crash".)

If any of this sounds interesting to you then the Software Carpentry bootcamp calendar shows future events planned in both Europe and the US, so it's worth a look to see if there's one coming up near your location. Otherwise you could consider hosting or running your own bootcamp. Either way I'd very much recommend taking part to any researchers who want to make a positive impact on their work with software.

Sunday, August 19, 2012

Inline images in HTML tags with Python

I recently discovered a neat trick for embedding images within HTML documents, which is really useful if you've got an application where you would like the HTML files to be portable (in the sense of being moved from one location to another) without having to also move a bunch of related image files.

Essentially the inlining is achieved by base64 encoding the data from the image file into an ASCII string, which can then be copied into the src attribute of an <img> tag with the following general syntax:

<img src="data:image/image_type;base64,base64_encoded_string" />

For example to embed a PNG image:

<img src="data:image/png;base64,iVBORw...." />

i.e. image_type is png and iVBORw... is the base64 encoded string (truncated here for readability).

If you're familiar with Python then it's straightforward to encode any file using the base64 module, e.g. (for a PNG image):

>>> import base64
>>> pngdata = base64.b64encode(open("cog.png",'rb').read())
>>> print "<img src='data:image/png;base64,%s' />" % pngdata
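
(A note if you're using Python 3: there base64.b64encode returns bytes rather than a string, so the result needs an explicit decode before it can be embedded in HTML. As a sketch, here's how the above might be wrapped up into a small reusable function - the inline_img_tag name is my own invention for the example:)

import base64
import mimetypes

def inline_img_tag(path):
    # Guess the MIME type from the file extension, e.g. 'image/png'
    mime_type, _ = mimetypes.guess_type(path)
    with open(path, 'rb') as f:
        # b64encode returns bytes on Python 3; decode back to a str
        encoded = base64.b64encode(f.read()).decode('ascii')
    return "<img src='data:%s;base64,%s' />" % (mime_type, encoded)

print(inline_img_tag("cog.png"))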

And here's an example inlined PNG generated using this method:

[inline PNG image]

In fact this inlining is an example of the general data URI scheme for embedding data directly in HTML documents, and can also be used for example in CSS to set an inline background image - the Wikipedia entry on the "Data URI scheme" is a good place to start for more detailed information.

There are some caveats, particularly if you're interested in cross-browser compatibility: older versions of Internet Explorer (version 7 and older) don't support data URIs at all, while version 8 only supports encoded strings up to 32KB. More generally the encoded strings are around a third larger than the original images (base64 represents each 3 bytes of binary data as 4 ASCII characters), and they are implicitly downloaded again each time the document is refreshed; both are definite considerations if you're concerned about bandwidth. However if these aren't issues for your application then this can be a handy trick to have in your toolbox.

Update 24/10/2012: another caveat I've discovered since is that at least some command line HTML-to-PDF converters (for example wkhtmltopdf) aren't able to convert the encoded images, so it's worth bearing this in mind if you plan to use them. (On the other hand the PDF conversion in Firefox - via "Print to file" - works fine but can't be run from the command line AFAIK.)

Sunday, April 22, 2012

Installing Fedora 16 (Verne) dual-boot with Windows 7

Back in February I finally got around to installing Fedora 16 (Verne) as a dual-boot option on my Windows 7 home machine. I was helped considerably by an impressively detailed tutorial on How to dual-boot Fedora 15 and Windows 7 posted on the LinuxBSDos.com blog, which was invaluable for the tricky steps of partitioning and boot loader set up.

In this post I give an overview of what I did for my own dual-boot installation, with some hints and tips that I picked up in the process.

Preparation: back up the existing system

My first step was to make an image of the existing system on an external hard drive so that I could recover everything if the installation should fail for any reason. To do this I used Clonezilla Live, which can be burned onto a CD or other bootable media (I used the 1.2.12-10 AMD64 iso). I found the Clonezilla website quite difficult to navigate but there are very detailed usage instructions at http://www.clonezilla.org/clonezilla-live-doc.php, and while Clonezilla Live's menu-driven structure might feel a bit retro to those used to a slicker UI, the program itself turns out to be very easy to use.

Note that the size of the resulting image file (and hence the amount of space required on the external drive) depends on how much of the disk actually contains data, rather than the size of the disk or partition being imaged. So although the Windows partitions had been allocated the whole of my 1 TB hard drive, in this case only about 70 GB was in use and the corresponding image file was even smaller at around 45 GB.

Repartitioning: create space for the Fedora installation

Having created the image the next step was to free up some space by shrinking the size allocated to the Windows partitions (the LinuxBSDos dual-booting tutorial assumes there's already some free space available for the Fedora install). To do this I used GParted Live (version 0.11.0-10), a free partition editor that runs from bootable media and works with 64-bit systems. Launching the GParted application from the bootable CD is covered at http://gparted.sourceforge.net/display-doc.php?name=gparted-live-manual and there is comprehensive documentation on how to use it (with screenshots) at http://gparted.sourceforge.net/display-doc.php?name=help-manual. However I think the user interface is pretty intuitive on its own. For my installation I reduced the Windows allocation down to 50% of the disk space, leaving around 500 GB of unallocated space.

Installing Fedora 16

I was now in a position to install Fedora into the free space. I downloaded and burned the Fedora 16 installable live media (aka Live Desktop CD) from http://fedoraproject.org/en/get-fedora (make sure you get the appropriate 32- or 64-bit version for your system), which also allows you to try out Fedora without installing (see my previous post on Fedora 15 and Gnome 3 user basics if you're unfamiliar with the Gnome 3 desktop).

Detailed information about the installation procedure can be found in the extensive Fedora 16 Installation Guide but in summary this is what I did:
  • Booted the system from the Fedora Live Desktop CD and started the installer via Activities/Applications/Install to Hard Drive.
  • Clicked through the installer screens to set up the keyboard layout, computer name, network and timezone etc (generally if you're installing a vanilla home desktop system then the default options should be okay) until I reached the Type of installation screen. Here I needed to tell the installer where to put Fedora:
    • Choose the Use Free Space option to use the unallocated disk space, and
    • Make sure the Review and modify partitioning layout option is checked.
    before clicking through to the next screen.
  • The next step was to modify the assigned logical volumes to my preferred layout (here the detailed instructions in the LinuxBSDos blog post were invaluable; I strongly recommend consulting them if you're unfamiliar with logical volumes and partitions). For my installation I changed the Fedora defaults to set up the following new partitions:
    • lv_root (/): 10 GB
    • lv_home (/home): 75 GB
    • lv_swap (swap): 16 GB
    • /boot: 500 MB
    Generally it's recommended to only use the space you need for the new installation and leave unused capacity unallocated; keeping /home separate from the root partition means that I can share user directories between multiple Linux installations in future; and the recommendation is that swap should be allocated around twice the size of the system RAM. However more elaborate partitioning schemes can be created, with various pros and cons - there's useful information on this in the Recommended Partitioning Scheme section of the Fedora documentation.
  • Having set up the partition layout, I then needed to change the location of the Fedora boot loader (GRUB) from the default of Master Boot Record to First sector of boot partition. The reason for this is that any changes that are made to the Master Boot Record (MBR) (such as installing GRUB) are liable to be overwritten by future Windows updates, so it's better to manage the boot options from within Windows after Fedora has been installed:
    • Click Change Device and select First sector of boot partition (I'd recommend verifying that the correct location is shown before clicking Next to complete the installation, and also making a note of the boot loader location for reference later).
At this point I was finished and the installation could complete, which was a relatively quick process (in my case it took around 5 minutes).

Configuring the Windows boot manager

Once the installation process had completed the computer rebooted directly into Windows, so to be able to access Fedora I needed to update the Windows boot manager: for this I used EasyBCD 2.1.2 (a Windows bootloader editor freely available from NeoSmart Technologies) as recommended in the LinuxBSDos.com tutorial. Once EasyBCD was installed in Windows I was able to add a new entry for Fedora 16:
  • Click on the Add New Entry tab
  • Select the Linux/BSD tab and select the partition that the GRUB boot loader was installed to (which I made a note of earlier)
  • Select Edit Boot Menu to view the change
Restarting the system then gave me the option of booting either Windows or Fedora 16.

(Note that using this method, when Fedora 16 is selected you get the Linux GRUB boot loader rather than just booting directly into Linux; however it should be possible to configure GRUB to boot Fedora immediately.)

Post-installation Fedora set-up

Once Fedora 16 was booted for the first time, there were a few standard post-installation steps to complete: agreeing to the user license, setting up the date and time - usually I select the option to synchronise over the network - and creating a non-root user account. I also ran the Software Update application to pull in any updates (this also fixed the settings for my monitor, which initially weren't properly configured). It was then possible to start building up the software inventory and configuring the system to my personal preferences.

The final thing I did was to make donations to the Clonezilla, GParted and EasyBCD projects, as I would have been unable to set up my dual-boot system without them. It's also worth taking a moment to acknowledge the efforts of the Fedora community and the wider Linux community for making a complete operating system available and easy to install at no cost to the end user.

For my part I hope that this overview will be useful to someone else - and good luck with your own dual-booting installations.

Wednesday, November 30, 2011

Firefox add-ons for the occasional web developer

I'm not a hardcore web developer but I do some occasional web-based work, and one of the issues I have is that - because web applications exist in an environment which spans both browser and server (and which often seems to hide the workings of its components) - it can be quite difficult to see under the hood when there are problems.

Fortunately there are a number of add-ons for Firefox (my browser of choice) that can help. These are the ones that I like to use:
  • Firebug http://getfirebug.com/ is possibly one of the most essential add-ons for web development. I first came across it as a Javascript logger and debugger, but it's far more than that: describing itself as a complete web development tool, its functionality extends to HTML and CSS, in addition to script profiling and network analysis capabilities. As an occasional user I've found the Javascript debugging functions invaluable, and the ability to edit CSS in-place and see the results immediately has also been really helpful in debugging style sheets - in fact its biggest downside from my perspective is that it's not immediately obvious how to use many of its functions.
  • Live HTTP Headers http://livehttpheaders.mozdev.org/ is a great tool for exposing the interactions between your browser and a server. I found this invaluable when I was debugging some website functionality that I was developing earlier this year, as it enabled me to follow a redirect that I'm sure I couldn't have seen otherwise.
  • QuickJava http://quickjavaplugin.blogspot.com/ is a utility that allows support for Java, Javascript, Flash, Silverlight, images and others to be toggled on and off within your browser, enabling you to check how a page behaves when viewed by someone who doesn't have these enabled.
  • I really like the HTML Validator http://users.skynet.be/mgueury/mozilla/ for ensuring that my HTML markup is actually W3C compliant; the main issue with this is that it's only available for Windows platforms. Provided you have the "Add-on Bar" visible in Firefox (toggle via "Options" in the Firefox main menu, or do ctrl+/), this displays a little icon at the bottom of the screen indicating the goodness or otherwise of your markup.

There are a few other useful add-ons for working with design elements like colours and images:

  • Colorzilla http://www.colorzilla.com/ is a tool that allows you (among other things) to pick colours from the current webpage and get the corresponding hex codes or RGB values.
  • Measureit http://frayd.us/ creates a ruler that lets you measure the size of page elements in pixels - particularly helpful when sizing images for web display.
  • In the past I've found the in-browser screen capture utility Fireshot http://screenshot-program.com/fireshot/ quite handy for taking screenshots of an entire webpage including the off-screen portions. I have to admit I haven't used it for a while though. There's a paid "pro" version which offers a lot of additional functionality.
Although I've given URLs, the easiest way to install any of these is via the "Get Add-ons" tab accessed via the "Add-ons" option in Firefox's main menu (I'm using Firefox 8.0 at the time of writing). Once installed the individual add-ons appear in various places, for example by default Firebug's icon can be found at the top-right hand corner. If an add-on's icon doesn't appear automatically (as seems to happen for Measureit) then you might have to add it manually: go to "Options"/"Toolbar layout", locate the item and drag it to the toolbar.

I wouldn't try to argue that this is a definitive list, but for an occasional user like myself these tools work well and (with the exception of Firebug) are easy to remember how to use even after several months away from them. However if these don't meet your needs then I'd recommend checking out the "Web Development" category of Mozilla's add-ons site for many more options.

Sunday, November 13, 2011

Creative Commons overview

A while ago I came across an interesting overview of the Creative Commons licence for digital content by Jude Umeh in the BCS "IT Now" newsletter ("Flexible Copyright", also available via the BCS website at http://www.bcs.org/content/conBlogPost/1828 as "Creative Commons: Addressing the perils of re-using digital content"), which I felt gave a very clear and concise introduction to the problem that Creative Commons (CC) is trying to solve, how it works in practice, and some of the limitations.

Essentially, anyone who creates online content - whether a piece of writing (such as this blog), an image (such as a photo in my Flickr stream), or any other kind of media - automatically has "all rights reserved" copyright on that content. This default position means that the only way someone else can (legally) re-use that content is by explicitly seeking and obtaining the copyright owner's permission (i.e. a licence) to do so. As you might imagine this can present a significant barrier to re-using online content.

The aim of the Creative Commons is to enable content creators to easily pre-emptively grant permissions for others to re-use their work, by providing a set of free licences which bridge the gap between the "all rights reserved" position (where the copyright owner retains all rights on their work) and "public domain" (where the copyright owner gives up those rights, and allows anyone to re-use their work in any way and for any purpose).

These licences are intended to be easily understood and provide a graduated scale of permissiveness. According to the article the six most common are:

  • BY ("By Attribution"): this is the most permissive, as it grants permission to reuse the original work for any purpose - including making "derived works" - with no restrictions other than that it must be attributed to the original author.
  • BY-SA ("By Attribution-Share Alike"): the same as BY, with the additional restriction that any derived work must also be licensed as BY-SA.
  • BY-ND ("By Attribution-No Derivatives"): the original work can be freely used and shared with attribution, but derivative works are not allowed.
  • BY-NC ("By Attribution-Non-Commercial"): as with BY, the original work can be used, shared and incorporated into derived works, provided attribution is made to the original author; however the original work cannot be used for commercial purposes.
  • BY-NC-SA ("By Attribution-Non-Commercial-Share Alike"): similar to BY-SA, in that any derived work must use the same BY-NC-SA licence, and like BY-NC, in that commercial use of the original work is not permitted.
  • BY-NC-ND ("By Attribution-Non-Commercial-No Derivatives"): the most restrictive licence (short of "all rights reserved"), as this only allows re-use of the original work for non-commercial purposes, and doesn't permit derivative works to be made. Umeh states that BY-NC-ND is "often regarded as a 'free advertising' licence".

As Umeh points out, "CC is not a silver bullet", and his article cites examples of some of its limitations and potential pitfalls. Elsewhere I've also come across some criticisms of using the non-commercial CC licences in certain contexts: for example, the scientist Peter Murray-Rust has blogged about what he sees as the negative impact of CC-NC licensing in science and teaching (see "Suboptimal/missing Open Licences by Wiley and Royal Society" http://blogs.ch.cam.ac.uk/pmr/2011/10/27/suboptimalmissing-open-licences-by-wiley-and-royal-society/ and "Why you and I should avoid NC licences" http://blogs.ch.cam.ac.uk/pmr/2010/12/17/why-i-and-you-should-avoid-nc-licences/).

However it's arguable that these are special cases, and that more generally CC-based licensing has a significant and positive impact on enabling the legal re-use of online material that would otherwise not be possible: indeed, even the posts cited above only criticise its NC aspects, and otherwise see the CC as greatly beneficial. Certainly it's worth investigating if you're interested in allowing others to reuse digital content that you've produced (there's even a page on the CC website to help choose the appropriate CC licence based on answers to plain English questions: http://creativecommons.org/choose/).

As I'm not an expert on CC (or indeed on copyright law or content licensing), I'd recommend Umeh's article as the next step for a more comprehensive and expert overview; and beyond that of course more information can be found at the Creative Commons website http://www.creativecommons.org/ (with the UK-specific version due to become available at http://www.creativecommons.org.uk later this month).

Monday, October 24, 2011

Day Camp 4 Developers: Project Management

Just over three weeks ago I attended the third online Day Camp 4 Developers event, which this time focused on the subject of project management. The DC4D events are aimed at filling the "soft skills" gap that software developers can suffer from, and rather than being a "how-to" on project management (arguably there are already plenty of other places you can learn the basics) the six speakers covered a range of topics around the subject - some of which I wouldn't initially have thought of in this context (for example, dealing with difficult people). However as one of the speakers noted, fundamentally project management is as much about people as it is about process, and all of them delivered some interesting insights.

The first talk (which unfortunately I missed the start of through my own disorganisation) by Brian Prince about "Hands-on Agile Practices" covered the practical implementation of the Agile process in a lot of detail. I've never worked with Agile myself, but I have read a bit about it in the past and Brian's presentation reminded me of a few Agile concepts that sound like they could be usefully adopted elsewhere. For example: using "yesterday's weather" (i.e. statistically the weather tomorrow is likely to be the same as today's) as a way to plan ahead by considering recent performance; and the guidelines for keeping stand-up meetings concise, which could also be applied to any sort of status meeting (each person covers the three points "what I did yesterday", "what I'm doing today and when it will be done", and "what issues I have"). The idea of focusing on "not enough time" rather than "too much to do" also appealed to me.

The next presentation, "Dealing with Difficult People" by Elizabeth Naramore, turned out to be an unexpected delight. Starting by asking what makes people you know "difficult" to interact with, she identified four broad types of behaviour:
  • "Get it done"-types are focused on getting information and acting on it quickly, so their style is terse and to-the-point,
  • "Get it right"-types are focused on detail, so their style is precise, slow and deliberate,
  • "Get along"-types are focused on making sure others are happy, so their style is touchy-feely and often sugar-coated, and
  • "Get a pat on the back"-types are focused on getting their efforts recognised by others, so their style is more "person-oriented".
The labels are instantly memorable and straight away I'm sure we can all think of people that we know who fit these categories (as well as ourselves of course). Elizabeth was at pains to point out that most people are a mixture of two or more, and that none of them are bad (except when someone is operating at an extreme all of the time). The important point is that they affect how people communicate, so if you can recognise and adapt to other people's styles, and learn to listen to them, then you'll stand a better chance of reducing your difficult interactions.

Rob Allen was next up with "Getting A Website Out Of The Door" (subtitled "Managing a website project"), and covered the process used at Big Room Internet for spec'ing, developing and delivering website projects for external clients. Rob included a lot of detail on each part of the process, what can go wrong, and how they aim to manage and reduce the risks of that happening. One specific aspect that I found interesting was the change control procedure, which is used for all change requests from their clients regardless of the size of the change, essentially:
  • Write down the request
  • Understand the impact
  • Decide whether to do it
  • Do the work!
I think that the second point here is key: you need to understand what the impact will be, and how much work it's really going to be (I'm sure we've all agreed at one time or another to make "trivial" changes to code which have turned out in practice to be far more work than anyone first imagined). A more general point that Rob made was the importance of clear communication, particularly in emails (which should have a subject line, summary, action and deadline).

Rob was followed by Keith Casey talking about how "Project Management is More Than Todo Lists". One of the interesting aspects of Keith's talk was that he brought an open source perspective to the subject. In open source projects the contributors are unpaid, so understanding how their motivations differ from those of people doing paid work is important for the projects to be successful: as Keith said early on in his talk, in this case "it's about people".

He argued that people managing open source projects should pay attention to the uppermost levels of Maslow's Hierarchy of Needs (where the individual focuses on "esteem" and "self-actualisation"), but there was also a lot of practical advice: for example, having regular and predictable releases; ensuring that bugs and feature requests are prioritised regularly; and driving development through input and involvement from the community. I particularly liked the practical suggestion that frequently asked questions can be used to identify areas of friction that need to be simplified or clarified. He also recommended Karl Fogel's book "Producing Open Source Software", which looks like it would be a good read.

Thursday Bram's presentation "Project Management for Freelancers" was another change of direction (and certainly the subtitle "How Freelancers Can Use Project Management to Make Clients Happier than They've Ever Been Before" didn't lack ambition). She suggested that for freelancers, project management is at least in part about helping clients to recognise quality work - after all they're not experts in coding (that's why they hired you), so inevitably they have an issue with knowing "what does quality look like?". (If you've ever paid for a service such as car servicing or plumbing then I'm sure you can relate to this.) So arguably one function of project management is to provide a way to communicate the quality of your work. The key message that I took away from Thursday's talk was that "what makes people happy is 'seeing progress' on their projects". Again I felt this was an idea that I could use in my (non-freelancer) work environment.

The last session of the day was Paul M. Jones talking about "Estimating and Expectations". Essentially we (i.e. everyone) are terrible at making estimates, as illustrated by his "laws of scheduling and estimates":
  • Jones' Law: "if you plan for the worst, then all surprises are good surprises"
  • Hofstadter's Law: "it always takes longer than you expect - even when you take Hofstadter's Law into account"
  • Brooks' Law: "adding more people to a late software project will make it even later"
However there are various strategies and methods we can use to try and make our estimates better: for example, using historical data and doing some design work up front can both provide valuable knowledge for improved estimates. In this context Paul also had my favourite quote of the day: "It's not enough to be smart; you actually have to know things" (something that I think a lot of genuinely clever people can often forget, especially when they move into a domain that's new to them).

It felt like Paul packed an immense amount of material into this talk, covering a wide range of different areas and offering a lot of practical advice drawn from various sources (Steve McConnell's Code Complete, Fred Brooks' The Mythical Man Month and Tom DeMarco and Timothy Lister's Peopleware: Productive Projects & Teams were all mentioned) both for estimation techniques and for expectation management - where ultimately communication and trust are key (a message that seemed to be repeated throughout the day).

In spite of a few minor technical issues (the organisers had opted to use a new service called Fuzemeeting, which I guess was still ironing out some wrinkles), overall everything ran smoothly, and at the end I'd got some useful ideas that I feel I can apply in my own working life - which in the end is surely the whole idea. It was definitely worth a few hours of my weekend, and I'm looking forward to being able to see some of the talks again when the videocasts become available. In the meantime if any of this sounds interesting to you then I'd recommend checking out the DC4D website and watching out for the next event!