w: www.meantime.co.uk
t: 01539 737 766

Tuesday 15 September 2009

What successful testing looks like.

If you were feeling reckless (or just had a lot of time to fill) you might ask me where I.T. goes wrong. Amongst the many theories and anecdotes that would ensue, I believe one theme would crop up so repeatedly that if someone were, in turn, to ask you for your thoughts on the source of I.T.'s failings, you would say "testing". (Or, perhaps, "would you mind if we talked about something else?")

On Saturday I met up with a man for whom I used to work at the Royal Bank of Scotland Group. We had both been successful test managers - indeed, Cameron still is - but we agreed that ultimately you only need three things to achieve that success: common sense; tenacity; and no desire to be popular.

Testing is a simple science. In my next blog I will talk about the different types of testing but, at a high level, testing is just about making sure that the software works the way it is supposed to. "What could be simpler?" you might reasonably ask, and the answer is: not much. Assuming you have good communication with your client or, better yet, a decent system specification document, you will know what you are building and what the system is supposed to do and, therefore, what you need to test.

For a simple website, that will mean making sure that there are no mistakes in the text, that the images all load and that the links work, that your SEO is in place and that your site conforms to accessibility and W3C standards. For an e-commerce site you will, amongst other things, check the product search, that items can be added to the basket, that VAT is calculated correctly, that your secure certificate is in place and that a customer can actually complete a transaction. And so on.
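To make a couple of those e-commerce checks concrete, here is a sketch of how a tester might script them as plain assertions. The `calculate_vat` function and `Basket` class are hypothetical stand-ins for whatever the site actually uses, and the 15% rate reflects the temporarily reduced UK standard VAT rate in force in 2009:

```python
# A minimal sketch of automated checks for a hypothetical e-commerce site.
# calculate_vat and Basket are illustrative stand-ins, not any real site's code.

def calculate_vat(net_price, rate=0.15):
    """Return the VAT due on a net price (UK standard rate was 15% in 2009)."""
    return round(net_price * rate, 2)

class Basket:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        """Gross total: net price of all items plus VAT."""
        net = sum(price for _, price in self.items)
        return round(net + calculate_vat(net), 2)

# The kind of checks a tester would script:
assert calculate_vat(100.00) == 15.00   # VAT is calculated correctly
basket = Basket()
basket.add("widget", 10.00)
basket.add("gadget", 20.00)
assert len(basket.items) == 2           # items can be added to the basket
assert basket.total() == 34.50          # 30.00 net + 4.50 VAT
```

The point is not the code itself but that each item on the checklist becomes an explicit, repeatable check with an expected result.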

So if testing is that simple, and it is, how on earth has it ended up being so prominent and consistent a factor in I.T. failures?

Firstly, I think, testing has never been a strong feature in I.T. I was on my fourth I.T. role when I encountered my first colleague whose job was, specifically, testing. He introduced himself as a system tester to universal bemusement. A few years later, in 1995, in fact, I worked with a chap called Graham Bradford who speculated that testing could become big business. I don't think Graham had anticipated the Millennium Bug but he was absolutely right.

By coincidence, I made an uncertain move from systems analysis into testing the following year when IBM interviewed me for the wrong job but gave it to me anyway. In the early days I was delighted to find that I was apparently being paid for old rope - for my common sense, in fact - but I quickly learnt where a test manager proves his worth. Time and again I have seen the time set aside for testing on a project plan effectively viewed as contingency. As development delivery dates slip, the go live date does not and the element that suffers is testing.

And this is where the tenacity comes in: if, as a tester, you are told that you are going to receive the new system for testing two weeks later than planned, then you need to make sure that the live date is pushed back by two weeks, so that you don't lose the time you need. Project managers do not like this. And that's why you need to be prepared to be unpopular.

Furthermore, you need to insist that any change that is applied to a system needs to be tested. A few years ago I was working for a company that wanted to make some "simple changes" to the website. We had a lot of clients using the site for financial transactions and I insisted that any changes needed to be tested. I was told that the changes were not functional and that there was no need for testing. I dug my heels in and told the project manager that whilst I couldn't stop the changes going live, I certainly wouldn't sign them off. Eventually, with an attendant dip in my popularity ratings, the testing was authorised.

Lo and behold, we found some issues. There was no ticker tape parade and no bounce in my popularity, just a few gripes about how I couldn't have known there would be any bugs. Well, of course I couldn't. But at least the next time a "simple change" came along, I had a good argument to fall back on.

So, what does successful testing look like? Well, in detail, it looks like a test strategy and test plans and test scripts and good test data and a strong test team. But most importantly it looks like a decent amount of time allocated to testing and for that time to be guarded jealously and not squandered to compensate for other problems on a project.
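To put some flesh on "test scripts": a script is essentially a numbered sequence of steps, each paired with an expected result, which a tester executes and marks pass or fail. Purely as a sketch (the steps and helper below are invented for illustration, not taken from any particular tool):

```python
# A sketch of a manual test script as data: each step pairs an action
# with its expected result; a run records pass/fail per step.

test_script = [
    ("Search for 'blue widget'", "Matching products are listed"),
    ("Add the first result to the basket", "Basket count increases by one"),
    ("Proceed to checkout", "Secure (https) payment page loads"),
    ("Complete payment with a test card", "Order confirmation is displayed"),
]

def run_script(script, results):
    """Pair each step with its recorded outcome and return the failures."""
    outcomes = zip(script, results)
    return [(step, expected) for (step, expected), ok in outcomes if not ok]

# Suppose step 3 failed on this run:
failures = run_script(test_script, [True, True, False, True])
assert failures == [("Proceed to checkout", "Secure (https) payment page loads")]
```

Written down like this, a script is also a record: anyone can see exactly what was tested, what was expected, and where it went wrong.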

In conclusion, then, poorly tested projects result in dissatisfied users and lost customers. They require lots of support once they are live and consequently have a continuous and unpopular ongoing cost. Successful projects stick to their plans, including testing, are candid about their slippage when it occurs and ensure that when the system does 'go live' it works as it was intended to. A poorly tested product offers a constant reminder of the problems with I.T. A successful project invites further development and, as software developers, that is the goal we should be pursuing.
