
Thursday 14 March 2013

Banking on failure

Over ten years ago I was working for a Major Retail Bank (MRB), responsible for testing their first fully online Internet banking application. I had two teams of around twenty people: one in London for a sister bank and one at head office in what, at times, felt like the Arctic Circle.

We had cabinet upon cabinet of manually completed test scripts, as well as automated test tools checking the less popular browsers. In each location I had two or three test coordinators running the teams on an operational basis, and I had an excellent boss, Cameron Mealyou, supporting me strategically. With all this in place, we delivered very high quality testing and the live launch went without a hitch.

A short time later, a minor amendment was required to the software, effectively a change to some help text. Perhaps on another, lower-profile project I might have agreed to some localised testing, but this was an Internet banking system; we couldn't afford a single incident that might undermine customer confidence. So I proposed a full regression test, employing the entire test team for a week and thereby causing a week's slippage to the delivery of the next release.

This suggestion proved to be unpopular with the programme manager and matters became heated. Having stated my case once, I couldn't see the point in labouring it, so I told the PM he was welcome to overrule me but that I wouldn't sign off the change as tested. I think it was pretty clear that overruling me would have made my position untenable, and he reluctantly agreed to the test.

The testing produced three significant bugs.

When the bugs had been corrected and the release was live, I was in the pub with the development manager, a nice chap called Neil. After a couple of beers, he said to me, "You couldn't possibly have known those bugs were there." Which is an odd thing to say, isn't it? We don't test because we know there are bugs; we test in case there are (although that is usually the case).

A few months later I moved on to a new contract. I was told the test teams would be wound down "because we never have any bugs in the live environment." This is like saying "I think I'll stop exercising because I never have any problems with my fitness"!

I've been reminded of this a few times recently, both when I've encountered problems with my own bank's software and because of the high-profile issues that have been reported in the press. These problems point - undeniably - to a lack of testing.

And it's not that testing is particularly complicated. It's mostly a combination of common sense, rigour and a conscientious approach (rather well suited to those who are a little OCD). It is, however, pretty expensive: there's a lot of preparation involved and the process itself is time-consuming. On many occasions I saw a project manager attempt to cut testing time in order to hit a deadline, and this is a classic reason for bug-ridden live releases.

So, my message to the banks is: stop scrimping on the testing and support your test managers.
