So, as promised, here is the post about the different types of testing. Maybe there aren't thirty-one but there are perhaps more than you might expect, although I think all the ones detailed here will make absolute sense to you.
The first three derive directly from the documentation that is used for large projects but which also has its analogues in some smaller applications. The relevant pieces of documentation are: the business requirements document; the system specification; and the technical specification. I will talk briefly about each of these in turn:
Business requirements document: As its name implies, this document describes the user's requirements. It is a scoping document for the project and describes, at a high level, all of the processing and functionality that is required.
System specification: This describes in more detail how the system will be built, perhaps including screen mock-ups, business rules and a database schema.
Technical specification: This is the document that is used by the developer. It will include specific rules around input validations, how the database should be updated and so forth.
The three main branches of testing - certainly the three that are most commonly used - correspond to each of these three documents. (Sometimes they are shown as a "V model" with the documents on the left arm and the corresponding testing on the right.) These tests are:
Developer testing (sometimes known as 'unit' testing): as its name suggests, this is testing that is carried out by the developer. This is the most basic form of testing yet, in many respects, the most important. Indeed, there is an old IT adage that says that bugs are ten times more expensive to fix for each step down the testing path they remain undetected (not least because of the regression testing involved - see below).
At this stage the developer should test all the screen validations and business rules relating to the screens on which s/he is working. They should also check that the reads and writes from and to the underlying database are working correctly.
It's worth re-emphasising the point that any issues missed at this stage will slow down later stages, when they have to be discovered, returned to the developer for fixing and then retested.
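To make that concrete, here is a minimal sketch of the sort of check a developer might write at this stage, using Python's standard unittest module. The validate_quantity function and its 1-99 rule are invented for illustration, not part of any particular system.

    import unittest

    # Hypothetical validation rule for a quantity field on a screen;
    # the name and the 1-99 range are illustrative only.
    def validate_quantity(value):
        """Accept only whole numbers between 1 and 99."""
        try:
            quantity = int(value)
        except (TypeError, ValueError):
            return False
        return 1 <= quantity <= 99

    class QuantityValidationTest(unittest.TestCase):
        def test_accepts_sensible_values(self):
            self.assertTrue(validate_quantity("1"))
            self.assertTrue(validate_quantity(99))

        def test_rejects_bad_input(self):
            self.assertFalse(validate_quantity("abc"))
            self.assertFalse(validate_quantity(0))
            self.assertFalse(validate_quantity(100))

    if __name__ == "__main__":
        unittest.main()

The same approach extends to the checks on database reads and writes, typically run against a dedicated test database rather than live data.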
System testing: Once the developer testing is complete, the entire system can be pulled together for an end-to-end test. This form of testing is more scenario-based and follows the same journeys that will be used in production. So, for example, when testing an e-commerce application, a system tester would add products and prices, then pose as a customer and make purchases, and then ensure that the orders are recorded and stock levels are amended correctly.
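For illustration, the shape of such a scenario can be sketched in a few lines of Python. The Shop class below is only a stand-in for the real application (in practice the journey would be driven through the user interface or an API), and all the names and figures are invented.

    # Stand-in for the real system; the shape of the journey is the point.
    class Shop:
        def __init__(self):
            self.stock = {}
            self.orders = []

        def add_product(self, name, price, quantity):
            self.stock[name] = {"price": price, "quantity": quantity}

        def purchase(self, name, quantity):
            assert self.stock[name]["quantity"] >= quantity, "insufficient stock"
            self.stock[name]["quantity"] -= quantity
            self.orders.append((name, quantity))

    def test_customer_journey():
        shop = Shop()
        shop.add_product("widget", 4.99, 10)          # administrator sets up the catalogue
        shop.purchase("widget", 3)                    # tester poses as a customer
        assert shop.orders == [("widget", 3)]         # the order is recorded...
        assert shop.stock["widget"]["quantity"] == 7  # ...and the stock level is amended

    test_customer_journey()
    print("customer journey scenario passed")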
User acceptance test: I have blogged about this in some detail before, so I will just say that this is the testing where the user can ensure that what has been delivered matches the brief.
So, if those are the most common forms of testing, what other types might you come across? I have described half a dozen others, below:
Implementation testing: moving code and database changes through test environments needs to be a closely managed process but the move into a live environment can be slightly different and therefore needs separate testing. So, for example, in an e-commerce application, the live transactions to the bank will only run once the software is live. This means this part of the process can only be tested in the production environment.
Regression testing: A test cycle - i.e. a series of related tests - is invalidated as soon as any of the components that were tested is changed. Of course, sometimes it is necessary to change components - if a bug has been found or if the user requests a change to the process - and then the affected test cycles need to be re-run and this is called regression testing.
Volume (or 'bulk') testing: As more and more data is added to a system, so performance begins to change: database response times may slow, screen load times can be affected and lists may become unmanageable. Sometimes these issues can be managed as a system grows but if a change is being released to a large, existing customer base, then it is essential to test against high volumes of data.
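As a rough illustration, a volume test can be as simple as loading a large number of rows and timing a representative query. The sketch below uses Python's built-in sqlite3 module purely as a stand-in database; the row count and the two-second target are illustrative, not real requirements.

    import sqlite3
    import time

    # Populate a throwaway database with a large number of rows...
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    rows = [(None, "customer-%d" % (i % 5000), i * 0.01) for i in range(500000)]
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

    # ...then time a query that a real user journey would rely on.
    start = time.time()
    count = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer = ?", ("customer-42",)
    ).fetchone()[0]
    elapsed = time.time() - start

    print("matched %d rows in %.3f seconds" % (count, elapsed))
    assert elapsed < 2.0, "query slower than the (illustrative) target"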
Load testing: this is related to volume testing (indeed, the two are sometimes combined for operational acceptance testing or OAT). Load testing involves having many, many users accessing the system simultaneously. This can be difficult to simulate and there are specific tools - such as Astra Load Tester - that can be used (at some expense!).
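The dedicated tools do this far more realistically, but for a flavour of the idea, here is a crude sketch using only the Python standard library: it fires a batch of simultaneous requests at a page and reports how many succeed and how long the slowest takes. The URL and the number of users are placeholders.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # placeholder for the test environment
    USERS = 50                       # illustrative number of simultaneous users

    def one_user(_):
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as response:
                ok = response.status == 200
        except Exception:
            ok = False
        return ok, time.time() - start

    # Run all the simulated users at (roughly) the same time.
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_user, range(USERS)))

    successes = sum(1 for ok, _ in results if ok)
    slowest = max(elapsed for _, elapsed in results)
    print("%d/%d requests succeeded, slowest took %.2fs" % (successes, USERS, slowest))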
Automated testing: Sometimes the same test cycles need to be repeated over and over again. An example would be testing a web application against many different operating systems and browsers. There is a high overhead to automated testing; test scripts must be changed to mirror any system changes and, of course, the scripts need testing themselves. However, it does have its place.
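As a sketch of what such a script might look like, the snippet below repeats the same basic journey in two browsers. It assumes Selenium WebDriver and the corresponding browser drivers are installed; the URL, the page title and the 'Basket' link are all invented.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    URL = "http://localhost:8000/"   # placeholder for the test environment

    # The same journey, repeated in each browser.
    BROWSERS = {"firefox": webdriver.Firefox, "chrome": webdriver.Chrome}

    for name, launch in BROWSERS.items():
        driver = launch()
        try:
            driver.get(URL)
            assert "Shop" in driver.title, "unexpected title in %s" % name
            driver.find_element(By.LINK_TEXT, "Basket").click()
            print("%s: basket page reached" % name)
        finally:
            driver.quit()

It also illustrates the maintenance overhead mentioned above: rename the 'Basket' link and every script that clicks it needs changing, and then testing, too.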
Using reports for testing: Sometimes a system can, in part, be used to test itself. If a system has a decent reporting function, then that can be used to check that the system is correctly recording its own activity. So, if the testing started off with twenty widgets in stock and seven have been 'sold' in the tests, then the stock report should show thirteen left. If it doesn't, then either the system or the report needs debugging.
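The check itself is just arithmetic; spelled out with the figures from the example above (and with invented variable names), it amounts to this:

    opening_stock = 20      # widgets in stock when testing began
    units_sold = 7          # recorded by the test transactions
    report_figure = 13      # the figure shown on the system's own stock report

    expected = opening_stock - units_sold
    assert report_figure == expected, (
        "stock report shows %d but %d were expected - either the system "
        "or the report needs debugging" % (report_figure, expected)
    )
    print("stock report reconciles: %d widgets left" % expected)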
Part of the skill of testing is understanding the appropriate tests for an application: a simple application might not need a separate system test, for example. However, two types of testing should always take place before a change is put live: the testing done by the development team and the UAT carried out by the client.
What successful testing looks like.
If you were feeling reckless (or just had a lot of time to fill) you might ask me where I.T. goes wrong. Amongst the many theories and anecdotes that would ensue, I believe one theme would crop up so repeatedly that if someone were, in turn, to ask you for your thoughts on the source of I.T.'s failings, you would say "testing". (Or, perhaps, "would you mind if we talked about something else?")
On Saturday I met up with a man for whom I used to work at the Royal Bank of Scotland Group. We had both been successful test managers - indeed, Cameron still is - but we agreed that ultimately you only need three things to achieve that success: common sense; tenacity; and no desire to be popular.
Testing is a simple science. In my next blog I will talk about the different types of testing but, at a high level, testing is just about making sure that the software works the way it is supposed to. "What could be simpler?" you might reasonably ask, and the answer is: not much. Assuming you have good communication with your client or, better yet, a decent system specification document, you will know what you are building, what the system is supposed to do and, therefore, what you need to test.
For a simple website, that will mean making sure that there are no mistakes in the text, that the images all load, that the links work, that your SEO is in place and that your site conforms to accessibility and W3C standards. For an e-commerce site you will, amongst other things, check the product search, that items can be added to the basket, that VAT is calculated correctly, that your secure certificate is in place and that a customer can actually complete a transaction. And so on.
So if testing is that simple, and it is, how on earth has it ended up being such a prominent and consistent factor in I.T. failures?
Firstly, I think, testing has never been a strong feature in I.T. I was on my fourth I.T. role when I encountered my first colleague whose job was, specifically, testing. He introduced himself as a system tester to universal bemusement. A few years later, in 1995, in fact, I worked with a chap called Graham Bradford who speculated that testing could become big business. I don't think Graham had anticipated the Millennium Bug but he was absolutely right.
By coincidence, I made an uncertain move from systems analysis into testing the following year when IBM interviewed me for the wrong job but gave it to me anyway. In the early days I was delighted to find that I was apparently being paid for old rope - for my common sense, in fact - but I quickly learnt where a test manager proves his worth. Time and again I have seen the time set aside for testing on a project plan effectively viewed as contingency. As development delivery dates slip, the go live date does not and the element that suffers is testing.
And this is where the tenacity comes in: if, as a tester, you are told that you are going to receive the new system for testing two weeks later than planned, then you need to make sure that the live date is pushed back by two weeks, so that you don't lose the time you need. Project managers do not like this. And that's why you need to be prepared to be unpopular.
Furthermore, you need to insist that any change that is applied to a system needs to be tested. A few years ago I was working for a company who wanted to make some "simple changes" to the website. We had a lot of clients using the site for financial transactions and I insisted that any changes needed to be tested. I was told that the changes were not functional and that there was no need for testing. I dug my heels in and told the project manager that whilst I couldn't stop the changes going live, I certainly wouldn't sign them off. Eventually, with an attendant dip in my popularity ratings, the testing was authorised.
Lo and behold, we found some issues. There was no ticker tape parade and no bounce in my popularity, just a few gripes about how I couldn't have known there would be any bugs. Well, of course I couldn't. But at least the next time a "simple change" came along, I had a good argument to fall back on.
So, what does successful testing look like? Well, in detail, it looks like a test strategy and test plans and test scripts and good test data and a strong test team. But most importantly it looks like a decent amount of time allocated to testing and for that time to be guarded jealously and not squandered to compensate for other problems on a project.
In conclusion, then, poorly tested projects result in dissatisfied users and lost customers. They require lots of support once they are live and consequently have a continuous and unpopular ongoing cost. Successful projects stick to their plans, including testing, are candid about their slippage when it occurs and ensure that when the system does 'go live' it works as it was intended to. A poorly tested product offers a constant reminder of the problems with I.T. A successful project invites further development and, as software developers, that is the goal we should be pursuing.