w: www.meantime.co.uk
t: 01539 737 766

Wednesday 30 September 2009

31 flavours of testing

So, as promised, here is the post about the different types of testing. Maybe there aren't thirty-one, but there are perhaps more than you might expect, although I think all the ones detailed here will make absolute sense to you.

The first three derive directly from the documentation that is used for large projects but which also has its analogues in some smaller applications. The relevant pieces of documentation are: the business requirements document; the system specification; and the technical specification. I will talk briefly about each of these in turn:

Business requirements document: As its name implies, this document describes the user's requirements. It is a scoping document for the project and describes, at a high level, all of the processing and functionality that is required.

System specification: This describes in more detail how the system will be built, perhaps including screen mock-ups, business rules and a database schema.

Technical specification: This is the document that is used by the developer. It will include specific rules around input validations, how the database should be updated and so forth.

The three main branches of testing - certainly the three that are most commonly used - correspond to each of these three documents. (Sometimes they are shown as a "V model" with the documents on the left arm and the corresponding testing on the right.) These tests are:

Developer testing (sometimes known as 'unit' testing): as its name suggests, this is testing that is carried out by the developer. This is the most basic form of testing yet, in many respects, the most important. Indeed, there is an old IT adage that says bugs are ten times more expensive to fix for each step down the testing path they remain undetected (not least because of the regression testing involved; see below).

At this stage the developer should test all the screen validations and business rules relating to the screens on which s/he is working. They should also check that the reads and writes from and to the underlying database are working correctly.

It's worth re-emphasising the point that any issues missed at this stage will slow down later stages when they are discovered, returned to the developer for fixing and then the testing is repeated.
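To make the idea concrete, here is a sketch of what developer testing might look like. The function and the 10% discount rule are invented purely for illustration; the point is that each test exercises one business rule in isolation, before the code ever reaches system testing:

```python
# A hypothetical business rule, assumed for illustration:
# an order total is the sum of the line prices, with a 10%
# discount applied to orders over 100.
def calculate_order_total(line_prices):
    total = sum(line_prices)
    if total > 100:
        total *= 0.9
    return round(total, 2)

# Developer (unit) tests: each checks a single rule in isolation.
def test_simple_total():
    assert calculate_order_total([10, 20]) == 30

def test_discount_applied_over_threshold():
    assert calculate_order_total([60, 60]) == 108.0

def test_empty_order():
    assert calculate_order_total([]) == 0
```

A bug caught by one of these tests costs a few minutes to fix; the same bug found during UAT costs a round trip through the whole testing path.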

System testing: Once the developer testing is complete, the entire system can be pulled together for an end-to-end test. This form of testing is more scenario-based and runs along the same journeys that will be used in production. So, for example, testing an e-commerce application, a system tester would add products and prices, then pose as a customer and make purchases, and then ensure that the orders are recorded and stock levels are amended correctly.
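That e-commerce journey can be sketched in miniature. The in-memory `Shop` class below is a stand-in for a real system (all the names are invented for illustration); what matters is the shape of the scenario, which mirrors a real customer journey from catalogue set-up through to checking orders and stock:

```python
# A minimal in-memory 'shop' standing in for a real e-commerce
# system; the class and its methods are invented for illustration.
class Shop:
    def __init__(self):
        self.stock = {}
        self.prices = {}
        self.orders = []

    def add_product(self, name, price, quantity):
        self.prices[name] = price
        self.stock[name] = quantity

    def purchase(self, name, quantity):
        if self.stock.get(name, 0) < quantity:
            raise ValueError("insufficient stock")
        self.stock[name] -= quantity
        self.orders.append((name, quantity, self.prices[name] * quantity))

# The end-to-end scenario: set up the catalogue, pose as a
# customer, then check orders and stock levels afterwards.
def run_scenario():
    shop = Shop()
    shop.add_product("widget", 5.00, 20)
    shop.purchase("widget", 7)
    assert shop.stock["widget"] == 13            # stock amended correctly
    assert shop.orders == [("widget", 7, 35.0)]  # order recorded
    return shop
```

Unlike the unit tests, this crosses several parts of the system in one journey, which is exactly what makes it a system test.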

User acceptance test: I have blogged about this in some detail before, so I will just say that this is the testing where the user can ensure that what has been delivered matches the brief.

So, if those are the most common forms of testing, what other types might you come across? I have described half a dozen others, below:

Implementation testing: moving code and database changes through test environments needs to be a closely managed process but the move into a live environment can be slightly different and therefore needs separate testing. So, for example, in an e-commerce application, the live transactions to the bank will only run once the software is live. This means this part of the process can only be tested in the production environment.

Regression testing: A test cycle - i.e. a series of related tests - is invalidated as soon as any of the components that were tested is changed. Of course, sometimes it is necessary to change components - if a bug has been found or if the user requests a change to the process - and then the affected test cycles need to be re-run and this is called regression testing.

Volume (or 'bulk') testing: As more and more data is added to a system, so performance begins to change: database response times may slow, screen load times can be affected and lists may become unmanageable. Sometimes these issues can be managed as a system grows but if a change is being released to a large, existing customer base, then it is essential to test against high volumes of data.
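A crude volume-test sketch, using SQLite as a stand-in for the real database (the table and query are invented for illustration): load the system with progressively more rows and time the same query at each size, so you can see whether response times degrade as the data grows:

```python
import sqlite3
import time

# Time one representative query against a table loaded with
# 'rows' rows of generated data; the schema is illustrative only.
def time_query_at_volume(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
    db.executemany(
        "INSERT INTO orders (customer) VALUES (?)",
        ((f"customer-{i}",) for i in range(rows)),
    )
    start = time.perf_counter()
    db.execute(
        "SELECT COUNT(*) FROM orders WHERE customer LIKE 'customer-9%'"
    ).fetchone()
    return time.perf_counter() - start

# Compare timings at small and large volumes.
for n in (1_000, 100_000):
    print(n, "rows:", time_query_at_volume(n))
```

The absolute numbers matter less than the trend: if the timing grows much faster than the row count, that is the problem a large existing customer base would hit on day one.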

Load testing: this is related to volume testing (indeed, the two are sometimes combined for operational acceptance testing or OAT). Load testing involves having many, many users accessing the system simultaneously. This can be difficult to simulate and there are specific tools - such as Astra Load Tester - that can be used (at some expense!).
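At its simplest, simulating simultaneous users just means firing requests concurrently and watching the response times. The sketch below uses a thread pool to play the part of many users; `make_request` is a placeholder (the sleep stands in for a real HTTP call), so this is a shape, not a substitute for a proper load-testing tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A stand-in for a request to the system under test; in a real
# load test this would be an HTTP call. The sleep simulates
# server processing time.
def make_request(_):
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

# Fire 'requests' simulated requests across 'users' concurrent
# workers and report the slowest response time observed.
def run_load_test(requests=100, users=20):
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(make_request, range(requests)))
    return max(timings)

print(f"worst response time: {run_load_test():.3f}s")
```

The dedicated tools earn their cost by doing what this cannot: generating realistic traffic patterns from many machines and correlating the results with server-side metrics.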

Automated testing: Sometimes the same test cycles need to be repeated over and over again. An example would be testing a web application against many different operating systems and browsers. There is a high overhead to automated testing; test scripts must be changed to mirror any system changes and, of course, the scripts need testing themselves. However, it does have its place.

Using reports for testing: Sometimes a system can, in part, be used to test itself. If a system has a decent reporting function, then that can be used to check that the system is correctly recording its own activity. So, if the testing started off with twenty widgets in stock and seven have been 'sold' in the tests, then the stock report should show thirteen left. If it doesn't, then either the system or the report needs debugging.
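The widget arithmetic above reduces to a one-line reconciliation check, sketched here (the function name and figures are invented for illustration):

```python
# Does the stock report agree with the activity the tests drove
# through the system? Opening stock minus units sold should equal
# what the report shows.
def report_agrees(opening_stock, units_sold, reported_remaining):
    return opening_stock - units_sold == reported_remaining

# Twenty widgets in stock, seven 'sold' during the tests:
assert report_agrees(20, 7, 13)      # report shows thirteen: all is well
assert not report_agrees(20, 7, 14)  # mismatch: system or report needs debugging
```

The appeal of this technique is that one check exercises two things at once: the transactional side of the system and the reporting side.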

Part of the skill of testing is understanding the appropriate tests for an application: a simple application might not need a separate system test, for example. However, two types of testing should always take place before a change is put live: the testing done by the development team and the UAT carried out by the client.
