
Thursday 31 December 2009

2009/2010 - what's old and what's new?

One of the benefits of hindsight is that the dust has had a chance to settle, the echoes of the hype have died away and one can see what actually happened.

So, in retrospect, what has 2009 given us? We have Windows 7 from Microsoft, the increasing ubiquity of Apple's iPhone and further diversification from Google, while the new(ish) kids on the block - Facebook and Twitter - have become firmly entrenched in our lives.

I must say I am surprised at the popularity of Windows 7, not because I don't like it - I do, very much - but because, as a user, I don't see much difference from Vista, and I could never understand why that received such a bad press. But certainly for a lot of companies who have 'rested' on Windows XP, this is an opportunity to upgrade and one that shouldn't be missed; technology is not a one-off investment and needs to be 'topped up'. Hopefully, we will also see local authorities and the rest of the public sector make a long overdue move from Internet Explorer 6. Whether that is a conservative move to Internet Explorer 8 or a bolder move to Chrome or Firefox is less of a concern.

While Microsoft continues to polish its golden eggs, Apple have continued to proceed apace with the iPod/iPhone technology. I have a lot of respect for Apple - although I must confess that I find the blind devotion of many of their aficionados rather off-putting - and their established strategy of tying their software to prescribed hardware continues to pay dividends in driving technology interfaces forwards.

Google's ongoing growth is perhaps easier to define in financial terms than anything else. While the core search engine remains largely unchanged apart from its logo (the subtle '20th Anniversary of the Wallace and Gromit Characters' being my favourite from this year), we have seen a slow take-up of Google Docs and the guarded release of Google Wave. Whilst I admire Google's drive to diversify rather than resting on its search engine, I can't say that I see Google Docs winning huge take-up from business in the short term - it's too unclear who can see what and just where the data is being held - and I've been underwhelmed by the time I've spent evaluating Wave. For business, I think time would be better spent on Microsoft's excellent - but obscure - Groove software, which is very good for sharing documents and working collaboratively on a shared set of assets.

Facebook and Twitter both became firmly established this last year, thoroughly dividing those with exposure to them when it comes to any discussion about their use and value. Facebook is probably the most expensive diversion ever developed, having taken tens of millions in investment yet reporting a profit for the first time only late in 2009, despite a reported half a billion users - one third of web users in the world! - being signed up. (It's worth remembering that AOL, valued at $162B ten years ago on the basis of its users, is now worth around $3B.) Twitter's value is even harder to pin down and, with no fees or advertising to support it, the company was no doubt very relieved by the massive cash injection from Bing and Google a couple of months ago, when both search engines bought licences to Twitter's content. I must say that while I can see Facebook generating a medium-term profit with better, targeted advertising, I'm still struggling to see where Twitter is heading (and that is speaking as a user).

And if that's what we've had over the last year, what do the next twelve months hold? The industry and the web being what they are, there are no doubt some surprises out there and I'm not going to try to call them. What I would like to do is briefly look at the scheduled releases and try to extrapolate some of the trends that we have seen. From the big players, we have Microsoft's Office 2010, Apple's tablet, and Google's Chrome operating system to look forward to, as well as the Android OS for 'phones. I think it's also worth considering the influx of 'apps' into our lives and where the so-called Web 2.0 technologies are taking us. Finally, the mysterious 'cloud' that is appearing in the media is worth a word or two.

Office 2010 has been released as a beta test version, which I've been running for a few weeks now. There has been some refinement of the new interface introduced in Office 2007, which has also been belatedly applied to Outlook, but I must say I've not come across any major improvements. Microsoft Groove, mentioned above, has been rebranded as 'SharePoint Workspace', although this is misleading as it provides an alternative to SharePoint. On the whole, I can't see much incentive for business to upgrade, except for the consideration that this is the software where the most focus will be for security patches and ongoing development.

Probably the most exciting hardware launch of the year - assuming it actually happens, of course - will be Apple's tablet, essentially an iPhone the size of a netbook. The iPod Touch and iPhone have stepped up the introduction of internet technology into our lives significantly: there has never been so much development - or purchasing - of small applications. The only limitation of these devices so far has been size, and the tablet, roughly ten inches long, will get around that. (The fashion industry has been trying to introduce a 'man bag' for years and that may now happen, as we need something fashionable in which to carry around our iSlates or whatever they end up being called.)

Less exciting, but possibly of equal significance in the long term, is Google's Chrome operating system. It was perhaps a bit confusing to give it the same name as their (slightly underwhelming) browser but there is some logic. As I understand it, the Chrome OS will run all the components of a netbook - sound card, monitor etc - but from an application perspective will consist of little more than a browser. If you are a confirmed user of Google Docs as well as other online apps such as Spotify, then this makes some kind of sense. Whether people are willing to run everything off the web and keep nothing local is, I think, very doubtful in the short term, but Google are in that elite club of companies who can afford to play a very long game, laying foundations now for operating paradigms that may only be widely adopted in a few years' time.

Similarly, while its Android operating system and associated mobile devices (such as the Nexus One, also due for release in 2010) may seem like a lost cause when placed against the iPhone, Apple have made notable mistakes and had significant failures in the past - indeed, their fortunes were wholly resurrected by the return of Steve Jobs and the development of the iPod - and I think it's safe to assume that Google has its eye on the long game here, too.

Whether users have an iPhone or an Android device, though, they will have one thing in common and that is an increasing appetite for apps to run on those devices. Indeed, as we update our status using the Facebook app and search for items to buy through Amazon's app, it's tempting to think that all these technologies are converging. The truth, though, is more subtle than that. The internet has provided us with a common platform and for the foreseeable future any snapshot of our personal and professional technologies is going to consist of the devices that we use to access the internet and the applications we use to manipulate the data that we store there. And that includes business information, of course.

Any forward-looking business will now have a web presence of some description and will probably be storing and maintaining data in that environment: we now have an increasing range of options for accessing and using that data. One question that naturally arises from this is how to anticipate how much space and processing power we will need to support these new ways of working. The answer to this, in theory, is 'the cloud'. At a high level and from a user's perspective, this is an environment that 'extends' to meet the demands placed on it. Thus, as you require more space or processing power, the cloud dynamically adapts to give you those extra resources. I'd say it's not where it needs to be yet - not least because it's not PCI compliant - but I would be very surprised if it is not an integral part of the way we use the Internet within the next two years. Certainly at Meantime we will be migrating some simpler sites onto the cloud this coming year.

In conclusion then, there doesn't seem to be any reduction in the onward momentum around every aspect of the IT industry as we see exciting and innovative developments in hardware, operating systems, infrastructure, applications and communications. As ever, the trick for business is to stay aware of these developments, keep a safe distance from the 'bleeding edge' (neither too close nor too far behind) and be alert to where these changes can benefit the way in which we all work, considering when they can and should be introduced. One thing that I do anticipate this year is the beginning of requests from our clients for iPhone applications, almost certainly the extension of their extranets out onto the iPhone platform as a means of communicating with their staff, suppliers, clients and customers. And I'm very excited about that!

Tuesday 29 December 2009

You can have what you want but only if you ask for it.

Sometimes when I'm about to start writing a post, I think to myself that I can't write about something so obvious but then I usually consider the incident that has prompted the post and realise that, especially when it comes to IT, there's a lot of truth in the old adage that there's nothing common about common sense.

In this particular case, I was talking to an acquaintance who has a friend - let's call her Julie - who works for a well known IT provider. Julie's job is to sort out what this provider is going to deliver after a contract has been signed. The company for whom she works has just signed a huge contract with a well known mobile 'phone company: a project has been determined, a cost agreed, a contract signed and now Julie has to work out what will actually be delivered as part of that deal.

Putting this into a household context highlights the absurdity of the situation. Let's say you decide to have your kitchen refurbished. Uncertain of exactly what it is you're buying, you might decide to engage a large, nationally known company to come and measure up with a view to doing the work. A representative turns up at your house and you say to him, "I'd like a new kitchen, please". You'd certainly expect him to take a look at your kitchen and maybe even ask you for a few ideas about what you want.

At that point, would you sign a contract and agree to hand over your money upon completion? Of course not. You'd want to see some designs, talk about materials and colour schemes, perhaps look at the prices the company was going to charge for your appliances.

So why is business so very different? Well, to be fair, it isn't always. We do find that with a lot of SMEs, particularly those owned by one person or a small group of people, a great deal of care is taken to understand where the money will be spent. But as a simple rule of thumb, we find the larger the business, the less detail we are asked to provide.

However, whilst a relaxed attitude such as this might seem like an opportunity to get away with being a little less diligent, we like to be absolutely sure what it is that we are being asked to deliver. There must be a specification of some sort. Even for a small job - say, a text change on a website - where the request might take the form of a 'phone call, we will still follow up with an email to confirm what we are proposing to do. But for a job that is going to cost tens of thousands of pounds, surely it makes absolute sense for both sides to have a clear specification?

There are a couple of principal reasons, in my experience, why this doesn't happen. (Three, if you count sheer negligence.)

Firstly, the client doesn't understand what they are buying. This is a very legitimate problem. As I have mentioned before in my postings, people are usually pretty good at buying something they can understand, business cards or brochures, for example. Buying a website and, particularly, software is very different. For example, two years ago we put a site live for a new client. It was a fairly straightforward build job and it had passed through UAT and been signed off by the client without incident. So I was surprised when the client rang with a complaint: they had looked on Google and they couldn't find the site. Now I was able to explain that Google and other search engines don't work instantaneously but there was no reason for the client to know that. This isn't a great example because it would have been hard to trap this expectation at any point in the development but it does highlight the fact that you have to work hard to anticipate and manage your client's expectations.

The other problem is when the client doesn't have the time or inclination to get into the detail of the work. A very good example of this is when a client wants a bespoke e-commerce site. During my initial conversations with them, they will often say something along the lines of "we both know what we're talking about: just an e-commerce site". However, when the site is delivered and suddenly it does get some attention, you can find that what the client was talking about was Amazon.

As a footnote to the examples above, this leaves negligence. There used to be a saying that "no one ever got sacked for buying IBM", the implication being that if you hire a big 'name' company, no one can blame you if the deliverable isn't up to scratch.

So, the bottom line here is that a decent specification protects both sides in the business relationship. As a provider, we know exactly what we're committed to delivering and the client knows what they are buying. This enables us to quote price and time-scales accurately: in fact, I don't know how we would stick to our promise to deliver on time and on budget if we didn't do this.

For larger jobs, we may even charge for the preparation of the specification. This is not always an easy sell but in addition to protecting both parties once it comes to development, it also means we have the time to do really detailed design and analysis. Furthermore, the client can use the specification as a tender document to get the best price for development. I'm pleased to say that, so far, the development work has always come back to us but with the specification done properly, this is not a done deal.

Sunday 29 November 2009

Your databases are your crown jewels, so treat them with care.

For what ought to be a fairly precise industry, computing is surprisingly weak in a few semantic areas and one of those is the term 'database'. A database is, essentially, any store of data: it might be an Excel spreadsheet of customer names or even a Word document of contact addresses.

More properly, we use the term to describe a formally structured or organised datastore of information. These datastores are created, managed and accessed by specific software, the most common example being Microsoft's Access.

The data is stored in different 'tables' and these are linked together using shared data items. So, for example, customer names and details might be held in one table while the customer addresses are in another. Each address is on a separate row and has a unique reference, and this reference is also held on the customer rows to which it applies.
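
To make that concrete, here is a minimal sketch of the two-table structure in Python, using its built-in SQLite support (the table and column names are invented for the example; any relational database would do the same job):

    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway database for the example
    conn.executescript("""
        CREATE TABLE address (
            address_id INTEGER PRIMARY KEY,  -- the unique reference
            street     TEXT,
            post_code  TEXT
        );
        CREATE TABLE customer (
            customer_id INTEGER PRIMARY KEY,
            name        TEXT,
            address_id  INTEGER REFERENCES address(address_id)
        );
    """)

    # Two customers at the same address: the address is stored once only
    conn.execute("INSERT INTO address VALUES (1, '1 High Street', 'LA9 4DL')")
    conn.execute("INSERT INTO customer VALUES (1, 'J Smith', 1)")
    conn.execute("INSERT INTO customer VALUES (2, 'A Smith', 1)")

    # The shared reference lets us put names and addresses back together
    rows = conn.execute("""
        SELECT customer.name, address.street
        FROM customer JOIN address ON address.address_id = customer.address_id
    """)
    for name, street in rows:
        print(name, "lives at", street)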

The example above consists of two tables but a small business system will comfortably have ten times that number. Indeed, database design is an art in itself. When we take on a new project and the initial business analysis is complete, we will sketch out a data model - an initial design for the database - to check our understanding and identify areas requiring further analysis. Over the years we have found that once we have the data model right, many aspects of the system we are constructing fall into place.

As well as being the foundation of a new system, the database remains the most important feature. Screens can be badly designed, load times may be poor, navigation may be unintuitive but these are all faults that can be fixed and, once fixed, there will be no sign they were any different.

Let me break out here and tell you a story from earlier in my career. I was working as a test manager on a migration project for a large financial institution. All of the existing data - clients, accounts, balances etc - was being moved from a system that was twenty years old onto a new one. My job was to make sure that all the data transferred correctly from the old system to the new one.

Everything went well in testing and the time came for us to do a trial migration. As part of the migration process, we generated a lot of reports so we could tally the numbers between the original system and the new one: number of clients, number of accounts, total value and so on. We even threw in some logging with no immediate business value - average number of accounts per client, mean total per client - just so we could make sure all the numbers matched.
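
To give a flavour of the idea, here is a minimal sketch of that kind of tallying in Python. It assumes each system can dump its client/account relationships as simple (client, account) pairs; the sample data is invented purely to make the example runnable:

    # Invented extracts from the 'old' and 'new' systems
    old_pairs = [("C1", "A1"), ("C1", "A2"), ("C2", "A3")]
    new_pairs = [("C1", "A1"), ("C1", "A2"), ("C2", "A3")]

    def summarise(pairs):
        clients = {client for client, _ in pairs}
        return {
            "clients": len(clients),
            "accounts": len(pairs),
            "mean accounts per client": len(pairs) / len(clients),
        }

    old, new = summarise(old_pairs), summarise(new_pairs)
    for measure in old:
        status = "OK" if old[measure] == new[measure] else "MISMATCH"
        print(f"{measure}: old={old[measure]}, new={new[measure]} -> {status}")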

The testing had been thorough and I was very pleased with what we'd done to date so it's no exaggeration to say that I was horrified when our test run against live data returned results best described as nonsense. I came clean at the project board meeting and assured my colleagues that we'd get to the bottom of the matter.

Two weeks went by and the results against our test data continued to be spot on, as you would expect, but the more we delved into the live data, the more confused we became. Armed with a set of inexplicable examples I returned to the project board and a member of the user group came clean: at some point in the system's history, there was a bug in the code that meant that the relationship between a client and their accounts was not recorded properly. Without these relationships in place, of course, we had no way of doing a meaningful comparison between the old system and the new one. Furthermore, we had to - effectively - build bugs into the new system to accommodate this 'junk' data from the old one.

Quite apart from the importance of testing, this story emphasises the fact that if there is one aspect of an IT system where you cannot afford to make mistakes, it is with your data. Once data is lost or corrupted, there is no easy way of correcting the problem. Indeed, having 'holes' in your data can commit you to a future of weak software, where the processes and validations have to be weakened to compensate for the mistakes that you have made in the past.

When businesses change hands, websites and IT systems are assets of the sale and so, of course, is the data. If you aren't selling your business, then your data remains at the heart of what you do. Either way, it is vital that you take care of that information. Here are my recommendations for doing just that:

- Make sure that you, your IT department or your IT provider understand your data.
- When building a system, ensure that your data is captured in an appropriate structure.
- Back up all your data daily.
- When you make changes to the system, particularly - but by no means exclusively - to your data model, ensure that you carry out the appropriate amount of regression testing so that your data is not adversely affected.

Prevention is better than cure, but what do you do if you have data that is corrupted? You need to cater for the existing data that is substandard but that is no reason to weaken the processing around your new data. The simplest measure to take is to flag your old data so that validations know when to apply strict rules and when to apply weaker ones. An example would be if you insist that anyone registering with you has a post code but this rule was, at some point, not enforced. By flagging those addresses with no post code as, say, 'weak data', you can ignore the post code validation when processing those addresses, whilst being as rigorous as you like with data that is not affected.
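
As a minimal sketch of that flagging approach (the field names and rules here are invented for illustration):

    def validate_address(address):
        """Apply strict rules to new data; relax them for flagged legacy rows."""
        errors = []
        if not address.get("street"):
            errors.append("street is required")
        # The post code is mandatory, unless this row pre-dates the rule
        # and was flagged as 'weak data' during the clean-up exercise.
        if not address.get("post_code") and not address.get("weak_data"):
            errors.append("post code is required")
        return errors

    print(validate_address({"street": "1 High St"}))                     # rejected
    print(validate_address({"street": "1 High St", "weak_data": True}))  # accepted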

The bottom line here is to appreciate your data. Your understanding of your business almost certainly maps down to the data (whether you are aware of that or not!) and you must make sure that the people who build and manage your systems take the time to listen to you and thus share that understanding. If your data is corrupted or lost, that is equivalent to losing a filing cabinet in a fire, so you need to protect the data you have and treat it with respect. I think I have mentioned before that of the businesses located in the twin towers, a third went under after 9/11 because they had no backups of their business data.

Wednesday 21 October 2009

Mind the gap! The Ecommerce Expo 2009.

A few years ago I set my parents up with a broadband connection via Demon Internet (now Thus plc). As they live in London, they were able to get a fast service - at least for the time - and they never had a problem with it: they went to the PC, the broadband was available and worked without a hitch. About eighteen months ago, however, my father saw an advert for TalkTalk, offering broadband as well as 'phone services for only £6.99 a month, which was a third of what he was paying Demon.

Perhaps I was lazy or perhaps I didn't feel up to trying to argue against TalkTalk's excellent advertising campaign but I rolled over and said it sounded like a good deal, so why not go ahead? As you might have already guessed, there followed weeks of 'phone calls to support numbers, promises to call back that were never fulfilled and a very unhappy father. Even now, over a year later, the service is patchy and more akin to what one might expect in a cottage in Cumbria, five miles from the nearest exchange.

So, having set the example of how the marketing of a product can be very different from the actual delivered goods or services, let me turn my attention to this year's Ecommerce Expo. Yesterday I went down to Earls Court* for the day to catch up with where the sector is regarding Ecommerce and its related industries.

There were certainly some interesting ideas around payment methods, although nothing that struck me as an alternative worth considering to SagePay, which is our preferred solution for ecommerce payments. One apparently global company I spoke to even seemed surprised at the notion of validating cards but not taking payment until shipping. Have they never used Amazon?

My journey down was certainly made worthwhile by the invaluable twenty minutes I had with a technical advisor from Rackspace, who confirmed some of my concerns about cloud computing and made me think that we'll revisit that next year. The alternative solutions that he offered, though, were sound and intelligent. As a company, they continue to impress me.

Of course, I spent a lot of time visiting the stands of our competitors, most of whom were selling package solutions. (Dydacomp were there touting their MOM product, although they seemed to have fewer angry customers around their stall than last year, possibly because they have belatedly managed to release a PCI compliant version.) What was notable about all the stalls - bar one, which I'll come to in a moment - was that they all seemed to be offering the same things - a better customer experience, better sales, email marketing, SEO (of course), more effective order processing and so on - but there was very little in the way of customer testimonials. I must say I was very envious of the brand management company who had a glowing reference from Sainsburys, and the PayPal stand predictably had some impressive names and logos on it, but they were the exception.

However, I know two or three of these testimonial-free companies by reputation (from their ex-clients), and I know that the service they offer is not always as slick as they would like. And, to be clear, I think that's true of any small business. The problem, then, is this gap between the marketing and the deliverable. Spend enough money with a marketing firm, rent sufficient floor space at the Expo and your company can look like a million dollars. But, if everyone is saying the same thing, what's the point?

What was the one stall that was different? UKFast had by far the largest stand: they had set up a Formula One car with a plasma screen in front of it, so visitors could play a racing game, and they backed this up with a bar and half a dozen attractive young women dressed in shorts and high heels. My knee-jerk reaction was "how passé" but ultimately, if there was a game to be played at the Expo of having the most attractive stand, then rather than putting up big posters with a load of claims they might or might not be able to deliver on, UKFast went straight for the jugular and they "won".

I wouldn't claim to be the first to highlight the gap between the well-functioning marketing department and the product or service being delivered but it did strike me yesterday that this year's Expo was all about saying what your company could do without really backing it up. In a way, this reflects the issue we had with our own website: when there are so many sites out there telling you that their company can build you the best website, ecommerce solution, SEO etc etc, then how do you compete and stand out? Surely the only differentiator is the testimonials page that tells you not what a company says it can do but what it has actually done. Here's ours: www.meantime.co.uk/testimonials.php.

*I really will send a box of chocolates to anyone who can explain the apostrophe anomaly between Earls Court and Earl's Court Road.

Wednesday 30 September 2009

31 flavours of testing

So, as promised, here is the post about the different types of testing. Maybe there aren't thirty-one but there are perhaps more than you might expect, although I think all the ones detailed here will make absolute sense to you.

The first three derive directly from the documentation that is used for large projects but which also has its analogues in some smaller applications. The relevant pieces of documentation are: the business requirements document; the system specification; and the technical specification. I will talk briefly about each of these in turn:

Business requirements document: As its name implies, this document describes the user's requirements. It is a scoping document for the project and describes, at a high level, all of the processing and functionality that is required.

System specification: This describes in more detail how the system will be built, perhaps including screen mock-ups, business rules and a database schema.

Technical specification: This is the document that is used by the developer. It will include specific rules around input validations, how the database should be updated and so forth.

The three main branches of testing - certainly the three that are most commonly used - correspond to each of these three documents. (Sometimes they are shown as a "V model" with the documents on the left arm and the corresponding testing on the right.) These tests are:

Developer testing (sometimes known as 'unit' testing): as its name suggests, this is testing that is carried out by the developer. This is the most basic form of testing yet, in many respects, the most important. Indeed, there is an old IT adage that bugs are ten times more expensive to fix for each step down the testing path they remain undetected, not least because of the regression testing involved (see below).

At this stage the developer should test all the screen validations and business rules relating to the screens on which s/he is working. They should also check that the reads and writes from and to the underlying database are working correctly.

It's worth re-emphasising the point that any issues missed at this stage will slow down later stages, when they are discovered, returned to the developer for fixing and the testing is then repeated.

System testing: Once the developer testing is complete, the entire system can be pulled together for an end-to-end test. This form of testing is more scenario based and runs along the same journeys that will be used in production. So, for example, when testing an e-commerce application, a system tester would add products and prices, then pose as a customer and make purchases, and then ensure that the orders are recorded and stock levels are amended correctly.

User acceptance test: I have blogged about this in some detail before, so I will just say that this is the testing where the user can ensure that what has been delivered matches the brief.

So, if those are the most common forms of testing, what other types might you come across? I have described half a dozen others, below:

Implementation testing: moving code and database changes through test environments needs to be a closely managed process but the move into a live environment can be slightly different and therefore needs separate testing. So, for example, in an e-commerce application, the live transactions to the bank will only run once the software is live. This means this part of the process can only be tested in the production environment.

Regression testing: A test cycle - i.e. a series of related tests - is invalidated as soon as any of the components that were tested is changed. Of course, sometimes it is necessary to change components - if a bug has been found or if the user requests a change to the process - and then the affected test cycles need to be re-run: this is called regression testing.

Volume (or 'bulk') testing: As more and more data is added to a system, so performance begins to change: database response times may slow, screen load times can be affected and lists may become unmanageable. Sometimes these issues can be managed as a system grows but if a change is being released to a large, existing customer base, then it is essential to test against high volumes of data.

Load testing: this is related to volume testing (indeed, the two are sometimes combined for operational acceptance testing or OAT). Load testing involves having many, many users accessing the system simultaneously. This can be difficult to simulate and there are specific tools - such as Astra Load Tester - that can be used (at some expense!).

Automated testing: Sometimes the same test cycles need to be repeated over and over again. An example would be testing a web application against many different operating systems and browsers. There is a high overhead to automated testing; test scripts must be changed to mirror any system changes and, of course, the scripts need testing themselves. However, it does have its place.

Using reports for testing: Sometimes a system can, in part, be used to test itself. If a system has a decent reporting function, then that can be used to check that the system is correctly recording its own activity. So, if the testing started off with twenty widgets in stock and seven have been 'sold' in the tests, then the stock report should show thirteen left. If it doesn't, then either the system or the report needs debugging.
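
In code, such a check is nothing more than an assertion that the report agrees with the test activity. A minimal sketch, using the widget figures above (in a real test the reported figure would be read from the report itself rather than hard-coded):

    opening_stock = 20
    sold_in_tests = 7
    reported_stock = 13  # what the system's stock report claims is left

    expected_stock = opening_stock - sold_in_tests
    assert reported_stock == expected_stock, (
        f"report shows {reported_stock}, expected {expected_stock}: "
        "either the system or the report needs debugging"
    )
    print("Stock report tallies with test activity")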

Part of the skill of testing is understanding the appropriate tests for an application: a simple application might not need a separate system test, for example. However, two types of testing should always take place before a change is put live: the testing done by the development team and the UAT carried out by the client.

Tuesday 15 September 2009

What successful testing looks like.

If you were feeling reckless (or just had a lot of time to fill) you might ask me where I.T. goes wrong. Amongst the many theories and anecdotes that would ensue, I believe there is one theme that would crop up so repeatedly that if someone were, in turn, to ask you for your thoughts on the source of I.T.'s failings, you would say "testing". (Or, perhaps, "would you mind if we talked about something else?")

On Saturday I met up with a man for whom I used to work at the Royal Bank of Scotland Group. We had both been successful test managers - indeed, Cameron still is - but we agreed that ultimately you only need three things to achieve that success: common sense; tenacity; and no desire to be popular.

Testing is a simple science. In my next blog I will talk about the different types of testing but, at a high level, testing is just about making sure that the software works the way it is supposed to. "What could be simpler?" you might reasonably ask, and the answer is: not much. Assuming you have good communication with your client or, better yet, a decent system specification document, you will know what you are building and what the system is supposed to do and, therefore, what you need to test.

For a simple website, that will mean making sure that there are no mistakes in the text, that the images all load and that the links work, that your SEO is in place and that your site conforms to accessibility and W3C standards. For an e-commerce site you will, amongst other things, check the product search, that items can be added to the basket, that VAT is calculated correctly, that your secure certificate is in place and that a customer can actually complete a transaction. And so on.

So if testing is that simple - and it is - how on earth has it ended up being so prominent and consistent a factor in I.T. failures?

Firstly, I think, testing has never been a strong feature in I.T. I was on my fourth I.T. role when I encountered my first colleague whose job was, specifically, testing. He introduced himself as a system tester to universal bemusement. A few years later, in 1995, in fact, I worked with a chap called Graham Bradford who speculated that testing could become big business. I don't think Graham had anticipated the Millennium Bug but he was absolutely right.

By coincidence, I made an uncertain move from systems analysis into testing the following year when IBM interviewed me for the wrong job but gave it to me anyway. In the early days I was delighted to find that I was apparently being paid money for old rope - for my common sense, in fact - but I quickly learnt where a test manager proves his worth. Time and again I have seen the time set aside for testing on a project plan effectively viewed as contingency. As development delivery dates slip, the go-live date does not and the element that suffers is testing.

And this is where the tenacity comes in: if, as a tester, you are told that you are going to receive the new system for testing two weeks later than planned, then you need to make sure that the live date is pushed back by two weeks, so that you don't lose the time you need. Project managers do not like this. And that's why you need to be prepared to be unpopular.

Furthermore, you need to insist that any change that is applied to a system needs to be tested. A few years ago I was working for a company who wanted to make some "simple changes" to the website. We had a lot of clients using the site for financial transactions and I insisted that any changes needed to be tested. I was told that the changes were not functional and that there was no need for testing. I dug my heels in and told the project manager that whilst I couldn't stop the changes going live, I certainly wouldn't sign them off. Eventually, with an attendant dip in my popularity ratings, the testing was authorised.

Lo and behold, we found some issues. There was no ticker tape parade and no bounce in my popularity, just a few gripes about how I couldn't have known there would be any bugs. Well, of course I couldn't. But at least the next time a "simple change" came along, I had a good argument to fall back on.

So, what does successful testing look like? Well, in detail, it looks like a test strategy and test plans and test scripts and good test data and a strong test team. But most importantly it looks like a decent amount of time allocated to testing and for that time to be guarded jealously and not squandered to compensate for other problems on a project.

In conclusion, then, poorly tested projects result in dissatisfied users and lost customers. They require lots of support once they are live and consequently have a continuous and unpopular ongoing cost. Successful projects stick to their plans, including testing, are candid about their slippage when it occurs and ensure that when the system does 'go live' it works as it was intended to. A poorly tested product offers a constant reminder of the problems with I.T. A successful project invites further development and, as software developers, that is the goal we should be pursuing.

Wednesday 5 August 2009

What I Talk About When I Talk About I.T.

I recently read Haruki Murakami’s book ‘What I Talk About When I Talk About Running’ on the basis that it was recommended to me by a couple of people and also because I am a keen, if amateur, runner. However, much as I enjoyed the book, what really grabbed me was the title: it set me thinking about how the subjects we pick when we talk about a topic can help us to better understand our approach to it, and also help us to think about what we do well. We can also gain by considering the subjects that we don’t discuss, although it is, of course, always harder to spot things that are missing.

That said, I know that when I talk about I.T., one thing that I don’t talk about – unless asked – is code. Unless they are complete frauds, I would normally work on the assumption that anyone employed as a developer is actually able to read and write code. Furthermore, by and large a good developer can code using different languages on a given platform, as the concepts are the same: it’s just the syntax that is different. What I am saying here is that within I.T. development, the coding should be a given.

What is important is the framework around that code. So, for starters, let’s talk about the people who do the development. I’ve already said that I’m assuming these people can write code, so what makes for good developers? For a start, they need to be effective in a team environment. I cannot emphasise this enough. For example, developers need to be able to participate in a discussion about solutions and accept that, sometimes, their solution won’t be the one that is adopted, yet go away with good grace and cheer and work on the accepted solution. I have any number of anecdotes where a developer has gone off and done it their way, regardless of what was agreed, and this has come back to bite the entire project.

Good developers will naturally put comments in their code - describing what they are doing - because it is implicit in their mindset that, later on, someone else will come to look at it. I can clearly remember sitting once at a code handover when a developer was leaving and, in response to a question about how a function worked, could not even find the lines in his own code.

As a team player, a good developer will accept that he is part of a process, a project life-cycle, and that as he may rely on an analyst for a clear, concise specification or a project manager to effect strong change control, so there are other developers and testers who are dependent on the punctual delivery of working code. It follows from this that the developer will focus on what needs to be delivered.

At Meantime, 50% of our resource consists of developers. Around that development function, we have project management that ensures that the user requirements have been established and agreed, and that any changes to those requirements are introduced in a manner that is controlled and avoids confusion. The project manager for a piece of work will identify the resources required and plan their work accordingly. It is this discipline that enables us to deliver our clients’ requirements on time and on budget.

Business analysts spend time with the client, discussing their business and their requirements, making sure that the work has a clear – and preferably demonstrable – cost benefit and also that the proposed solution is what the client wants and needs. This work results in documentation that comes under the term of a business requirements document. For a small piece of work this may be just an email but, at the other end of the spectrum, I have just completed one that came to over seventy pages.

Once the requirements are signed off, the business analyst works closely with a systems analyst, who will turn the business requirements into a specification for the developer. Again, the detail in the specification will vary from project to project but it should clearly set out the functionality that must be completed for the work to be considered finished.

I will write in more detail in a future blog about responsibilities for testing and with whom they lie, but once the developer has completed his own testing then – for any significant piece of work – the code should go to a system tester. Their job is firstly to ensure that everything in the spec has been delivered and is working correctly, and secondly to test various scenarios.

Once the testing is complete, the application can go to the client for them to test in a dedicated User Acceptance Test (UAT) environment. The user should be checking that everything in the business requirements document has been delivered and also that the system is intuitive and usable from their perspective.

All of the above are things that I discuss with people, mostly clients, when I talk about I.T.: the importance of understanding what the user wants, sometimes when they aren’t clear themselves; the amount of work that needs to be done before a single line of code is written; the enormous value in communication with clients (all our developers talk to our clients); the importance of developers who are highly effective team players; the discrete testing function; and the handover to clients and the UAT experience before code is made live.

Generally, I define the above as I.T. culture: people, processes and practice. Where I have participated in (or observed) successful projects, the culture has been positive and powerful, celebrating its own successes and understanding where it gets things right. It is a cliché but those projects have a ‘can do’ attitude. There is a prevailing team attitude that doesn’t depend on artificial glue like paintballing or getting drunk together but rather on the brilliant momentum that is gained by working together and doing a job well.

(With thanks to Haruki Murakami and Raymond Carver.)

Monday 3 August 2009

Why IT projects fail: an anecdote

A few years ago, in one of my last freelance roles, I was in a test management role for an international investment bank. My project manager reported to a programme manager one of whose other projects was failing. Three times it had missed its implementation date and the test function had been identified as the part of the process that was letting everything else down.

Since the testing for which I was responsible was proceeding well, I was asked whether I would cast an eye over the test plans for this other project and perhaps “sort things out”. Consequently, I met with the two people who were managing the testing and spent a couple of days with them, learning about their project and the testing that they had planned and attempted to carry out.

They were both clearly competent people and they seemed satisfied with the capabilities of the people working for them. Furthermore, they had a good grasp of their project and I couldn’t find much fault with their testing (and people will approach the same task in slightly different ways, anyway). So, at the end of all their explanation, I asked the simple question: so what’s going wrong?

The explanation was straightforward and will be familiar to anyone who has been involved in testing: some dates on the project plan had slipped but the deadline had not been changed and it was the testing that had seen its time budget diminish to make up for the slippage. The testing had not been completed and the implementation had been aborted. A second run, and then a third had been attempted but each time with only a small amount of time for testing. Defects were raised but there was no time to fix them and thus the project had ended up in its current state.

I had this meeting at the end of May and there were nine weeks of testing to be done, according to the test plan. So, I went back to the programme manager and gave him the good news. There was nothing wrong with his test team, they had a good plan and the testing – including time for bug fixing – would be complete by the end of August. I said a September implementation seemed reasonable but that an October live date would probably be prudent, especially when reporting up to the board, since it was realistic and contained the possibility of early delivery.

A couple of weeks later I bumped into one of the test managers and so I naturally asked how the project was progressing. It transpired that the programme manager had announced an *August* live date and that the test team had, once again, been set up to fail.

The moral of the story is, I think, self-evident, yet you will hear similar stories from IT people everywhere. If people are not given the time to do their jobs properly, then your projects will fail and your staff will become de-motivated. People should certainly be held responsible for the timescales and deadlines they give but there is no excuse for ignoring what they say and then holding them to account for dates that are imposed upon them.

Sunday 19 July 2009

Data security

Last week I was at a meeting in London with a project team from a publicly funded organisation and we were discussing how Meantime (the company I work for) would receive some data required for the initial stages of the project. One of the people 'round the table joked that they would give us the data on a flash drive as long as we promised not to leave it on any public transport.

Over the last few years, of course, there have been a number of incidents where this has happened - laptops and external drives left on buses and trains - sometimes with very sensitive data being lost as a result. Worryingly, I think that most people assume that it will happen again, which is dangerous; the acceptance that such incidents can be classed as just 'one of those things' makes people more careless.

I believe that the fundamental issue is that companies are very good at looking after data when it's where it is supposed to be. The database may be behind a firewall and access to that data may be only via an application that requires a valid user name and password from the user. The problems start when the data is extracted or reported out and stored somewhere else.

Earlier this year I had a meeting with a client who was so cautious about his data - which was, admittedly, of enormous commercial value - that he disabled his (password protected) laptop's wireless functionality before he'd open the spreadsheet containing the core data. I asked him where he backed up his data to and he produced an external drive from his laptop case. Quite apart from the fact that someone stealing his laptop bag would also have his backup, the drive was not protected and the spreadsheet, which was not itself password protected, could easily be accessed.

Similarly, data is downloaded to disks and printed out as paper reports - i.e. taken away from its secure environment - and then handled by people who, under normal circumstances, would not have access to that data.

However, there is a solution and it's one that is well established, easy to use and free.

TrueCrypt is available for download from the web and you can read all about it here: www.truecrypt.org. And if you don't want to read all that - it is rather techie - then let me just say that I've been using TrueCrypt for a couple of years now and I can't sing its praises highly enough. It is simple to install and to use and means that no one can access the data without a valid password.

Another alternative is to use WinZip (www.winzip.com). While many applications will open a ZIP file, allowing the file names to be seen (if not their contents), it is still possible to password protect the files.

For me, the biggest advantage is that TrueCrypt is able to turn a whole device into an encrypted drive, meaning that if, for example, you have a flash drive that contains your business data, it cannot be accessed at all without the TrueCrypt software and, of course, the correct password.

It goes without saying that we live in an increasingly data-driven society and the boundaries around our personal data are increasingly blurred. Businesses cannot afford to be anything but strict and diligent about their data protection: slip-ups will certainly lead to a massive drop in credibility - either within your organisation or with your clients and customers - and may lead to legal action and a loss of business advantage, depending on the data that is leaked.

Policies and procedures are essential but the use of tools like WinZip and TrueCrypt offers a concrete method of ensuring those practices are enforceable.

Wednesday 1 July 2009

Case Study: Coniston Corporate UK

I decided to choose our work with Coniston Corporate - www.corporate-embroidery.co.uk - as a case study because while the specifics of the projects are, of course, tailored to their business needs, there are some elements of the software that apply to many businesses, especially those that buy in raw materials, add some value and then sell on to other businesses or the general public.

Coniston sell workwear and other clothing, which they embroider for their customers. Driven by a very capable MD, Paul Reilly - from whom I have learnt a thing or two - the company has grown significantly yet maintained its success over the last few years. When I first met Paul, the company had a set of slick paper-based processes in place but, as the business grew, maintaining the paperwork was becoming a serious overhead and also a risk to the business.

Quite apart from the concerns around pieces of paper getting mislaid and related issues around business recovery, there were some other challenges that were not easy to meet with a paper-based system:

- Ensuring that corresponding supplier orders went out to meet the requirements of the customer orders that were being received.
- Ensuring that, when supplier orders came in, the right customer orders were identified and prioritised for production.
- Reporting on margins to make sure that while Coniston offered the best price to their customers, they were making the right profit to sustain and grow their business.
- Keeping on top of their invoicing and statements, especially as customer orders were not always shipped in one delivery.

As is our usual practice, we took the time to listen to Coniston's requirements but also to understand the business context in which those requirements were set. This enabled us not only to devise the most appropriate software solution but also to deliver an application that was designed to develop with their business strategy.

Briefly, the software works like this: when a customer order is received, the items required from a supplier are automatically added to a supplier order (and the system caters for the fact that these items may come from different suppliers.) At the end of each working day, the supplier orders can be printed for faxing or sent by email, depending on the supplier's preference.

When a supplier order is received, the system identifies the customer orders that can now be processed and, when that work is marked as complete, the invoices are generated. The invoices can be printed for posting, sent as a system generated PDF by email or both.
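
A heavily simplified sketch of that first step - grouping incoming customer order lines into one order per supplier - might look like the Python below. The data structures are invented for illustration and bear no relation to the actual Coniston system:

    from collections import defaultdict

    # Each customer order line names the item, the quantity and the supplier
    customer_order = [
        {"item": "polo shirt",  "qty": 25, "supplier": "Supplier A"},
        {"item": "fleece",      "qty": 10, "supplier": "Supplier B"},
        {"item": "hi-vis vest", "qty": 40, "supplier": "Supplier A"},
    ]

    # Group the required items into one order per supplier, ready to be
    # printed for faxing or emailed at the end of the working day
    supplier_orders = defaultdict(list)
    for line in customer_order:
        supplier_orders[line["supplier"]].append((line["item"], line["qty"]))

    for supplier, lines in supplier_orders.items():
        print(supplier)
        for item, qty in lines:
            print(f"  {qty} x {item}")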

This brings me to an interesting point. When we build systems like this, they naturally and implicitly hold information about the business itself, which is built up through usage. It is very easy for us then to write reports that the client can access whenever they want, such as number of orders this month, total and average values, comparisons with the equivalent period in prior years and so on. This is incredibly valuable business information and it is available easily and on demand without recourse to us.

It is, I think, apparent from the above that a system that supports a business in this fashion saves on tiresome - and error-prone - administration. Furthermore, the salaries that are saved by not having to employ extra administrative staff can be seen as a method by which the software effectively pays for itself.

Incidentally, once we had the complete working database for Coniston, it also enabled us to build a dedicated site for the workwear - www.coniston-workwear.co.uk - at a relatively low cost, as well as the 'Coniston Shop' function, which gives Paul the facility to set up online shops for his clients: you can see examples here and here. Paul requires no input from us each time he wants to set up a new shop.

Finally, I would just say that even though I picked the above example because it contains elements that apply to many businesses, it is a source of constant interest to me how different companies ask us to implement them in different ways. Over the last five years particularly, it has become obvious to me how few business needs are genuinely met by a package solution.

Tuesday 9 June 2009

Search Engine Optimisation (SEO)

This post is concerned with the phenomenon of Search Engine Optimisation, commonly known as SEO. This is a subject on which I anticipate making a few more posts, so I thought it would be a good idea to start out with a non-technical trip through its meaning, history, practice and, importantly, the way in which it is sold as a service. We can start with an innocent, 'ideal world' view of the topic before we have to incorporate the corrupting influence of the various companies out there selling 'snake oil'.

When search engines were first devised, their purpose was to catalogue all the information on the Internet, in order to help people locate information when they didn't know where to look for it. For the mutual benefit of the search engine, the person posting the information and the person looking for it, the idea of 'meta data' was incorporated into the web pages. This meta data consisted of a description of the page's contents as well as some 'key words' which summarised them. It enabled the search engine to better understand what the page was about, the searcher to have a better chance of finding what they were looking for and the publisher to have his content found: everybody benefitted.
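
For anyone who hasn't seen it, this meta data sits in the page's HTML header and looks something like the snippet below (the content is invented for illustration):

    <head>
      <title>Acme Workwear - embroidered clothing</title>
      <!-- The description is often shown on the search results page -->
      <meta name="description"
            content="Embroidered workwear and corporate clothing, UK-wide.">
      <!-- Keywords summarising the page's content -->
      <meta name="keywords"
            content="workwear, embroidery, corporate clothing, polo shirts">
    </head>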

However, as the web became more commercialised this altruistic element disappeared. Even before the days when the dedicated SEO companies appeared, unscrupulous web designers would aim to outwit the search engines by, for example, repeating the same word again and again in the keywords. The search engines - or, rather, the people building them - got wise to this trick and began to penalise this activity. Since then search engines have had to become far more clever about evaluating a site's content, actually examining the site in detail, rather than simply trusting the meta data. Some people will even tell you that search engines ignore meta data but this isn't true; the description is often used on the search results page and there are many people out there - who know more detail about this subject than I do - who will tell you that keywords are still taken into account, at least by some search engines.

It was as a consequence of this that the phrase "content is king" became popular when discussing search engines and it is absolutely, undeniably true that if your site has plenty of relevant content then you have done 80% of the work. The other 20% is in making sure that search engines can find their way around the site in order to process and index this content, and this is why elements such as site maps - which simply tell search engines where to find all the pages on your site - are so important.

It is this process of making a website accessible and easily digestible for search engines that I would describe as 'true' search engine optimisation: the site is optimised for search engines. This True SEO also ensures that every aspect of your site is fully indexed and available, via the search engine, to anyone who is looking for it. This means that all your specialisms and unique selling points are made available to your potential clients and customers.

So what's the difference between this True SEO and the snake oil version promoted by companies that profess to specialise in SEO? Well, the first thing is to look at what these companies offer. One of the first and worst signs is when they "guarantee" to get your site into the top ten on, typically, Google. It stands to reason that this cannot be guaranteed - what if eleven competing businesses hire these services? - so read the small print and marvel at the number of ambiguities and get-out clauses. The next thing to watch is that they offer to get your site into the top ten for a small number of key phrases, perhaps two or three. So, if you are selling sports shoes, say, they might guarantee to have you in the top ten for "athletics shoes", "running shoes" and "sports shoes". If you are reckless enough to sign up, the first thing they will want to do is remodel your site and its content to reflect these phrases. This gives search engines a skewed view of your site and takes the emphasis away from the finer detail.

A phrase that is coming into use at the moment is 'long tail' SEO and this is the antithesis of the snake oil SEO. Long tail SEO is related to the fact that a significant proportion of, for example, a retailer's sales will not be in his best sellers. (One example of this that I've read about states that less than half of Amazon's sales come from their top 140 thousand products.) This highlights the mistake of emphasising a few key products: long tail SEO is about making sure that all your products (or services) are clearly visible to search engines so they can be indexed and made available to people searching for them.

It's clear, I'm sure, from this posting that I take a dim view of these companies. Quite apart from their dubious ethics, they make our life at Meantime harder. We carry out True SEO, the results of which can sometimes take months to become apparent, and we never guarantee results. Unfortunately, these other companies lead our clients to expect the undeliverable. However, we know that a well constructed site with good content will work well with the search engines and that our clients' sites will appear - often in the top ten - when people search for their goods and services.

Thursday 21 May 2009

UAT: what is it for and who benefits from it?

When I was first working in IT in the late eighties, I remember one site where there was a cultural revolution taking place: they were going to start asking the users (or “the business”) what they wanted from IT systems.

Implied in this, of course, is a suggestion that the business took what they were given and that the IT department dictated who could have what. The truth, though, is a bit more subtle than that. Often the business wouldn’t have a clear idea what IT systems could do for them: they had no idea of what was possible and, of the applications that could be delivered, which ones constituted a ‘big ask’ and which ones were straightforward.

Over the last twenty years, the user base in blue chip companies has become increasingly familiar with business systems and it is more common now to see the business working hand in glove with the IT department. Consequently, a way of working has arisen which includes gathering and documenting the users’ requirements before development starts. Once development is complete, it is now accepted (and good) practice to give the users a chance to review the new work before it is put into their ‘live’ production environment, where it will be used with their real business data. This practice is called User Acceptance Testing or UAT.

For a completely new system, the UAT environment should be as close as possible to the proposed new live system, so that users can learn to use the new applications. For a change or upgrade to an existing system, the UAT environment should consist of a copy of the existing live system with the new work applied to it. This enables the users to see what has changed and what has stayed the same, and also to see how the new processes cope with their existing data.

A word about data: this should, as far as is possible, be a copy of the live data. However, since a company won’t, for example, want its clients and customers receiving email from a test environment, it is necessary to ‘sanitise’ the data to some degree. Similarly, live credit card details should not be moved to UAT nor should the credentials for interacting with the live database or third party systems (such as payment gateways).
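
As an illustration, here is a minimal sketch in Python of the kind of sanitisation involved; the record layout and field names are invented, and a real script would be driven by the actual schema:

    def sanitise(record):
        """Return a copy of a customer record that is safe to load into UAT."""
        safe = dict(record)
        # Redirect email addresses so no real customer can be contacted from UAT.
        user = safe["email"].split("@")[0]
        safe["email"] = user + "@uat.example.com"
        # Card details must never leave the live environment.
        safe["card_number"] = None
        return safe

    live_row = {"name": "A Customer",
                "email": "a.customer@somewhere.com",
                "card_number": "4111111111111111"}
    print(sanitise(live_row))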

UAT provides three main benefits:

Firstly, it enables the users to reconcile what has been delivered against what was in their original requirements. It’s not unusual for there to be a form of ‘Chinese whispers’ as a requirement moves between the users, the business and systems analysts and the developers. The onus, of course, is firmly on the IT people to understand and keep sight of the original requirement and, if that is lost along the way, then the business have every right to refuse to sign off the release when they identify the omission in UAT.

Secondly, it provides an opportunity for ‘scenario-based’ testing. A decent IT department will carry out thorough system testing against the functional specification (which is derived from the Business Requirements Document), but it is quite possible for this testing to be carried out properly and the release signed off as fit for UAT, yet for an error to be missed. This is best explained by example: a couple of years ago, we built an e-commerce system and one of the requirements specified that once items were sold, the quantity should be subtracted from stock. We delivered this requirement, including automatic email notification when stock levels fell below a user-defined trigger point. However, it was during UAT that the client pointed out that, in order to minimise their costs, they held as little stock as possible, often ordering the required items in only after a customer had ordered them. Consequently, we had to amend the system to allow for negative stock values.
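
The fix itself was small. Here is a minimal sketch, in Python, of the amended logic; the function names, the trigger value and the notification mechanism are all invented for illustration rather than lifted from the actual system:

    REORDER_TRIGGER = 5   # user-defined per product in the real system

    def notify_purchasing(stock_level):
        # Stand-in for the automatic email notification.
        print("Reorder needed: stock now at %d" % stock_level)

    def record_sale(stock_level, quantity_sold):
        """Subtract a sale from stock; a negative result means 'on back order'."""
        stock_level -= quantity_sold   # deliberately no floor at zero
        if stock_level < REORDER_TRIGGER:
            notify_purchasing(stock_level)
        return stock_level

    print(record_sale(stock_level=2, quantity_sold=6))   # prints -4: back-ordered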

This brings me nicely to the last of these main benefits: fewer live fixes. If, in the example above, we had not had a user acceptance test, the client’s business would have ground to a halt, as many items would not have been available to order (there being zero items in stock). The resolution to this problem was simple enough and took about half a day to apply. That half a day didn’t seem much when the client could be off testing other aspects of the system in UAT, but it would have been a very different story if their business had lost half a day’s sales whilst we made the change.

So, who does benefit? As the points above illustrate, I believe the answer is everyone. The users have a huge amount of reassurance that their requirements have been met, without the stress of seeing the work for the first time in a live environment. Where issues do arise, they know they are low impact, affecting only test data and not the live business operation. From IT’s point of view, the department ends up with happier, better serviced users and, crucially, a minimum of high-pressure, risk-laden live fixes with the business (understandably) demanding regular updates.

At Meantime, we are often working with users who don’t have blue chip experience but there is no reason why we shouldn’t use our experience and bring the good practice of UAT to our projects. The concept is easy to understand both in terms of execution and benefit, and our clients quickly grasp it. Ultimately, this simple step in the project life-cycle takes away a whole load of the stress and aggravation that is associated with IT delivery.

Sunday 3 May 2009

Case Study: Entrust Social Care

To make any case study worthwhile, it needs to highlight one or more salient points about the service that it is intended to illuminate. I'm starting with this particular case study because it demonstrates three important characteristics of well built bespoke software:

1. It has a clear cost benefit.
2. It improves the way in which the client's business is run.
3. It provides management information about the processes that it manages.

The client in this case is Entrust Social Care, whose website can be found at www.entrustsocialcare.co.uk. The company provides temporary social workers (locums) to public sector bodies across the UK, which approach Entrust to satisfy their staffing requirements. One of the most important parts of Entrust's business is making sure that the locums are paid promptly after submitting their timesheets.

The locums are paid for the hours they work (sometimes for different clients in the same week) and for their expenses, and they may also be awarded bonuses. In addition to this, they might opt to take time off in lieu. Each week, the MD at Entrust, Ian Brindley, would process the timesheets submitted by the locums, using Excel to calculate the payments and track the time worked against bonus goals. It was a time consuming process and it was what Ian spent the Thursday and Friday of each week doing. It was boring work, yet vitally important, which is a poor combination. Furthermore, Ian didn't feel able to delegate the work to his staff.

Ian and I had worked together at JPMorganChase in 2000, and he approached me to ask whether Meantime could do anything to help ease his situation. He explained that his business was developing well and that he was generally very happy: the only fly in the ointment was this weekly business with the timesheets. We spent some time talking to Ian about his business, the specific issue and possible solutions.

This done, we designed a database to store the details of both Ian's clients and the locums who were working through him. Over this we laid an administration system that enabled Ian to add and maintain this data. We then repeated the process, adding the relationships between the locums and the social work teams in which they were placed.

The next step was to provide Ian with an easy-to-use interface where he could select a locum, choose the location where they were working and then enter the hours worked, expenses et cetera for a given week. Finally, we provided the function to produce a report of all the payments required for each locum.
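
For the technically minded, here is a minimal sketch of the shape of such a system, using Python and SQLite; every table and column name below is invented for illustration and is not Entrust's actual schema:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE locums     (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE clients    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE placements (id INTEGER PRIMARY KEY,
                             locum_id  INTEGER REFERENCES locums(id),
                             client_id INTEGER REFERENCES clients(id));
    CREATE TABLE timesheets (id INTEGER PRIMARY KEY,
                             placement_id INTEGER REFERENCES placements(id),
                             week_ending  TEXT,
                             hours        REAL,
                             expenses     REAL,
                             bonus        REAL);
    INSERT INTO locums      VALUES (1, 'J Smith');
    INSERT INTO clients     VALUES (1, 'A Council');
    INSERT INTO placements  VALUES (1, 1, 1);
    INSERT INTO timesheets  VALUES (1, 1, '2009-05-01', 37.5, 12.0, 0.0);
    """)

    # The weekly payment run then reduces to a single query over this data.
    for row in db.execute("""
        SELECT l.name, SUM(t.hours), SUM(t.expenses), SUM(t.bonus)
        FROM timesheets t
        JOIN placements p ON p.id = t.placement_id
        JOIN locums l     ON l.id = p.locum_id
        WHERE t.week_ending = ?
        GROUP BY l.name""", ("2009-05-01",)):
        print(row)

Once the operational data lives in one place like this, the weekly payment run - and the management information described below - reduces to a handful of queries over the same tables.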

As a consequence of this work, using the same inputs and providing the same outputs, we reduced the timesheet processing from two days to two hours (which is how long it took to type in the data). Additionally, we were able to provide valuable data from the system, simply because the relevant part of Ian's operational data was being processed by it. At the press of a button Ian could access powerful business information regarding the number of contracts he had in place, the number of clients and locums he had on his books, plus vital financial information.

So, to summarise, let's look at those three points again:

1. The system has a clear cost benefit. Timesheet processing was taking up every Thursday and Friday, i.e. 40% of Ian's working time. Whilst I'm not privy to what Ian pays himself, I know that the cost of the software was less than two-fifths of his salary and, furthermore, it was a one-off cost.

2. The system improves the way in which the client's business is run. Once the system was in place, Ian suddenly had an extra two working days available in his week, something that would be hugely attractive to any Managing Director. This gave him more time to grow his business plus the confidence that he could manage the additional workload.

3. The system provides management information about the processes that it manages. Even without consulting Ian, common sense enabled us to provide useful reports. In addition to those mentioned above, we were in a position to answer questions like: Who are my best clients? How much have I paid out to locums this quarter? Which clients appear to be using more or less of my services over time?

All businesses are different and that is why they need bespoke software for their IT solutions. For those solutions to be effective, the businesses need to pick suppliers who are demonstrably strong when it comes to business analysis: there is a lot of work to be done before the first coding keystroke takes place. The Entrust project is a great example of how, by listening to the client and working with him, we were able to provide a solution that exactly matched his requirement.

Saturday 25 April 2009

Why bespoke software? (And how to tell if you need it.)

As I’ve mentioned previously, the most obvious reason to develop bespoke software is that you have a requirement that isn’t fulfilled by any package that is available. However, there are a couple of subtle refinements to this argument.

Firstly, there may well be software available to you, but either it doesn’t work in quite the way you want it to or perhaps it is too complex. Last year we took on a client who was paying £14,000 per annum for a package that handled their e-commerce and stock management. They had decided to put a budget of £28,000 towards a bespoke solution, on the basis that they would be saving money after two years. When we saw the feature-rich package, we made it clear that we could not duplicate all the functionality we were seeing for the budget available, but the client quickly put us straight: they only wanted about a third of the functionality, but they wanted it to work in a way that made sense to them and the way they worked.

And this brings me on to the second refinement. Typically, in any location, there will be many companies operating in the same sector and some of these will be more successful than others. This may be down to crude distinctions such as cost, but over time the biggest differentiator will be the way in which each company operates and interfaces with its clients/customers and suppliers. For some elements of that business – payroll, for example – the software that is used will make no difference to a third party’s experience with the company. However, the software that is part of the process is a different matter entirely.

Over the next few days I will put up three posts that detail case studies that I believe illuminate the point that I am making here. However, here are a few indicators that will show if you could benefit from bespoke software.

1. You find you are entering the same data in multiple locations. Many mature companies with quite sophisticated processes find themselves using multiple spreadsheets or a number of software packages. This means the same data needs to be entered in multiple locations and if that data should change, then someone needs to know all the places that it needs updating.

2. Your processes are very ‘paper driven’. It repeatedly surprises me just how far companies can get with almost completely paper-based systems. These can work well until a piece of paper is mislaid or, worse, there is an incident such as a fire, which completely destroys the system. Goods and property can be insured but data is irreplaceable if it is not kept safe and backed up. All our software is web-based and all our clients’ data is backed up every day.

3. Your processes rely on your staff knowing them. That might sound obvious – of course your staff need to know what they’re doing – but not only is there a training overhead involved here, it also means your staff are less flexible and less able to cover for one another. A good IT system should reflect the way your business works, so your processes should be implicit in your software. A package, by contrast, will dictate that process and impose it on your business.

4. You want to share data with your clients. But not all your data, of course. Web-based bespoke software enables your clients to log on to your website and see their data: orders, statements et cetera. This cuts down on calls to your staff.

Ultimately, well-written bespoke software should provide huge benefits and give a great boost to your company. Your day-to-day business should run like clockwork, with happier clients and customers and less stressed, more flexible staff, who will be free to concentrate on their jobs and not on administration. What’s more, having all this operational data in one place provides enormous opportunities for extracting highly valuable management information about the way your company is running.

Monday 13 April 2009

So, what's the point?

I started a limited company - Meantime IT - in 1991. Initially, it was simply a vehicle for my freelance work with a number of blue chip companies and I was the sole employee. In the mid-nineties, my brother, Warren, and I became interested in the emerging Internet platform and started developing websites. However, we were both frustrated by the limitations of the medium and continued with our day jobs while working on those sites in our spare time.

As the web became more viable as a platform, we planned to make web development our full time occupation but then Warren went to work for Goldman Sachs and I went on to work for the Royal Bank of Scotland, managing the testing of the first 'thin client' version of their Internet Banking software. In 2004 I finally took the plunge and Meantime IT has been running as a software house, working exclusively on the web since then.

However, there is a major difference between working as a limited company to facilitate freelance IT and running an SME out in the real world: as a freelancer there are plenty of agencies out there, taking requirements from their blue chip clients and matching them up with the CVs they take in from contractors. For an SME, especially in today's economic climate, things are a little different. It's one challenge to put together a company that can successfully deliver working IT systems but we also need to tell people about it. Hence, this blog was suggested as part of our marketing strategy and, after my initial reservations, it occurred to me that this would also be a good place to lay out the conclusions of some of our discussions at Meantime.

So, that's the point of the blog but maybe this inaugural posting would be a good place to also ask what's the point of IT? It is a big question but, for further postings to make sense, I think it's one that needs to be asked up front. To be clear, I want to break IT into two categories. (Like many black and white statements, it won't bear close scrutiny but in such a complex world as IT, I'm going to need to take a few shortcuts.)

Firstly, there are packages. I'm using this term to define any software that is built by a company and then released to a target audience. I'm not suggesting there won't have been market research or that the software won't be configurable. Examples of this would include Microsoft Word, Apple's iTunes, Intuit's Quickbooks and Twitter.

Secondly, there is bespoke software, which is what we build at Meantime. This is software that is built for a client to their specification (and the variability in those specifications will be the topic of a future posting). It is really this second category that earns IT a bad name. People may grumble about new releases of package software - Vista is a good example of this - but, by and large, they will work as the authoring company intended. The failing projects that make the news - e.g. apparently anything that is built for the National Health Service - all involve bespoke solutions.

So, if the package solutions work, what is the point of bespoke software? Package solutions do indeed work perfectly when you have a generic requirement and everyone is happy with the same solution. iTunes and Microsoft Word are both good examples of this, as evidenced by the fact that both have made it across a partisan divide: iTunes onto the PC and Word onto the Mac. Even in the world of business, we see package solutions that work but already there is more variation as people buy solutions that are geared up to their size of business and their sector.

The key here is that if you use a package solution for a particular process then you will be carrying out that process in the same way as everyone else who uses that package. This is fine for, say, your payroll but what about those processes that help distinguish your company from the competition? Or what if you have a requirement that is quite specific and, therefore, the target market is too small to warrant a package, so no one has built one?

The key point here is that there is a strong market for bespoke software: people do want and need it. The problem is that what is delivered is so often deeply flawed. Common problems include:
  • The software that is built does not satisfy the initial requirement
  • The costs often exceed the allocated budget
  • What is delivered arrives late and is out of sync with the business
All of these issues are bad enough in themselves, but they also contribute to a wider mistrust and dislike of bespoke software.

We have seen four decades of development of IT systems, yet these problems have never been resolved. Many books, seminars and theories circulate, all purporting to solve these issues, and yet matters never seem to improve. I believe that the underlying problems are simple to understand and, over the last five years, Meantime has successfully delivered projects that match our clients' requirements, on time and on budget. That is not to say that we haven't had challenging projects too, but those occasions have only served to demonstrate that the processes to which we usually adhere are absolutely essential.

There is no secret ingredient, no single process that we have up our sleeves. The methods, processes and procedures we utilise require work that many companies and, crucially, developers do not wish to adopt. I will be outlining all of them in future posts, highlighting the perils of ignoring them with examples from industry and, no doubt, the day's papers.