
Tuesday 18 October 2011

Why do large IT projects fail?

One way or another, pretty much everything I’ve done and written about for the last few years has been based on a single premise: that IT can be done well, and that projects can be delivered to specification, on time and on budget.

Delivering websites and IT systems through a small business has only reinforced – quite painfully at times – the thoughts and experience I had gained over the previous fourteen years working freelance on blue-chip IT projects.

Broadly speaking, there are two reasons IT projects fail. One of those reasons is the people working on the project and the other is the way in which the project is organised and run. Many of my subsequent posts will be about the latter but, just briefly, I’d like to touch on the former.

I’ve been in IT for twenty-three years now and every successful project I’ve worked on has had a high proportion of The Right People. The most successful project I’ve worked on was exclusively staffed by people of that calibre.

At Meantime, we have struggled with recruitment. This is partly down to where we are based – the one downside of having our office in the Lake District – and partly down to the fact that IT is not a proper profession. There are no widely accepted formal qualifications – particularly around systems development – and so recruitment rests on the shaky platform of CVs and interviews.

Over the years we have always recruited with optimism and looked to bring out the best in people and, at times, we have been badly let down. We’ve now settled on a recruitment formula, which is as follows: I interview the candidate and Steve, our lead developer, gives them a technical interview. (Steve is also a pretty shrewd judge of character.) If we both like them – and, crucially, even if they are not technically quite up to scratch – we ask them to complete an online psychometric questionnaire, which is followed up by an interview with a trained assessor. We then receive a report and, whatever the outcome, carry out a third interview.

As you can tell, just from that brief description, it’s a laborious and expensive process. However, experience has shown us that it’s far more cost-effective to go down this route than to put the wrong person onto a project, given the long-term damage this can do.

I’ll illustrate this point with one brief example. Several years ago we did a project that involved classes and terms. The developer in question, let’s call him Paul, was given a data model to work from and was walked through the use of that data model. At the time, he queried the way terms and classes were related and the database structure and process were explained in detail.

In the end, however, and without consulting any of his colleagues, Paul decided to do the work the way he thought best. The project went into user acceptance testing and, as we moved towards the first change of term, it became apparent that Paul’s solution wouldn’t work. I don’t know whether Paul had realised this earlier; certainly, he handed in his notice around the time the issue became apparent and left his ex-colleagues to put things right.

You can imagine the stress this put on the company. The issue was not the client’s fault, so there was no extra funding, and we had other projects in progress. Being a small business, we didn’t have any spare staff, so Steve, myself and another colleague, Mary, worked extra hours to turn this around.

The issue here was not so much that Paul had coded the solution wrongly, it was that he decided he knew better than the person (me!) who had talked to the client and written the specification. Even then, the problem was not so much that Paul had his own opinion, it was that he didn’t discuss his intention with anyone.

These days we recruit people whom our interviews and the psychometric test identify as team players: people with empathy for our clients, who want to deliver the best solution for them. Following this method, we’ve employed people who didn’t have our skill set or the right experience but who have turned out to make a brilliant contribution to our team. Ultimately, you can teach people skills and give them experience; what you can’t do – at least, not very easily – is change their nature.

OK, so that was a little less brief than I intended, and the calibre of the people contributing to projects will certainly crop up again in my following blogs. The point I’m making is that all of the other things I’m going to write about – those things that contribute to a project’s success – will not work without the right people.

Despite what they say, recruitment agencies don’t filter candidates, except in the very broadest terms. If a candidate’s skillset matches the job requirements, the agency will forward the CV. Many claim to have interviewed candidates and, given the percentages that agencies demand, you would expect them to have spent serious time on vetting. However, in my experience at least, that is simply not the case.

It’s not enough to like someone at interview or to hire them because they pass a technical interview. For a project to succeed you need people who are team players, people who care about a project’s success, people who want to make your clients happy.

Last weekend, we carried out a data migration. Without being asked, the developer involved emailed me to say he’d be available over the weekend if needed, and another colleague, who wasn’t directly involved, rang me to check everything had gone to plan. When people take this level of interest, when they care about the outcomes, when people like this are working for you, then, with the right process and organisation, you can deliver successful IT projects.

Friday 15 July 2011

User interface: The 5% that's really 100%

We have a saying in our office, which is that the interface is only 5% of what we do but 100% of what the user sees. To be more accurate, it's usually me who says it, typically to a background chorus of grinding teeth. This is not to say that the rest of the team don't care about the user experience, but when, for example, you've just coded a complex administration function to enable a client to set up their own algorithms for customer discounts, you're probably expecting a bit of praise and congratulation, and not someone scratching his head and asking whether the submit button shouldn't be a bit further up the screen. (There's a lesson for me, there.)

However, there are two very good reasons for thinking about the user and the interface that they will use.

Firstly, if you've built a system, then you want people to use it and there are four key components to giving them an experience they will be happy to repeat.

1. Visual appeal: People make very rapid, non-intellectual decisions about websites that they are presented with. If the screen is cluttered or badly rendered, the user is already less inclined to engage with it. Screen design is hugely important; it makes the system look like it will work.

2. Guidance: If your user has to study your screen to work out where to start, then you've already failed. It should be clear to the user what they need to do first or what their options are. Positioning the cursor for them, big, clearly labelled buttons, numbering steps: these all go a long way to giving the user the 'no brainer' experience that they want. And a little onscreen text is a simple, cheap way to help the user to understand what's required of them and what's going on.

3. Feedback: Our local tourism board has a purchasing system that it allows some attractions to use. Booking tickets for a show, I selected the number I wanted and clicked add to basket. Nothing happened so I tried again, twice more. Still nothing. So I went to the checkout only to find I had enough tickets in my basket to take most of the people living on my street to the theatre with me. I think the message here is clear: when the user completes an action, let them know it's done! (There's a small sketch of what I mean just after this list.)

4. Deference: With software, you can, of course, force users into doing things by preventing them from proceeding if they don't do what they're told. However, this may not give you the results you want. It's all very well collecting, for example, marketing information from your users, but if you force them to enter a date of birth, you may find they enter something completely random, thereby skewing your data. In a similar vein, an e-commerce site lost my business this week when they deemed my nine-character-with-a-numeric password to be not strong enough for their standards. Don't throw your weight around: give your users what they want and need, and if you want something from them, ask, don't demand.
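To make the feedback point concrete, here's a minimal sketch in TypeScript of how an 'add to basket' button might acknowledge a click. The element IDs, the /basket/add endpoint and the request body are all invented for the example; the point is simply that the button is disabled while the request is in flight and the user is told, in plain sight, whether it worked.

```typescript
// Minimal sketch only: the element IDs, the endpoint and the request body
// are invented for the example.
const addButton = document.getElementById('add-to-basket') as HTMLButtonElement;
const statusMessage = document.getElementById('basket-status') as HTMLElement;

addButton.addEventListener('click', async () => {
  addButton.disabled = true;                  // stop impatient double-clicks
  statusMessage.textContent = 'Adding to basket...';
  try {
    const response = await fetch('/basket/add', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ticketId: 'show-123', quantity: 1 }),
    });
    if (!response.ok) {
      throw new Error(`Server responded with ${response.status}`);
    }
    statusMessage.textContent = 'Added to basket.';  // tell the user it worked
  } catch (error) {
    statusMessage.textContent = "Sorry, that didn't work. Please try again.";
    console.error(error);
  } finally {
    addButton.disabled = false;               // re-enable once the outcome is known
  }
});
```

Nothing clever is going on there; the user simply gets an immediate, visible answer to the question 'did that work?'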

Secondly, there is a very selfish reason for helping your user to have a good experience with the software you build: fewer support calls. If people need to use the software - let's say it's a timesheeting system - and they can't do what they need to do, they're going to call. And, people being people, they probably won't read lengthy help text or instruction guides: they'll give up or pick up the 'phone. One way or another, that will come back to the software provider.

Usability, particularly the second point, can be tested easily during your User Acceptance Testing (UAT). The client knows what they want the system to do, and the software provider has built that system. Leaving the client to test the system without initial guidance from the provider puts the client in the same shoes as the user. If the client can't get from A to B, or from product to checkout, without guidance from the software provider, what chance has the user got when they come to the website or software completely cold?

Usability is about empathy, putting yourself in the user's shoes: put the little bit of effort into giving them some guidance through your design and text, and you'll have happy users and a quiet help desk.

Wednesday 27 April 2011

Security: it's not rocket science

This morning the news broke that Sony has announced that hackers have stolen the details of millions of online video gamers. The Telegraph's report can be seen here. The data includes user names, passwords and, possibly, credit card details.

I'm surprised by this for two reasons. Firstly, the fact that Sony were successfully hacked at all. God knows there are some very, very clever (if misguided) people out there involved in hacking. Anecdotally, a couple of times a year we see evidence of people attempting to hack our servers and we work hard to stay on top of our security. But we're not Sony. Surely a company as wealthy as Sony, responsible for the details of millions of people, should be employing the very best people - really: the very best - to safeguard their systems? The fact that they were breached suggests to me that they are not taking their security seriously enough.

However, the greater part of the surprise, for me, is that Sony seems to store their data in an unencrypted state. I've blogged about this before, but for all the times that disks go missing in the post, laptops are stolen or security is breached, no spokesman ever says "but it's OK because the data was all encrypted".

For our most secure data we use a combination of the encryption algorithms built into the database software and some bespoke algorithms of our own. Thus, even someone with open access to our database couldn't interpret the information that is stored. I don't think we've done anything mind-boggling there; I'm sure most IT companies worth their salt would come up with a similar solution given the same requirement for data security, which just makes me wonder why Sony didn't.
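I won't reproduce our own approach here, but as an illustration of the general idea, here is a minimal sketch of application-level encryption using the AES-256-GCM cipher that ships with Node's built-in crypto module. The field being protected and the key handling are simplified for the example; in practice the key would live in a proper key-management process, never alongside the data it protects.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Illustrative only: a 256-bit key generated on the fly. In reality the key
// would come from a key-management system, never sit alongside the data.
const key = randomBytes(32);

// Encrypt a sensitive field before it is written to the database.
function encryptField(plaintext: string): string {
  const iv = randomBytes(12);                          // fresh IV for every value
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const authTag = cipher.getAuthTag();                 // lets us detect tampering
  return [iv, authTag, encrypted].map((b) => b.toString('base64')).join('.');
}

// Decrypt a value read back from the database.
function decryptField(stored: string): string {
  const [iv, authTag, encrypted] = stored.split('.').map((p) => Buffer.from(p, 'base64'));
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(authTag);
  return Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');
}

console.log(decryptField(encryptField('A N Other, card ending 1234')));
```

Even with a sketch this small, anyone walking off with a copy of the database gets ciphertext rather than readable customer data, which is the whole point.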

Incidentally, if you have data that you want to encrypt, either on your hard drive or portable data device, I highly recommend TrueCrypt. It's very simple to use and very secure.

Friday 15 April 2011

Working under stress

A couple of years ago, I received a panicky 'phone call from a friend. The company he worked for had been mentioned, along with their website address, on the front page of The Telegraph. The site had started to load more and more slowly, and now it wouldn't load at all. I asked him who'd built the site and where it was hosted. It transpired the site had been built by a man who was away and, consequently, couldn't be contacted. After a little investigation we located the hosting on one of those £50 a year servers. Of course, there was nothing we could do to help and the exposure in The Telegraph was largely wasted.

I was reminded of this by three occurrences in the last month, concerning Twitter, the BBC and Premier Inn. Twitter users will be well accustomed to the application's occasionally flaky service. We don't pay for it, we use it frequently, and we get all sarky when it can't take the very high strain.

On March 29th the BBC website and related services (such as iPlayer) were unavailable. The final explanation given was that there had been a major network problem.

Finally, Premier Inn recently broadcast an email campaign promoting their popular rooms-for-£19 offer. (Popular but elusive; I've never been able to find one.) This was followed days later by another email apologising that the website hadn't been able to cope with all the resulting traffic.

Superficially, these all look like the same issue but, in fact, there are a couple of factors to consider. Firstly, there is normal load. Whether you are talking about a website or an application or any other aspect of your online infrastructure, you need to think about how many users you will have, how often they will use your service and what resources they will use. For example, if you have a popular website that is largely text and pictures, then you simply need a server that is good at spitting out web pages. However, if you are Premier Inn, where your users are accessing a database and using up processing power, then you have more factors to take into consideration.

Secondly, there is the question of 'spikes', i.e. sudden load on your server and infrastructure. These spikes can be quite dramatic compared with normal usage. You might be an online clothes retailer, advertising that your 30% off sale starts at 9am on Monday. Your setup is going to have to cope with something quite different to the normal steady usage of people browsing and purchasing.
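If you want to know how your setup behaves when a spike like that arrives, even a rough load test against a staging copy of the site will tell you a lot. Proper tools exist for this, but the sketch below gives the flavour: it's TypeScript for Node 18 or later (which has fetch built in), the target URL and the level of concurrency are placeholders, and it should only ever be pointed at a test environment.

```typescript
// Rough-and-ready spike test: fire a burst of concurrent requests and time them.
// TARGET and CONCURRENT_REQUESTS are placeholders - point this at a staging
// environment, never at a live site you care about.
const TARGET = 'https://staging.example.com/';
const CONCURRENT_REQUESTS = 200;

async function timedRequest(): Promise<{ ok: boolean; ms: number }> {
  const started = Date.now();
  try {
    const response = await fetch(TARGET);
    return { ok: response.ok, ms: Date.now() - started };
  } catch {
    return { ok: false, ms: Date.now() - started };
  }
}

async function spikeTest(): Promise<void> {
  const results = await Promise.all(
    Array.from({ length: CONCURRENT_REQUESTS }, () => timedRequest()),
  );
  const succeeded = results.filter((r) => r.ok).length;
  const average = results.reduce((sum, r) => sum + r.ms, 0) / results.length;
  const slowest = Math.max(...results.map((r) => r.ms));
  console.log(`${succeeded}/${CONCURRENT_REQUESTS} requests succeeded`);
  console.log(`average ${average.toFixed(0)}ms, slowest ${slowest}ms`);
}

spikeTest();
```

It's a blunt instrument, not a substitute for proper capacity planning, but it will show you whether the site falls over long before your customers do.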

Of course, you shouldn't wait until your site is live to consider these issues and you certainly don't want to find out about them on the day of the sale you've spent so much time and effort promoting. But you'd be surprised by how many people don't consider them. I can count on the fingers of one hand the clients who have raised this as a concern with me before I have had a chance to discuss it with them. And that's fair enough. Clients assume that their suppliers are thinking about these things on their behalf.

However, as with so many other topics - like security, DDA, cross-browser testing - the IT industry repeatedly lets its clients down. There are few professional qualifications and anyone with a PC can set themselves up as a web designer or developer. Consequently, it does fall to the client to ask the questions, not to assume that, having paid for the development of their site or application, it will run on machines that are adequate to support its usage.

Friday 21 January 2011

Lush and PCI Compliance

"I can't believe that a company of this size can be so naive about website security".

The above quote (from a post by 'symball') is taken from an article in today's Guardian about the hacking of the website belonging to Lush Cosmetics. The company has known since at least Christmas Day that it was being hacked and has now admitted that the hacking dates back to October last year. Customers are reporting that their cards have been used fraudulently.

The sad truth, though, is that online security is poorly understood and badly enforced. Whilst any company trading online should be PCI compliant, many online traders are simply unaware of this requirement, and many web development companies, particularly those with a strong design or marketing bias, don't have the technical skills to set up a site that is compliant. Certainly, the company working for Lush should have known better than to hold onto card details.

However, the problem here is not just about PCI compliance. We have had numerous hacking attempts on our webservers over the years, and we have a full-time systems administrator who keeps our boxes up to date with the latest security patches precisely to keep hackers out. All too often, though, less technical web development companies rely on their hosting company for their security, and this simply isn't good enough.

We have 'inherited' websites in the past and had the difficult job of explaining to the client just how much work needed to be done to their site before we could put it onto one of our live boxes. Similarly, we have in the past (reluctantly) lost clients who were not interested in the ongoing costs of maintaining security: the hosting charges needed to constantly review and maintain server security, and the cost of keeping up with the constant changes to PCI compliance.

If your business sells online, you need to check with your web developers about your PCI compliance and your server security. If you are unsure, then contact a company such as Security Metrics who can do both PCI checks and 'penetration testing'.

And if you are storing your customers' credit card numbers, I would start worrying about this RIGHT NOW.
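The safest answer is usually not to hold the full numbers at all: let your payment provider store the card and keep only their token plus the last four digits, so the customer can recognise which card they used. The sketch below is purely illustrative - the function, the token and the field names are invented for the example - but it shows the kind of truncation I mean, applied before anything reaches your own database.

```typescript
// Illustrative only: keep the payment provider's token plus the last four
// digits for display, and never write the full card number to your own database.
interface StoredCard {
  providerToken: string; // opaque reference issued by the payment gateway
  lastFour: string;      // enough for "card ending 1234" on receipts
}

function toStoredCard(fullCardNumber: string, providerToken: string): StoredCard {
  const digits = fullCardNumber.replace(/\D/g, ''); // strip spaces and dashes
  return { providerToken, lastFour: digits.slice(-4) };
}

// e.g. toStoredCard('4111 1111 1111 1111', 'tok_abc123')
//      -> { providerToken: 'tok_abc123', lastFour: '1111' }
```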