I recently read Haruki Murakami’s book ‘What I Talk About When I Talk About Running’, partly because a couple of people recommended it to me and partly because I am a keen, if amateur, runner. Much as I enjoyed the book, though, what really grabbed me was the title: it set me thinking about how the subjects we pick when we talk about a topic can help us to better understand our approach to it, and to recognise what we do well. We can also gain by considering the subjects that we don’t discuss, although it is, of course, always harder to spot things that are missing.
That said, I know that when I talk about I.T., one thing that I don’t talk about – unless asked – is code. I normally work on the assumption that anyone employed as a developer – unless they are a complete fraud – is actually able to read and write code. Furthermore, by and large a good developer can code in different languages on a given platform, because the concepts are the same: only the syntax differs. What I am saying here is that, within I.T. development, the coding should be a given.
What is important is the framework around that code. So, for starters, let’s talk about the people who do the development. I’ve already said that I’m assuming these people can write code, so what makes a good developer? For a start, they need to be effective in a team environment. I cannot emphasise this enough. For example, developers need to be able to take part in a discussion about solutions and accept that, sometimes, their solution won’t be the one that is adopted, yet go away with good grace and cheer and work on the accepted solution. I have any number of anecdotes about a developer going off and doing it their way, regardless of what was agreed, only for this to come back and bite the entire project.
Good developers will naturally put comments in their code – describing what they are doing and why – because it is implicit in their mindset that, later on, someone else will come to look at it. I clearly remember sitting in on a code handover when a developer was leaving and, asked how a particular function worked, he could not even find the relevant lines in his own code.
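To illustrate what I mean, here is a small, made-up example (the function and the business rules are invented for this blog, not taken from any real project). Note that the comments record the why as well as the what, for the benefit of whoever reads the code later:

    # A deliberately simple, invented example: calculating an order discount.
    # The comments record the business rules, so that whoever picks this up
    # in two years' time knows why the thresholds are what they are.
    def order_discount(order_total: float, is_repeat_customer: bool) -> float:
        """Return the discount (as a fraction) to apply to an order."""
        # Orders over 500 attract a volume discount of 5% - a rule that
        # would have been agreed in the business requirements document.
        discount = 0.05 if order_total > 500 else 0.0
        # Repeat customers get a further 2% on top, to encourage loyalty.
        if is_repeat_customer:
            discount += 0.02
        return discount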
As a team player, a good developer will accept that he is part of a process, a project life-cycle: just as he may rely on an analyst for a clear, concise specification, or on a project manager to effect strong change control, so other developers and testers depend on the punctual delivery of his working code. It follows that the developer will focus on what needs to be delivered.
At Meantime, 50% of our resource consists of developers. Around that development function, we have project management that ensures that the user requirements have been established and agreed, and that any changes to those requirements are introduced in a manner that is controlled and avoids confusion. The project manager for a piece of work will identify the resources required and plan their work accordingly. It is this discipline that enables us to deliver our clients’ requirements on time and on budget.
Business analysts spend time with the client, discussing their business and their requirements, making sure that the work has a clear – and preferably demonstrable – cost benefit and that the proposed solution is what the client wants and needs. This work results in documentation that comes under the heading of a business requirements document. For a small piece of work this may be just an email; at the other end of the spectrum, I have just completed one that ran to over seventy pages.
Once the requirements are signed off, the business analyst works closely with a systems analyst, who turns the business requirements into a specification for the developer. Again, the level of detail will vary from project to project, but the specification should clearly set out the functionality that must be completed for the work to be considered finished.
I will write in more detail in a future blog about where the responsibilities for testing lie, but once the developer has completed his own testing then – for any significant piece of work – the code should go to a system tester. Their job is first to ensure that everything in the spec has been delivered and is working correctly, and then to test various scenarios around it.
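By way of illustration only – reusing the invented discount function from the commenting example above – this is the sort of thing I mean, sketched with Python’s standard unittest module: confirm what the spec promises first, then probe the scenarios around the edges.

    import unittest

    def order_discount(order_total: float, is_repeat_customer: bool) -> float:
        # Same invented rules as the commenting example above.
        discount = 0.05 if order_total > 500 else 0.0
        if is_repeat_customer:
            discount += 0.02
        return discount

    class TestOrderDiscount(unittest.TestCase):
        def test_volume_discount_over_threshold(self):
            # The spec says orders over 500 get 5%.
            self.assertEqual(order_discount(600.0, False), 0.05)

        def test_boundary_scenario(self):
            # Scenario testing probes the edges: exactly 500 gets nothing.
            self.assertEqual(order_discount(500.0, False), 0.0)

        def test_repeat_customer_scenario(self):
            # The two discounts should stack for a repeat customer.
            self.assertAlmostEqual(order_discount(600.0, True), 0.07)

    if __name__ == "__main__":
        unittest.main()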
Once the testing is complete, the application can go to the client for them to test in a dedicated User Acceptance Test (UAT) environment. The user should be checking that everything in the business requirements document has been delivered and also that the system is intuitive and usable from their perspective.
All of the above are things that I discuss with people, mostly clients, when I talk about I.T.: the importance of understanding what the user wants, sometimes when they aren’t clear themselves; the amount of work that needs to be done before a single line of code is written; the enormous value of communication with clients (all our developers talk to our clients); the importance of developers who are highly effective team players; the discrete testing function; and the handover to clients and the UAT experience before code is made live.
Generally, I define the above as I.T. culture: people, processes and practice. Where I have participated in (or observed) successful projects, the culture has been positive and powerful, celebrating its own successes and understanding where it gets things right. It is a cliché, but those projects have a ‘can do’ attitude. There is a prevailing team spirit that doesn’t depend on artificial glue like paintballing or getting drunk together, but rather on the brilliant momentum that comes from working together and doing a job well.
(With thanks to Haruki Murakami and Raymond Carver.)
Monday, 3 August 2009
Why IT projects fail: an anecdote
A few years ago, in one of my last freelance engagements, I was managing testing for an international investment bank. My project manager reported to a programme manager, one of whose other projects was failing: three times it had missed its implementation date, and the test function had been identified as the part of the process that was letting everything else down.
Since the testing for which I was responsible was proceeding well, I was asked whether I would cast an eye over the test plans for this other project and perhaps “sort things out”. Consequently, I met with the two people who were managing the testing and spent a couple of days with them, learning about their project and the testing that they had planned and attempted to carry out.
They were both clearly competent people and they seemed satisfied with the capabilities of the people working for them. Furthermore, they had a good grasp of their project and I couldn’t find much fault with their testing (and people will approach the same task in slightly different ways, anyway). So, at the end of all their explanation, I asked the simple question: what’s going wrong?
The explanation was straightforward and will be familiar to anyone who has been involved in testing: some dates on the project plan had slipped, but the deadline had not been changed, and it was the testing that had seen its time budget diminish to make up for the slippage. The testing had not been completed and the implementation had been aborted. A second run, and then a third, had been attempted, but each time with only a small amount of time for testing. Defects were raised but there was no time to fix them, and thus the project had ended up in its current state.
I had this meeting at the end of May and there were nine weeks of testing to be done, according to the test plan. So, I went back to the programme manager and gave him the good news: there was nothing wrong with his test team; they had a good plan, and the testing – including time for bug fixing – would be complete by the end of August. I said a September implementation seemed reasonable but that an October live date would probably be prudent, especially when reporting up to the board, since it was realistic and contained the possibility of early delivery.
A couple of weeks later I bumped into one of the test managers and so I naturally asked how the project was progressing. It transpired that the programme manager had announced an *August* live date and that the test team had, once again, been set up to fail.
The moral of the story is, I think, self-evident, yet you will hear similar stories from IT people everywhere. If people are not given the time to do their jobs properly, then your projects will fail and your staff will become demotivated. People should certainly be held responsible for the timescales and deadlines they give, but there is no excuse for ignoring what they say and then holding them to account for dates that are imposed upon them.