Gathering Requirements for Agile Development
The company I work for is a professional services company: we develop great software for other companies. Just so we’re clear, what follows is entirely my own perspective and my own thoughts, and no reflection of the company I work for, nor of the company I happened to be working for at the time. As long as you understand that, keep reading. If you don’t, try reading it again.
We were asked to develop a new application to streamline an existing paper-based workflow. There are huge benefits to doing this: saving time, expediting responses, improving data integrity… Great. The company, like so many that I have the opportunity to work for, likes to gather and build all their requirements up front. You see, Agile is something the development team does to deliver faster, once the requirements are all collected in a BRD, spec, or whatever it’s being called. But this time, we had an opportunity to develop this product alongside the end users and business stakeholders. There was no requirements document, and we were still being allowed to develop.
As a development team with four really great developers with some domain knowledge and experience, we estimated how long we thought it might take. At this point, we knew the high-level objective. We also knew the date that we wanted to get it in the hands of a pilot group. We were pretty sure we could deliver it faster than this target date, and said so. But we were careful not to over-promise, especially since no one knew exactly what we’d need to build. But we had a good idea.
Now, this organization uses Scrum. And by Scrum, I mean that they have planning sessions, daily stand-ups, and iterations. But basically, they take their requirements document and break it into user stories (which really are just broken-down requirements). Then, they develop for a number of three-week sprints, and deliver the requirements as laid out in the documentation. Wanting to fit in, we tried to follow their process, but quickly realized that without a spec document, their business stakeholders were forced to become more involved with the development. We were writing user stories on the fly, refining them during our one-week sprints, delivering them a week later, getting feedback, and building upon what we’d developed. It became obvious that we weren’t following much of the Scrum process, but were working in Kanban. It was great. The business stakeholders were getting working software on a regular basis, and the development team was working bloody hard, but at a sustainable pace, and feeling really proud as they saw their ideas and suggestions being rolled into this application.
About six weeks into development, we’d been asked to build something a certain way. Which we did. A couple of the developers asked me if I liked what they’d built. I didn’t. I didn’t think it addressed the end-user’s needs, and neither did the developers. But, it’s what we were asked for. Because we were so tightly coupled with the business stakeholders, we showed it to them right away. Once they saw it, they too realized it wasn’t what they wanted. It was what they’d asked for, but upon seeing it, realized it wasn’t right. So they asked us to tweak it. We did. Still not quite right. Tweak a bit more. Hm. Something’s not working quite right. So, rather than continue developing and demoing, we got the Business Analyst and UX Designer to sit with the developers working on this feature and work it out together. I’ll say that again: they worked it out together. It was fantastic. Together, they’d worked out exactly how it should function, behave, and look. Over the course of two days, they’d gone through six iterations of how it could work before landing on the one that actually did work.
At eight weeks, we had a fully functioning product, ready to ship to pilot. We were off from our original estimate of how long we thought it would take. We’d estimated seven weeks. That’s right. Without doing huge amounts of estimation, breaking the functionality into themes, epics, and stories, without a specification document or BRD, we’d estimated and delivered the application with a variance of five days.
Along the way, we’d learnt a whole pile of things, like what wouldn’t work, and what didn’t work, and we’d already made those changes. I guess I glossed over that – certainly worth mentioning… How did we learn what did & didn’t work? We got end user feedback. That’s right. We actually talked to our end users. Us. The UX designer, the developers, and the agile coach actually got to talk to end users. And we did it in five days more than we estimated.
It was a lot of work. Talking to, and working with, people takes a lot more effort than building something off a piece of paper. It means you don’t always know exactly what’s coming next. Or, in the case of the really undefined feature, it means working with multiple stakeholders to get it as right as possible. Someone told me that we got lucky in delivering it so close to our estimate, since we didn’t do a formal estimation with defined requirements, assumptions, and constraints. I disagree. We had some domain knowledge, and an overall view of what we were going to need to do. The team’s estimate wasn’t a guess. It was an estimate. It just happened to be derived without a lot of up-front work. And the entire team worked really hard to manage the scope while ensuring the quality of the code was never jeopardized. Adapting to changing requirements and feedback allowed us to deliver a great application. One that continues to have very minimal negative feedback.
I’ve read somewhere that working software is valued over comprehensive documentation. Score one for working software.
Retrospectives
During my last two projects, I’d run retrospectives at the end of each sprint. Nothing earth-shattering there. But I got a lot of positive feedback from the team, and genuine excitement when it came time for each retrospective. I know I’m a slightly outgoing guy who likes to have a lot of fun, but something else was going on.
I asked the team what they liked about the retrospective formats I was using. You see, each week I ran it slightly differently.
One week, we’d follow The 4 L’s (http://retrospectivewiki.org/index.php?title=Four_L%27s_Retrospective).
Another week, we tried Turning The Table (http://www.scrumalliance.org/community/articles/2013/february/iteration-retrospective-activity-turn-the-tables).
And another week we used – what seems to be a default – More/Same/Less (also known as Start/Stop/Continue), in some variation.
There were others, too, but this hopefully gives you an idea. And the reason I say that the third one seems to be a standard is that the team informed me that this was the only way they’d ever done a retrospective before, with other agile coaches or Scrum Masters.
So, when I was brought into the team, each retrospective was new, fresh, and engaging. It wasn’t the same thing over and over. There were some ‘games’ where the output wasn’t as good as others. I’m not sure if it was because of that sprint or because of the format. I know for sure that I’ll try them all at least once more. And I do believe that not every format is right for every team. But getting to know the team certainly helps.
There are lots of places to get ideas; the links above are a couple of my favourite sources that I’ve used, and there are lots of others.
My learning from working with that team? Mixing it up really made it engaging for them. They liked the variety, and liked thinking about the last iteration in different ways. Something I’ll certainly continue.