The trouble is that in dynamic environments, the discipline necessary to properly practice scrum isn't really possible. It takes longer to define the requirements than it does to code, and once coded they change mid-sprint.
The trouble with waterfall is that it tries to predict beyond the scope of a sprint, which just isn't valid. Project estimates are asymmetrical curves (likely Poisson?) and you can't add them up and expect them to cancel out: http://www.sketchdeck.com/blog/why-people-are-bad-at-estimat...
The problem is like recursive state estimation, except that when you break the project down just to estimate the pieces and add them back up, you aren't actually taking any new measurements.
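The "skew doesn't cancel" claim is easy to sanity-check with a toy Monte Carlo. A hypothetical sketch: the log-normal shape and every parameter below are my assumptions for illustration, not taken from the linked article.

```python
import math
import random

random.seed(0)

# Hypothetical numbers: each task's duration is drawn from a skewed
# (log-normal) distribution, so the "most likely" (modal) estimate
# sits well below the mean.
N_TASKS = 20
MU, SIGMA = 1.0, 0.75

mode_per_task = math.exp(MU - SIGMA ** 2)  # mode of a log-normal
naive_plan = N_TASKS * mode_per_task       # summing "most likely" estimates

# Monte Carlo: simulate 10,000 whole-project outcomes.
totals = [
    sum(random.lognormvariate(MU, SIGMA) for _ in range(N_TASKS))
    for _ in range(10_000)
]
avg_actual = sum(totals) / len(totals)

# The per-task skew doesn't cancel when summed: on average the project
# takes far longer than the sum of the most-likely estimates.
print(f"naive plan: {naive_plan:.1f}, average actual: {avg_actual:.1f}")
```

Under these assumptions the average outcome comes out roughly double the naive plan; no amount of adding tasks together averages the asymmetry away.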
On most projects it's better to estimate based on relative size to your last project, start delivering continuously, and do a forecast of completion (as opposed to an estimate): http://www.agildata.com/keep-it-lean-you-arent-ready-for-scr...
The trouble with waterfall is that it tries to predict beyond the scope of a sprint, which just isn't valid.
This assertion is nonsense. There's nothing magical about a couple of weeks such that it forms a boundary outwith which lie impossible predictions. The validity of predictions depends entirely on the understanding of the problem domain and the complexity of the solution space.
The single biggest benefit from agile in theory, IMO, is controlling risk by getting the customer in front of the software sooner, so it can be iterated based on feedback. The primary risk being controlled is building the wrong thing. But you wouldn't develop e.g. an autonomous driving subsystem for a car that way.
Agile (scrum, specifically) in practice is too often used simply to chop large tasks into bite-size stories to be fed on a conveyor belt to a team of more or less replaceable programming cogs; the sprint scope keeps blinkers on everybody so they don't look too far into the future, they just keep munching through stories.
And when agile is used in this way, not only can it be demoralizing, but also extremely inefficient: a focus on user stories typically encourages building small features that involve narrow vertical slices through the stack of an application. That's hard to parallelize effectively - related stories will affect the same bits of code and cause conflicts. If you can bundle a bunch of related features together based on how they are likely to be implemented, you can slice them up horizontally, and implement the different layers separately, using things like APIs and data models at the boundaries. This parallelizes quite well at the team level.
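One way to picture that horizontal slicing: agree on a small contract at the layer boundary, then the layers can be built in parallel against it. A minimal sketch, where the `OrderStore` contract and both sides of it are hypothetical names invented purely for illustration:

```python
from typing import Protocol


class OrderStore(Protocol):
    """The agreed boundary between the feature layer and the storage layer."""
    def add(self, order_id: str, total: float) -> None: ...
    def total_revenue(self) -> float: ...


# One team builds the storage layer against the contract.
class InMemoryOrderStore:
    def __init__(self):
        self._orders = {}

    def add(self, order_id, total):
        self._orders[order_id] = total

    def total_revenue(self):
        return sum(self._orders.values())


# Another team builds the feature layer against the same contract,
# without waiting on the storage team's implementation details.
def record_sale(store: OrderStore, order_id: str, total: float) -> float:
    store.add(order_id, total)
    return store.total_revenue()


store = InMemoryOrderStore()
record_sale(store, "o1", 10)
print(record_sale(store, "o2", 5))  # → 15
```

The contract is the only thing the two teams need to agree on up front; either side can be swapped out (a real database, a different feature) without touching the other.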
It's still not great for software design. It's still a very blinkered approach; you're not going to design an application-specific framework that makes implementing features easy. It doesn't allow any space for experienced developers who have foresight, and relies on refactoring to create reusable domain-specific abstractions. But refactoring isn't a user story, and a team munching on stories isn't in a good position to think holistically about a problem.
If you've got a bunch of good, productive developers, you've got to work really hard to get anything but good software out of them. They will probably self-organise into an effective team, whatever the methodology.
On the other hand, if you have a bunch of inexperienced, mediocre developers (myself included) then "bite-size stories being fed on a conveyor belt" is probably a good way to get productivity out of them. It's certainly a lot better than "you have 3 months to build this enormous system based on this 1 paragraph brief" which is pretty common.
You'll get a blob that gets harder and harder to modify over time. For a throwaway system and / or with piles of money, it may work until the system needs replacing.
Different people working on similar vertical slices through the system leads to slightly different parallel implementations and probably some duplication of helper logic. Refactoring won't get scheduled and it'll become technical debt in the way all duplicated logic does: not a big deal to start with, but an increasing source of bugs, with fixes and features landing in one copy but not another.
Inexperienced developers won't be disciplined in keeping their abstractions separate; they tend to intermingle their abstractions so that the boundary is fuzzy. Specifically, they lack layering discipline. Instead of libA using libB which uses libC, they'll pass bits of libA into libB and return bits of libC. When implementing a complex algorithm that joins libA to libB, they'll write code that zips the two together within its convolutions, rather than creating adapters for libA and libB so that the algorithm follows naturally. And they'll model the domain, but nothing much more abstract, and write convoluted procedural algorithms in VerbingClasses (new ThingDoer(x).doThing(y)), possibly with interfaces for mockability.
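To make the layering point concrete, here's a minimal sketch of the disciplined version. The `SourceAdapter`/`SinkAdapter` names and the shapes of the two hypothetical libraries are invented for illustration; the point is that the algorithm talks only to small adapters, so neither library's types leak into the other or into the algorithm's convolutions.

```python
# Stand-in for "libA": imagine it hands back raw tuples in its own format.
class SourceAdapter:
    """Adapter over the hypothetical source library: yields plain records."""
    def __init__(self, raw_rows):
        self._raw = raw_rows

    def records(self):
        for row in self._raw:
            yield {"id": row[0], "value": row[1]}


# Stand-in for "libB": imagine it wants writes in its own format.
class SinkAdapter:
    """Adapter over the hypothetical sink library: accepts plain records."""
    def __init__(self):
        self.stored = []

    def write(self, record):
        self.stored.append(record)


def transfer(source, sink, threshold):
    """The algorithm joins the two libraries only via the adapters,
    so it reads in its own terms rather than zipping libA and libB
    together inside the loop."""
    moved = 0
    for rec in source.records():
        if rec["value"] >= threshold:
            sink.write(rec)
            moved += 1
    return moved


src = SourceAdapter([("a", 10), ("b", 3), ("c", 7)])
dst = SinkAdapter()
print(transfer(src, dst, 5))  # → 2
```

The adapters are boring on purpose: each one knows exactly one library, and the interesting logic stays free of both.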
And to be frank, many line of business applications can cope with this. The developers are cheaper and easier to find, and IT was always a cost centre anyway. It's no way to live if you love code, though.
Of course. If we lived in a world where good, productive, experienced developers were cheap and plentiful, we could do things very differently. As it is, it can be more efficient to pay cheap developers to continually patch (or even rebuild) big balls of mud than pay expensive developers to build it properly in the first place.
And I'm not going to complain, because I benefit from the current system.
If you've got a bunch of good, productive developers, you've got to work really hard to get anything but good software out of them. They will probably self-organise into an effective team, whatever the methodology.
I'm sure there is an element of truth to this, but I doubt it's as decisive as you're suggesting in real life.
If you've got a good team of expert developers, people who are both individually skillful at producing useful code but also good team players and able to co-operate, I'd say you have a decent chance of them self-organising. I'd omit the "whatever the methodology", because these are exactly the kinds of teams who don't want or need some consultant's pet methodology limiting their options.
Of course, there is more to building useful real world software projects than just producing a good design and implementation. Even the most technically brilliant team also needs a clear goal to be effective in practice. Capturing the requirements and turning them into actionable specifications is a significant challenge in its own right, one which requires a very different skill set that even exceptional developers won't necessarily have.
I suggest that one of the big differences between a highly skilled and experienced team and a team with more modest capabilities is that the former will immediately recognise the need for clear specs and try to do something about the lack of them. The kinds of development processes and methodologies we're talking about today are designed in part to shield the latter from the same responsibility, but consequently they also rely on having very good people to do that work instead (and by corollary tend to fail hard if the communication with customers and resulting planning work aren't up to standard for whatever reason).
The big difference is that three months into a two-year waterfall project, if someone complains that the project is behind schedule, no one believes them or gives a shit. They keep hoping for a miracle.
Three months into an agile project and everyone already knows there's a problem.
"you're not going to design an application-specific framework that makes implementing features easy. It doesn't allow any space for experienced developers who have foresight,"
I wonder if this is an extension of 'programming by poking'[1]. You replace serious thought and planning with a piecemeal, try-it-and-see-what-happens process.
I don't know. Programming by poking works pretty well in Haskell, a language that's seldom accused of dumbing down. (And they've recently made it easier with typed holes.)
"It takes longer to define the requirements than it does to code, and once coded they change mid-sprint"
I've experienced one -or- the other of those, rarely both. The only times I've experienced both were due to product owners who wanted to be managers, to bring 'leadership', and so insisted on wasting my time with meetings that didn't actually lead to well-defined stories.
If I've had a product owner who wasn't a waste of space, who met with the customer(s), actually understood what they needed, and then met with us to help define a story, then while that story might take a good 10-20 minutes to fully flesh out with good acceptance criteria, it almost never changed during the sprint. We might, on the demo and/or release of the feature, realize modifications were in order, but that wasn't because we did things incorrectly at first; we did them correctly, with something releasable, and useful, and that allowed us to learn something that helped us to refine it.
However, more often I've had product owners who are wastes of space. They'll slap a story together, maybe with acceptance criteria of a line or two, maybe nothing, and then hand it off to the devs to go do. We roll our eyes, make some assumptions for all the missing details, do something, demo it to the customer, and get "That's not what I wanted! Change it!"
The former is infinitely preferable. Even if it's a highly complex feature that takes a while to build understanding and consensus on, it almost never changes once we do and the devs, customer, and product owner are all on the same page. The problem is it requires a product owner who doesn't suck, and the reality is most of them suck.
We recently lost two or three bigger customers because of stuff like this. No one talked to the actual users. The program was full of functions and workflows, solving problems no user wanted solved.
They just never used the programs, except in situations like: "we have another new useless feature, please test it."
Uh, for me the sad part is when I try to make those people write stories in more detail - "what should happen if the data fails to load", "what should the default state be", "OK, we have adding for an element, how do we handle deletes?" - and I get the question "So should we go back to waterfall?" Because we are so agile that not everything should be specified up front. But they don't get the scope: specifying a single story does not amount to waterfall...
The story is the last possible moment of decision making before the developers go and develop something. Obviously it may need further refinement at a later date once you have learned something. But, by definition, you now know -everything that can be known before development starts-, because development is about to start. As such, anything, -anything- that remains unspecified, is, again, by definition, undefined. It may be worth explaining that if failure conditions remain undefined, then you will ignore them for convenience, because whatever you decide to do will almost assuredly be wrong. If the product owners want -any- say in what it does in the event of something going wrong, and don't want it to be a surprise, and require even more work, then now is the time to say something.
> The trouble is that in dynamic environments, the discipline necessary to properly practice scrum isn't really possible. It takes longer to define the requirements than it does to code, and once coded they change mid-sprint.
Scrum is not strictly bijective with agile.
The concept of a sprint creates an entirely arbitrary deadline with an arbitrary box of work.
(I have a dog in this fight: XP with Lean trimmings, since I work at Pivotal and learnt it in Labs)
> It takes longer to define the requirements than it does to code, and once coded they change mid-sprint
It sounds like your product owner isn't doing his job. What specifically does it mean for requirements to change? If the specifications were unclear or incomplete, well you should have held out for clear and complete specifications. If you can't do that you have an organizational problem. But did the customer change their mind about what they want? Not likely, but possible I guess. The only thing left is that the owner didn't really understand the customer's needs.