Hacker News

I like how he tries to supplant agile with something that doesn't even attempt to solve the same problems that agile (like waterfall before it) attempts to solve.

Both agile and waterfall methods attempt to give you a pattern by which you can predict when software will be delivered. That's the main business value. The suits don't care how the devs operate, provided they can get things on time.

Waterfall tried to do this by understanding the problem as fully as possible up front, likening it as much as possible to previously solved problems, and involving gurus to say "that kind of problem will take X amount of time to resolve", thereby creating milestones and delivery dates.

Agile instead said "look, we don't know enough at the start of a project to do that. Let's instead keep track of everything we want to do. Let's try to estimate each set of tasks (stories) individually. And then let's rank them in priority. We can measure how good our estimates are, we can modify them, we can generate more data and determine what we'll have at a milestone, and then can either push back the due date, or at least recognize that we won't be able to ship the entire feature set at that time."

Continuous delivery can be done with either one of those (it almost never is in waterfall, but it -could- be). But by itself it offers a business nothing for planning purposes. All it does is allow an immediacy, an "as soon as it's done, it's out in front of the users". This may or may not be a good thing for the business, but it doesn't solve the basic issue that business people want to know when they can expect a given set of features to be live.



That's one of the most level-headed and succinct descriptions of the differences between Waterfall and Agile I have read.

I would also add that, in my completely personal and anecdotal experience, they are not one-size-fits-all methods: in general I find Agile much more suited to developing a "product", while Waterfall makes more sense when the aim is to deliver a "project".


My takeaway from the post is that he wants to replace Agile (clause 1: "individuals and interactions over processes and tools") with an ill-defined process (CD) and a big old pile of tools.

Call me an agile zealot fanboy or whatever, but that doesn't feel to me like progress (and, inter alia, more or less the opposite of what Dave T was complaining about in his own agile is dead thing).


I'm not a management guru, so I might be off-base, but Agile isn't about sprinting tasks. Tasks, broken up enough, are much smaller than a sprint. It's about having a deliverable at a certain short-term goal post. You cut corners and smash half-finished features together to reach that goal so that you have a product you can pivot around (the word "pivot" always makes me throw up a little - maybe... reassess the direction of the project?). With test-driven development and continuous integration tools you constantly have a deliverable, so there is no longer any reason to sprint. You can "pivot" or whatever at any point in time.

The real question is: is test-driven development / continuous integration worth doing? CI isn't too controversial, but for TDD there is no clear answer, and it really depends on the domain, what language you work in, etc.


So the way I have always handled it -

Product owners create stories. These are titled things like "As a (type of user), I want to be able to X". The point of the title is to determine who this actually benefits. Then, they attempt to define it with acceptance criteria. These are a list of "what does it mean to solve this need". Ideally, it implies a set of tasks, and gives a decent starting point for QA to start testing. This is stuff like "When the user clicks X, the system shall Y" and "Should the system fail to do Y, it will instead (failure mode), and (inform the user? Stay silent? Whatever)". Sometimes the product owner needs help from the devs to determine this.

The devs will then add tasks to the story. The story should be able to be completed in one sprint; the tasks are, indeed, much faster. They're tracked only insofar as to see progress towards the story's completion, but they're not nearly as important as the story itself. A story with half of its tasks complete is not done; the feature is not implemented, it's not ready to go out. When all the tasks are done, the story is handed off to QA to vet; at that point the story is done, and it can be shipped.

The dev team is only ever committing to what can be done in a sprint. They should have an idea of how many story points they can handle in a sprint, such that they can work with the product owner to determine the stories they'll work on in that sprint.

When the estimates start lining up with what is actually achieved (that is, the team has a velocity of, say, 40 story points per sprint. And they're completing 40 story points per sprint), the product owner can start planning around it. "We have four sprints until the business wants the next milestone. As such, I have assigned 160 points worth of stories to try to get in for that". And that's reasonable. And then, if anything emergent comes up, or new stories take priority (a 'pivot', if you will), the product owner knows they can't manage it; they either need to replace a currently existing story with that emergent/newly prioritized story, or, they need to slip the schedule.
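The planning arithmetic described above can be sketched in a few lines. This is a toy illustration (the function name and numbers are mine, not from the comment, apart from the 40-point velocity and 160-point milestone), showing how a stable velocity turns a backlog into a forecast, and how an emergent story forces a swap or a slip:

```python
# Toy sketch of velocity-based milestone planning: a stable velocity
# converts story points into a sprint count, so new work either
# displaces existing stories or pushes the date.

def sprints_needed(backlog_points, velocity):
    """Forecast how many sprints a backlog will take at a stable velocity."""
    full, remainder = divmod(backlog_points, velocity)
    return full + (1 if remainder else 0)

velocity = 40            # points the team reliably completes per sprint
milestone_backlog = 160  # points assigned to the milestone

print(sprints_needed(milestone_backlog, velocity))       # 4 sprints: fits

# An emergent 13-point story doesn't fit for free:
print(sprints_needed(milestone_backlog + 13, velocity))  # 5 sprints: the schedule slips
```

The point isn't the arithmetic, which is trivial; it's that the product owner can see the trade-off explicitly instead of pretending the new story is free.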

And that can actually work. It requires honesty, transparency, and a desire to actually get shit done, but it can work. The problem is oftentimes people or cultures value CYA, politics, and 'leadership' over getting shit done, and all of those make agile little more than a scapegoat for why everything is on fire.

And it's only one way of being agile. Kanban, for instance, still has stories and tasks as described, and it still offers velocity, though measuring it slightly differently, but it isn't concerned about the sprint boundaries.


What always annoyed me about Story Points is that it's a measure of complexity, not time. At least that's what Story Points are supposed to stand for.

When you state that a team should know how many Story Points it can handle, it would make much more sense to see Story Points as some measure of time. If over the last few Sprints you've completed 38 to 42 Story Points with 2 full-time devs, then one could state that a single developer on average finishes 20 Story Points per Sprint. And if a Sprint is 2 weeks (10 days), that would mean roughly 2 Story Points per day for each dev.

Yet, in every business environment I've been in, the SCRUM fanatics always state that "No, story points are a measure of complexity". In practice, to me, this makes no sense. At least not if one wants to use historic Story Points to estimate how much work can be completed in the next Sprint.


The problem is that time isn't something you can chop up neatly.

For ease of use, let's say that a story point represents 8 hours of actual work. That gives us 168/8 points a week, or 21 points. Of course, we lose 7 of those to sleep, 4 to the weekend and another 5 to the evenings during the week, leaving us with 5 points per week. You probably lose a point to lunch, coffee, bathroom breaks and mingling with coworkers and another point to meetings and interruptions.

That leaves you with 3 points out of 21 potential points in a week, and you aren't getting 3/5 done per day. Using story points instead of hours tries to get people to treat the sprint as a unit instead of time as a unit, since time can be split up to a very fine degree. Just because you work 40 hours a week doesn't mean you can complete 20 2 hour tasks.
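The budget above can be written out explicitly. A small sketch, using the parent comment's own numbers and its assumption of 1 point = 8 hours (the labels and dict structure are mine):

```python
# Sketch of the weekly story-point budget: start from the "raw" points
# in a 168-hour calendar week and subtract everything that isn't work.

HOURS_PER_POINT = 8
raw_points = 168 // HOURS_PER_POINT   # 21 points in a calendar week

losses = {
    "sleep": 7,                       # ~56 hours
    "weekend (waking hours)": 4,
    "weekday evenings": 5,
    "lunch/breaks/mingling": 1,
    "meetings/interruptions": 1,
}

productive_points = raw_points - sum(losses.values())
print(productive_points)              # 3 points of real work out of 21 potential
```

Which is exactly why treating the sprint, not the hour, as the unit keeps the estimate honest.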


I'm not that dogmatic. But complexity should loosely correspond to time, I would think. A deeply complex task is going to entail more work than a simple one.

That said, there's a good reason to say it measures the complexity, and not the time. You want to keep distance between your estimates and time, or else the business people are likely to come to you and say "You said this will take 6 hours, so I expect it tomorrow". Nevermind that it requires another task to be done first, that it's blocked on getting something from another department, that you only ever committed to delivering it at the end of the sprint, and that you underestimated how long it would take anyway (but it doesn't matter because you overestimated something else so it comes out in the wash), -you- said it would only take 6 hours!


Thanks for the explanation - that really clarified things for me. The breakdown between story, task, sprint and milestone is very interesting and well thought out. Good estimate generation is the holy grail.


Good estimate generation is the holy grail

If estimating is what you want to learn, James Shore has some good posts on the topic:

http://www.jamesshore.com/Blog/Agile-and-Predictability.html

http://www.jamesshore.com/Agile-Book/estimating.html


Well as a freelancer it's something I've sorta given up hope on being able to do. Thanks again for the information. Hopefully it'll give me some ideas.


The trouble is that in dynamic environments, the discipline necessary to properly practice scrum isn't really possible. It takes longer to define the requirements than it does to code, and once coded they change mid-sprint.

The trouble with waterfall is that it tries to predict beyond the scope of a sprint, which just isn't valid. Project estimates are asymmetrical curves (likely Poisson?) and you can't add them up and expect the errors to cancel out: http://www.sketchdeck.com/blog/why-people-are-bad-at-estimat...

The problem is like recursive state estimation, except when you break it down just to estimate and add it back up you aren't actually taking any new measurements.

On most projects it's usually just better to estimate based on relative size to your last project, start delivering continuously, and do a forecast of completion (as opposed to an estimate):

http://www.agildata.com/keep-it-lean-you-arent-ready-for-scr...

(edit: spelling)


The trouble with waterfall is that it tries to predict beyond the scope of a sprint, which just isn't valid.

This assertion is nonsense. There's nothing magical about a couple of weeks such that it forms a boundary outwith which predictions become impossible. The validity of predictions depends entirely on the understanding of the problem domain and the complexity of the solution space.

The single biggest benefit from agile in theory, IMO, is controlling risk by getting the customer in front of the software sooner, so it can be iterated based on feedback. The primary risk being controlled is building the wrong thing. But you wouldn't develop e.g. an autonomous driving subsystem for a car that way.

Agile (scrum, specifically) in practice is too often used simply to chop large tasks into bite-size stories to be fed on a conveyer belt to a team of more or less replaceable programming cogs; the sprint scope keeps blinkers on everybody so they don't look too far in the future, they just keep munching through stories.

And when agile is used in this way, not only can it be demoralizing, but also extremely inefficient: a focus on user stories typically encourages building small features that involve narrow vertical slices through the stack of an application. That's hard to parallelize effectively - related stories will affect the same bits of code and cause conflicts. If you can bundle a bunch of related features together based on how they are likely to be implemented, you can slice them up horizontally, and implement the different layers separately, using things like APIs and data models at the boundaries. This parallelizes quite well at the team level.

It's still not great for software design. It's still a very blinkered approach; you're not going to design an application-specific framework that makes implementing features easy. It doesn't allow any space for experienced developers who have foresight, and relies on refactoring to create reusable domain-specific abstractions. But refactoring isn't a user story, and a team munching on stories isn't in a good position to think holistically about a problem.


If you've got a bunch of good, productive developers, you've got to work really hard to get anything but good software out of them. They will probably self-organise into an effective team, whatever the methodology.

On the other hand, if you have a bunch of inexperienced, mediocre developers (myself included) then "bite-size stories being fed on a conveyer belt" is probably a good way to get productivity out of them. It's certainly a lot better than "you have 3 months to build this enormous system based on this 1 paragraph brief" which is pretty common.


You'll get a blob that gets harder and harder to modify over time. For a throwaway system and / or with piles of money, it may work until the system needs replacing.

Different people working on similar vertical slices through the system leads to slightly different parallel implementations and probably some duplication of helper logic. Refactoring won't get scheduled and it'll become technical debt in the way all duplicated logic is: not a big deal to start with, but an increasing source of bugs and features fixed / implemented in one place but not another.

Inexperienced developers won't be disciplined in keeping their abstractions separate; they tend to intermingle their abstractions so that the boundary is fuzzy. Specifically, they lack layering discipline. Instead of libA using libB which uses libC, they'll pass bits of libA into libB and return bits of libC. When implementing a complex algorithm that joins libA to libB, they'll write code that zips the two together within its convolutions, rather than creating adapters for libA and libB so that the algorithm follows naturally. And they'll model the domain, but nothing much more abstract, and write convoluted procedural algorithms in VerbingClasses (new ThingDoer(x).doThing(y)), possibly with interfaces for mockability.
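The layering discipline described above can be made concrete with a small sketch. This is entirely hypothetical (libA/libB are stand-ins, and `SensorAdapter`/`SinkAdapter`/`smooth` are invented names): instead of zipping the two libraries together inside the algorithm's convolutions, each one is wrapped behind a small adapter, and the algorithm is written against the adapters alone:

```python
# Hypothetical sketch of adapter-based layering: the algorithm never
# touches libA or libB types directly, so their details stay contained.

class SensorAdapter:
    """Adapts a hypothetical libA reading source to a plain iterator of floats."""
    def __init__(self, lib_a_source):
        self._source = lib_a_source

    def readings(self):
        for raw in self._source:      # libA's record format stays in here
            yield float(raw["value"])

class SinkAdapter:
    """Adapts a hypothetical libB writer to a plain emit(x) call."""
    def __init__(self, lib_b_writer):
        self._writer = lib_b_writer

    def emit(self, x):
        self._writer.append(x)        # libB's write protocol stays in here

def smooth(sensor, sink, alpha=0.5):
    """The algorithm itself: exponential smoothing, expressed only in
    terms of the two adapters, so it 'follows naturally'."""
    acc = None
    for x in sensor.readings():
        acc = x if acc is None else alpha * x + (1 - alpha) * acc
        sink.emit(acc)

# Usage with fake stand-ins for libA (dicts) and libB (a list):
out = []
smooth(SensorAdapter([{"value": 1}, {"value": 3}]), SinkAdapter(out))
print(out)  # [1.0, 2.0]
```

Swapping libA or libB then means rewriting one adapter, not untangling the algorithm.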

And to be frank, many line of business applications can cope with this. The developers are cheaper and easier to find, and IT was always a cost centre anyway. It's no way to live if you love code, though.


Of course. If we lived in a world where good, productive, experienced developers were cheap and plentiful, we could do things very differently. As it is, it can be more efficient to pay cheap developers to continually patch (or even rebuild) big balls of mud than pay expensive developers to build it properly in the first place.

And I'm not going to complain, because I benefit from the current system.


If you've got a bunch of good, productive developers, you've got to work really hard to get anything but good software out of them. They will probably self-organise into an effective team, whatever the methodology.

I'm sure there is an element of truth to this, but I doubt it's as decisive as you're suggesting in real life.

If you've got a good team of expert developers, people who are both individually skillful at producing useful code but also good team players and able to co-operate, I'd say you have a decent chance of them self-organising. I'd omit the "whatever the methodology", because these are exactly the kinds of teams who don't want or need some consultant's pet methodology limiting their options.

Of course, there is more to building useful real world software projects than just producing a good design and implementation. Even the most technically brilliant team also needs a clear goal to be effective in practice. Capturing the requirements and turning them into actionable specifications is a significant challenge in its own right, one which requires a very different skill set that even exceptional developers won't necessarily have.

I suggest that one of the big differences between a highly skilled and experienced team and a team with more modest capabilities is that the former will immediately recognise the need for clear specs and try to do something about the lack of them. The kinds of development processes and methodologies we're talking about today are designed in part to shield the latter from the same responsibility, but consequently they also rely on having very good people to do that work instead (and by corollary tend to fail hard if the communication with customers and resulting planning work aren't up to standard for whatever reason).


The big difference is that three months into a two-year waterfall project, if you complain that we're behind schedule, no one believes you or gives a shit. They keep hoping for a miracle.

Three months into an agile project and everyone already knows there's a problem.


It's actually the opposite.

In a waterfall project you know you are behind schedule.

In an agile project you haven't planned two years ahead and so you simply adjust future expectations.


And when agile is used in this way, not only can it be demoralizing

You can say that again.


These days agile is as much an excuse for micro-management as anything else.


"you're not going to design an application-specific framework that makes implementing features easy. It doesn't allow any space for experienced developers who have foresight,"

I wonder if this is an extension of 'programming by poking'[1]. You replace serious thought and planning with a piecemeal, try-it-and-see-what-happens process.

[1] https://news.ycombinator.com/item?id=11628080


I don't know. Programming by poking works pretty well in Haskell, a language that's seldom accused of dumbing down. (And they've recently made it easier with typed holes.)


So, prod beta testing? ;)


"It takes longer to define the requirements than it does to code, and once coded they change mid-sprint"

I've experienced one -or- the other of those, rarely both. The only times I've experienced both were due to product owners who wanted to be managers, to bring 'leadership', and so insisted on wasting my time with meetings that didn't actually lead to well defined stories.

If I've had a product owner who wasn't a waste of space, who met with the customer(s), actually understood what they needed, and then met with us to help define a story, then while that story might take a good 10-20 minutes to fully flesh out with good acceptance criteria, it almost never changed during the sprint. We might, on the demo and/or release of the feature, realize modifications were in order, but that wasn't because we did things incorrectly at first; we did them correctly, with something releasable, and useful, and that allowed us to learn something that helped us to refine it.

However, more often I've had product owners who are wastes of space. They'll slap a story together, maybe with acceptance criteria of a line or two, maybe nothing, and then hand it off to the devs to go do. We roll our eyes, make some assumptions for all the missing details, do something, demo it to the customer, and get "That's not what I wanted! Change it!"

The former is infinitely preferable. Even if it's a highly complex feature, that takes a while to get understanding and consensus on, it almost never changes once we do and the devs, customer, and product owner are all on the same page. The problem is it requires a product owner who doesn't suck, and the reality is most of them suck.


Yeah most user stories should actually read like this:

As a product manager who hasn't spoken to any actual users, I'd like the following features to impress my boss.

As a product manager who overheard a Sr Manager muttering something...


Holy smokes, that sounds all too familiar..

We recently lost two or three bigger customers because of stuff like this.. No one talked to the actual users.. The program was full of functions and workflows, solving problems no user wanted solved.

They just never used the programs, except for situations like: "we have (another new useless feature), please test".


Uh, for me the sad part is when I try to make those people write stories in more detail, like "what should happen if data fails to load", "what should the default state be", "OK, we have adding for an element; how do we handle deletes?". I get the question "So should we go back to waterfall?". Because we are so agile that not everything should be specified up front. But they don't get the scope; a single story doesn't amount to waterfall...


Yeah, that sounds like a broken process.

The story is the last possible moment of decision making before the developers go and develop something. Obviously it may need further refinement at a later date once you have learned something. But, by definition, you now know -everything that can be known before development starts-, because development is about to start. As such, anything, -anything- that remains unspecified, is, again, by definition, undefined. It may be worth explaining that if failure conditions remain undefined, then you will ignore them for convenience, because whatever you decide to do will almost assuredly be wrong. If the product owners want -any- say in what it does in the event of something going wrong, and don't want it to be a surprise, and require even more work, then now is the time to say something.


> The trouble is that in dynamic environments, the discipline necessary to properly practice scrum isn't really possible. It takes longer to define the requirements than it does to code, and once coded they change mid-sprint.

Scrum is not strictly bijective with agile.

The concept of a sprint creates an entirely arbitrary deadline with an arbitrary box of work.

(I have a dog in this fight: XP with Lean trimmings, since I work at Pivotal and learnt it in Labs)


> It takes longer to define the requirements than it does to code, and once coded they change mid-sprint

It sounds like your product owner isn't doing his job. What specifically does it mean for requirements to change? If the specifications were unclear or incomplete, well you should have held out for clear and complete specifications. If you can't do that you have an organizational problem. But did the customer change their mind about what they want? Not likely, but possible I guess. The only thing left is that the owner didn't really understand the customer's needs.


Agile isn't about predictability so much as it is about 1: ensuring productive work is always going on; 2: ensuring the thing that ships is what the client wants; and 3: ensuring that something functional will actually ship.

Agile grew as a bulwark against the bad old ways of the "enterprise" trenches that led to the original "software crisis". There was a time when big software projects would not deliver, at all, as often as they would succeed. And even when they succeeded they often delivered the wrong thing. Daily builds, continuous integration, using the software itself as the source of truth (instead of elaborately negotiated specs), and focusing on iteration, these are how you can extract productivity from any team, and how you can keep on target and deliver something of value. It's not necessarily the best way, but it's one reasonably reliable way to get stuff done.


>>> We can measure how good our estimates are, we can modify them, we can generate more data and determine what we'll have at a milestone, and then can either push back the due date, or at least recognize that we won't be able to ship the entire feature set at that time.

Good waterfall project managers do that too. You can't bend reality to fit a Gantt chart.



