My job wants us to plan sprints 12 weeks in advance. That includes decomposing epics into stories with full descriptions and acceptance criteria, which anyone who's actually worked on an agile team knows is a complete waste of time. By the time you get six weeks in, things will have changed too much.
By far the most useful thing AI has done for me is let me plow through all that in a fraction of the time it would have taken before. They're spending a ton of money to make employees more efficient at the pointless bullshit they themselves put in our way.
And before some scrum master shows up and tells me how important stories are, I'm not arguing against planning. I'm arguing against pointless over-planning to make a bunch of suits happy, when teams aren't even given the kind of time it would actually take to plan that far in advance.
I've tried explaining this to people till I'm blue in the face. It's simply unreasonable to plan specific tickets out that far. We don't know what we don't know. And that assumes business priorities and project requirements won't change (two things that almost always happen). On top of that, there's the mindset that we can embark on a multi-week or multi-month project and stop and start it on a whim.
Ditto. In my experience it comes from our customer more than internally. They want all the risk reduction and stability provided by the old "waterfall" methods, but with the flexibility and speed of agile. Of course, those two things don't mix. You need months, if not years, to plan a project the way they used to. We can't cram that amount of planning into a week.
Worse yet, once all those stories are made, they don't want us creating more. It takes a damn review board to get anything changed.
This is why I always champion technical leadership. Many, many people seem to think this is unnecessary, but I think it's common sense. The layman might see building software like building a house. We know basically all the components of the house, we know we need to lay the foundation before raising walls and installing a roof. Software development is nothing like this. You can add a roof before the walls, you can dig an entire basement before providing any method of descending into it from the ground floor.
In my company we do have predominantly "technical" leadership, as in almost everyone has an engineering background, but we still run into problems. For software folks that's mainly because leadership either tends to come from other non-software domains (EE and ME mostly), or it tends to be people who honestly weren't very good at the actual engineering side of things to begin with.
There are all these non-coding elements to a developer's job that make me think AI replacing developers will take a lot longer than everyone thinks.
What portion of a job must be done by AI before the human loses their job? 80%? Even 98% (and we’re nowhere near that) will produce a ton of friction when applied to a team of developers.
The problem is that the narrative of imminent job displacement is the prevailing one, and it becomes self-fulfilling.
My hot take is that as that percentage increases, salaries will go up asymptotically, until you hit 100%, and then they crash to 0. If 80% of your job can be done by AI, I'm going to give you the work of 5 people, since the remaining 20% of each of five jobs adds up to one full workload. When it's 99%, I'll give you the work of 100 people.
If you're okay with the work being done poorly and without review, then sure. Otherwise, it'll take the same amount of time and be done worse. I wouldn't trust one person alone to review 5 people's work, let alone 100.
You’re arguing semantics. OP is hypothesising a future where the quality of work is comparable to that of a human. If you don’t believe that that’s on the cards, just say it, but you’re intentionally misrepresenting the hypothetical.
If 80% is "done by the AI", who is responsible for the AI's inevitable failures? Inference is wrong more than 0% of the time, so failure at some point is a certainty.
How many 9s until you're comfortable? Even then, knowing 1000 tasks will likely have at least one foundational issue somewhere, how do you audit? "Pretty please do the needful," with another agent told to "please ensure they do the needful"? Do you review all 1000 inputs and outputs? Don't get me wrong, I'm familiar with the "send it" ethos all too well, but at scale it seems like quite the pickle.
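To put rough numbers on the 9s question, here's a minimal back-of-envelope sketch in Python (the success rates are purely illustrative assumptions, not claims about any real model):

    # Probability that at least one of n independent tasks fails,
    # given a hypothetical per-task success rate p.
    def p_any_failure(p: float, n: int) -> float:
        return 1 - p ** n

    print(p_any_failure(0.999, 1000))    # ~0.63: with "three 9s", 1000 tasks hit a failure ~63% of the time
    print(p_any_failure(0.99999, 1000))  # ~0.01: even "five 9s" leaves ~1% odds of a buried failure

No finite number of 9s makes auditing go away; it just changes how often you have to go looking.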
Genuinely curious how most people consider these angles… I was once tasked with building a model to perform what could literally have been a SQL query. When I brought this up, it was met with "well, we need to do it with AI." I don't think a human's gonna want to find that needle in a haystack when 100,000 significant documents are originated… but I don't have to worry about that one anymore, thank goodness.
We don't know if it will take the form of a net drop in headcount at all, or if white-collar labor will be broadly replaced with the same headcount of low-paid, fungible AI operators.
But we can say that mass displacement of labor in one form or another is the goal because it's the only way to explain the amount of investment that's going into it.