Hacker News | ssi1111's comments

Fake agile does not work. Large enterprises come with all their problems in processes, tools, skills, etc. They hear this new buzzword and latch on to it. They take it to their teams and ask them to implement it, and all the while, upper management knows only how to pronounce the word "Agile"; they have no clue what it means or the work needed to "implement Agile". So upper management is never in a position to help the teams. Most of the time, management does not want to change the existing tools or processes, or grow people's skills. And so agile fails - because people want the benefits of agile without doing the work.


I've got a coworker (absolutely brilliant fellow) who bristles whenever he hears buzzword-compliant terms like ITIL and Agile brought up anywhere near management.

Not because those terms mean anything even remotely bad (in fact, he agrees with a lot of the philosophies), but because they come pre-loaded with a lot of baggage and preconceived notions from management, and saying that we have an agile-like process is just begging for some C-level with not enough work to do to get involved and start dictating requirements they don't understand.

I doubted this until I saw it happen personally a couple of times.

The process is completely, totally meaningless (and in many cases actively harmful), unless you also get the culture change to go with it.


Either this was poorly written or the author just doesn't get it. Developers shouldn't do TDD or chase 100% code coverage because they want to build a perfect system; they should do TDD because they understand that they can never build a perfect system, and the T in TDD protects them from complete failure. This means you never attempt to build a perfect system; you only attempt to do just enough. What is just enough? The amount of functionality that makes the tests in TDD pass... never the amount of functionality that makes others 'fall down on their knees and claim "we're not worthy!"'

Thought I'd also add that technical debt is unavoidable, but unlike the article (which concludes that technical debt is OK), we recognize that it is not OK, and that is why we back it up with tests. These tests are like collateral for a loan (the technical debt).
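To make the "just enough" point concrete, here is a minimal TDD sketch in Python. The function name and the cases are hypothetical, purely for illustration: the test is written first, and the implementation is grown only far enough to make it pass.

```python
def parse_price(text):
    """Just enough implementation to satisfy the test below:
    strip a leading dollar sign and convert to float."""
    return float(text.lstrip("$"))

# Red: this test was written first and failed until parse_price
# did exactly what it demands - no more.
def test_parse_price():
    assert parse_price("19.99") == 19.99
    assert parse_price("$5") == 5.0

# Green: the suite passes, so we stop adding functionality.
test_parse_price()
```

Anything the test does not demand (other currencies, thousands separators) deliberately stays unimplemented until a test requires it; the passing suite is the collateral that lets you carry that debt safely.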


Commit to one dirty branch; assume it is broken and let the tests prove otherwise. When the tests pass, you know it is a working branch: tag and release. (I've skipped a few steps for simplicity.) Here, merges are the exception, not the rule.

This workflow won't work without automated tests... is the lack of automated testing the reason to have individual branches for features and bug fixes?
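The gate in the single-dirty-branch workflow above can be sketched as a small piece of release logic (the function, commit ids, and suite names here are hypothetical, assuming a CI step that reports pass/fail per suite):

```python
def release_decision(commit, test_results):
    """One dirty branch: every commit is presumed broken until the
    full test suite proves otherwise."""
    if all(test_results.values()):
        return f"tag-and-release {commit}"  # proven working: ship it
    failing = sorted(name for name, ok in test_results.items() if not ok)
    return f"hold {commit}: failing {failing}"

# A green commit gets tagged; a red one stays on the dirty branch.
print(release_decision("abc123", {"unit": True, "integration": True}))
print(release_decision("def456", {"unit": True, "integration": False}))
```

The point of the sketch is that the decision to release is mechanical: no merge negotiation, just the suite's verdict on the head of the branch.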


How do you test the fixes? Which branch gets deployed to the test environment?


Everyone runs dev environments locally and tests. A dev environment is really easy to set up with mongo and node. Occasionally, we’ll put stuff on a staging environment if we want to test a database or infrastructure change. Big, experimental client changes go to the alpha channel on production.
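One way to sketch that kind of local mongo-plus-node setup today is a compose file. Everything here (service names, ports, image tags, the npm script) is an assumption for illustration, not the poster's actual configuration:

```yaml
# docker-compose.yml - illustrative local dev environment
services:
  mongo:
    image: mongo:7          # assumed version; any recent tag works
    ports:
      - "27017:27017"
  app:
    image: node:20          # assumed version
    working_dir: /app
    volumes:
      - .:/app              # mount the checkout for live editing
    environment:
      MONGO_URL: mongodb://mongo:27017/dev   # hypothetical env var
    command: sh -c "npm install && npm run dev"
    depends_on:
      - mongo
```

With something like this, `docker compose up` gives each developer an isolated dev environment to test against before anything touches staging.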


So when you have two features or bug fixes in the same area, built in two different branches and deployed to isolated dev/test environments... the first time you might find out that they don't work well together is in staging? Why wait for that when you could deploy to a common environment and catch issues sooner?

Btw... I am trying to get some teams in my company to move away from branching. So I am trying to understand your view (which is exactly what these teams are doing), hence these questions...


That rarely happens in our experience. With our internal board, we know who is working on what and in what area of the code. Plus, all code is reviewed by someone working in the same area.


I agree. I would pick one dirty branch with continuous builds and tests any time over several branches. I have worked with several branches too, and it is always messy: lots of confusion and too much overhead (including the Release Manager guy).

I love trello... but I don't like the branching model... :)


Towards the end, he says "...but I do enjoy the Gherkin syntax. Not for testing, but for gathering feature requirements..." and this is exactly where Cucumber, FIT, or FitNesse is useful: you gather requirements, and because those requirements can be run as tests, you start running them as tests. Now you have a living document of your product, and product owners can read the tests (which are the requirements). If you look at such tools as mere test runners, then yes, they add an extra layer of complexity - but they were never meant to be just test runners... so that is using them for the wrong thing... :)
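A minimal sketch of the "requirements as tests" idea, in plain Python rather than Cucumber itself (the scenario text, step patterns, and cart example are all hypothetical): the readable requirement doubles as the executable test.

```python
import re

cart = []

def assert_true(cond, msg):
    if not cond:
        raise AssertionError(msg)

# Step definitions: each regex maps one line of Gherkin-style
# prose to the code that exercises the system.
STEPS = [
    (r"Given a cart with (\d+) items",
     lambda n: cart.extend(["item"] * int(n))),
    (r"When the user removes (\d+) item",
     lambda n: [cart.pop() for _ in range(int(n))]),
    (r"Then the cart has (\d+) item",
     lambda n: assert_true(len(cart) == int(n),
                           f"expected {n} items, got {len(cart)}")),
]

def run(requirement):
    """Run a Gherkin-style requirement as a test: a living document."""
    for line in requirement.strip().splitlines():
        for pattern, step in STEPS:
            m = re.fullmatch(pattern, line.strip())
            if m:
                step(*m.groups())
                break
        else:
            raise ValueError(f"no matching step for: {line!r}")

# The requirement a product owner would read - and run.
run("""
Given a cart with 2 items
When the user removes 1 item
Then the cart has 1 item
""")
```

If the system drifts from the requirement, the "document" fails instead of silently going stale - which is the whole point of these tools beyond mere test running.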


I think you should have a form where users can try out your address checks. That way, devs can quickly check whether you know the difference between a good and a bad address before they try out the API.


If you want to try some out via curl, it is pretty quick. See https://www.geteasypost.com/docs#addresses

I do agree tho and a quick form to test out all of the APIs is in the works so users can quickly see what the API will return.


I am a big fan of TDD. I read this post to see if I could find any reasonable arguments against TDD, so I can be better prepared next time I am trying to sell it. #10 - 'no clients' and #8 - 'short project' kinda make sense as reasons not to practice TDD, but they also mean there is no real product: you build something and throw it away after a few days, so there is no need for TDD, testing, or anything, really. All the other points in the post don't make any sense :)... is it just me?


The challenge is going to be to find 2 people that use blackberry...:) j/k...


You made my day :)


If Groupon is a failure, I'd like to have several of those...


