The Duct Tape Programmer
62 points by 8ren on Oct 9, 2010 | 42 comments


I have a similar feeling towards TDD. I use test-driven development for the parts of my code that need it. I recently had to create a function that performed a complex calculation. TDD was great for that, because I had a set of known inputs and a set of known outputs. It made sense to write the test before the code.
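
To make that concrete, here's roughly what the test looked like, written before the implementation existed (the Finance.compound name and the interest math are hypothetical stand-ins for my actual calculation; JUnit 4):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class FinanceTest {
        @Test
        public void compoundInterestMatchesKnownValues() {
            // 1000 at 5% for 2 years: 1000 * 1.05^2 = 1102.50
            assertEquals(1102.50, Finance.compound(1000.00, 0.05, 2), 0.01);
            // zero rate: the principal comes back unchanged
            assertEquals(1000.00, Finance.compound(1000.00, 0.00, 10), 0.01);
        }
    }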

But I will not write trivial tests just to make the unit test coverage number go up. I've seen tests written for simple getters/setters that do nothing but set "this.birthday = date". If we do contact an alien civilization whose members have multiple birthdays, we will probably have to change the code anyway, and the unit tests will have to be rewritten.
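
For illustration, the kind of test I mean (Person is a hypothetical bean; JUnit 4). It does nothing but restate the assignment:

    import java.util.Date;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PersonTest {
        @Test
        public void setBirthdayStoresTheDate() {
            Person p = new Person();
            Date d = new Date();
            p.setBirthday(d);                 // the setter is "this.birthday = date"
            assertEquals(d, p.getBirthday()); // ...and the test merely restates it
        }
    }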

I may get some negative votes for saying this, but whenever I see a project with 100% unit test coverage, it's a red flag for me. It means a lot of development time was wasted writing trivial unit tests for trivial code.

Unit tests are great for the parts of the code that need them. But if your only reason for writing a unit test is to see the statistics "improve", you are wasting effort.


I agree with you about getters/setters and trivial code. But sometimes, when you're a VP of Engineering and you know that most of your developers will eventually slack off and stop writing unit tests, guess what you'll do?

You set the code coverage bar high enough.

While I disagree with said VP or lead or senior people, sometimes we have to admit that not many developers in our industry care about quality or treat testing as important.

I get that shipping is more important than anything else. But I've found people often use that as an excuse for not writing tests. Besides... the pre-TDD/Agile definition of "done" was "I wrote my code, albeit not thoroughly tested; now let me throw this pile of crap over the wall to QA".

It's just how our industry used to work until recently.


Features, quality, time: as a VP of Engineering, you need to pick two of them.


Quality and time. A feature is not done until the quality is achieved.


A good reason to write tests for seemingly trivial code like that is if you're implementing an API that others will use. I've refactored a class that implemented a public API and been saved by trivial tests that did little more than exercise getters/setters.

But there's a time and place for everything. I rarely pass 95% test coverage; the remaining 5% are usually not worth pursuing.


If you're using tests to check your API's integrity, OK, but writing tests by hand for getters and setters written by hand would be disgusting. Hopefully a very minor declaration in the code can automatically create all those tests.
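
Something in that spirit, as a sketch (a hypothetical helper built on java.beans.Introspector, not an existing library): one call round-trips a sample value through every matching getter/setter pair, so nobody writes those tests by hand.

    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;
    import static org.junit.Assert.assertEquals;

    public class BeanPropertyChecker {
        // For each readable+writable property whose type accepts the
        // sample value, set it and assert the getter returns it back.
        public static void checkRoundTrips(Object bean, Object sample) throws Exception {
            BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (pd.getReadMethod() == null || pd.getWriteMethod() == null) continue;
                if (!pd.getPropertyType().isInstance(sample)) continue;
                pd.getWriteMethod().invoke(bean, sample);
                assertEquals(pd.getName() + " did not round-trip",
                             sample, pd.getReadMethod().invoke(bean));
            }
        }
    }

A whole bean's accessors then get covered by a single line in a test, e.g. BeanPropertyChecker.checkRoundTrips(new Person(), new Date());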


I feel this is the responsibility of an audit system and not the code itself. I have never understood why people write code to verify a contract when it should be done as a rule in a proper SCM system.

A better process is to write or purchase plug-ins for your SCM system that audit all commits and either notify of a contract violation or refuse the commit without a supervisor's override. Enforcing contracts is not the domain of custom code; it is the domain of SCM, and it should therefore be implemented at the SCM level. That way you write it once and everyone benefits, whereas with tests you are (a) writing a test for every contract and (b) leaving it in the hands of the current developer to ensure the contract is met. Further, by having it audited, you can build a process around that audit to guarantee that the issue is accounted for.
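
For example (a hypothetical sketch: it assumes your SCM's pre-commit hook hands the changed paths to an audit program and refuses the commit on a non-zero exit, and the api/ directory standing in for "the contract" is made up too):

    import java.util.Arrays;

    // Hypothetical commit audit: files under api/ are the published
    // contract and may not change without a supervisor's override.
    public class CommitAudit {
        public static void main(String[] args) {
            boolean override = Arrays.asList(args).contains("--supervisor-override");
            int violations = 0;
            for (String path : args) {
                if (path.startsWith("api/") && !override) {
                    System.err.println("contract violation: " + path
                            + " changed without supervisor override");
                    violations++;
                }
            }
            System.exit(violations == 0 ? 0 : 1); // non-zero refuses the commit
        }
    }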

I am not sold on TDD, but I have not thrown it out as snake oil yet either. It is in no way the harbinger of quality it was heralded to be; logically, there are some places where it seems it can and does help, but my general feeling about the way it is sold (test everything) is that it is a waste of time.


Having 100% test coverage for a product that nobody wants is a waste of time, while having 0% test coverage for a product that many people are using is dangerous.

The challenge for a startup is finding the right balance between the two.


This is the most irresponsible thing Joel Spolsky has ever done, and I lost all respect for him after he posted this essay. My bosses at my then-current workplace began using this article as an excuse not to do proper software architecture.

I ask you: in what other industry would an endorsement of shoddy workmanship be considered an insightful viewpoint? Would you be happy if you discovered you'd hired "The Duct Tape Plumber" to fix a leak in your house?

Furthermore, I've actually known real people like the ones Joel Spolsky idealizes: "He is the guy you want on your team building go-carts, because he has two favorite tools: duct tape and WD-40." Spolsky is wrong wrong wrong: you do NOT want this guy on your team. He's the guy who ruins bearings because he uses WD-40 instead of a real lubricant, the guy who spends 40 minutes crafting a duct tape solution rather than take a 20-minute drive to O'Reilly's to get the right part. I had a guy like this work on a truck I used to own; we accidentally bought the wrong heater core, but rather than get the right one, he figured out it could fit backwards if we cut the panels around it and routed the hoses in a loop, and then he covered it in duct tape. The problem is that it wasn't really airtight, so it was no good as a heater because it didn't force the air through. I had to redo it myself with the right part, and had to reattach all the panels he had cut.


Remember that Joel Spolsky is a brand. He's got products and sites to sell. People should take his articles for what they are - opinion and marketing. Good for some, not for all.

In many cases, he has valid opinions, but just like in medicine, more opinions should be sought before making a final decision.


This is an interesting perspective; I took the opposite one. To me, the "duct-tape programmer" that Joel described was not a "duct-tape" programmer at all. He was just a programmer that knew the field and knew when and when not to apply certain languages and methodologies.

Certainly we'd all be happy to have jwz onboard our teams, but Joel holds him out as the one exception of a "good" duct-tape programmer. It's just really absurd. Joel's position seems to be "anyone that does not follow the dictates of the latest MEGA XTREME WATERFALL AGILE METHODOLOGY is going to do more damage than good", when really, like in almost everything else, the passing fads are merely empty promises hyped up to make consultants and specialists rich.

You may want to read jwz's response, I think it is fairly concise and good: http://jwz.livejournal.com/1096593.html


That story reminds me of my own heater core troubles. I had an old car that sprang a leak in the heater core, dumping antifreeze all over the inside of the car. The cost of having someone replace the heater core and "do it right" was more than the value of the car, and I live in Florida so heat is really only something it would be nice to have 2 or 3 days out of the year. I didn't use duct tape, but I did use a $2 part and some zip ties to bypass the heater core entirely. The car lasted me 3 more years until I got tired of the other problems it was developing and finally decided to give it to a friend.

What's the point of my story? Perhaps duct tape isn't always the best solution, but there are certainly times when the application of duct tape (or zip ties) is an appropriate response. The trick is identifying when and where, and being willing and able to do it when it makes sense.


"The problem is that it wasn't really air tight....."

So the problem was that you did not use enough duct tape.


Okay, first off, this article is interesting but very ancient. And I remember the shit storm it caused quite fondly :-)

A couple of obvious points were made at the time. Yes, it's a gross generalization. Yes, somewhere in there is an uncomfortable truth that is liable to make a couple of Java and COM artists very angry.

In the end, the whole point boils down to the overused saying: real programmers ship. It's as simple as that. All this artificial distinction between "Duct Tape Programmers" and whatnot is just an attempt to elaborate, somewhat unsuccessfully, on a central experience of software development.

This is why: Our code sucks, and there are many reasons for it. We may have some control over WHY our code will suck, but the core fact is just inescapable. Of course, there are people who claim their code is always beautiful (often paired with energetic statements about some framework technology) and it's worth noting that those developers are easily the worst of them all. At least the rest of us have some semblance of self-awareness.


It boggles my mind that an article from barely over a year ago counts as "very ancient", that people clamor to add "(2006)" to article titles, etc.

If more programmers actually read stuff that was published longer than (gasp!) five years ago, perhaps there would be less naive noise about how New Methodology X is going to finally fix all these nagging issues in software development once and for all.


In Udo's defense, this article did cause a stir. That does make it seem like "ancient" history...

Peter Seibel tried to "unpack some of the context"; it's probably a good place to start for anyone who's interested:

http://gigamonkeys.wordpress.com/2009/09/28/a-tale-of-two-re...


What? How? It was only a year ago. Is Windows Vista ancient history? Sheesh.


Is Windows Vista ancient history? Sheesh.

If only wishing made it so!


Thanks for the demote, but I wasn't bashing the article on grounds of its age. If you had actually bothered to read the rest of my comment, it would have been obvious that I was referring to the fact that people dig old stuff up and present it as news when the discussion itself already moved on to more insightful conclusions months ago.


I'm not demoting you (I mostly agree with you, actually), just noting that part of that lack of self-awareness comes from how many programmers seem to consider anything written more than a couple years ago obsolete.

It makes sense when dealing with surface details in rapidly changing web APIs, but otherwise that approach condemns programmers to starting from zero and wasting time re-discovering fundamentals.

"Mathematicians stand on each others' shoulders and computer scientists stand on each others' toes." — Richard Hamming


Sorry, I misunderstood then (probably because my comment got negative karma at the same time). To clarify: I don't think anything written more than a couple years ago is obsolete. But I do feel that this particular article by Joel already caused maximum havoc and people have moved on from it for a good reason. Otherwise I agree with you as well :-)


My comment was nothing personal about you, but about the industry at large.

While I've been programming since I was a kid, my academic background is in history. Viewing the industry through that lens reinforces how it's collectively in an early phase where people still fall to their deaths trying to fly with wings made of wax and feathers.

There have been plenty of people with real breakthroughs, but the average programmer probably hasn't even heard of them. I mean, their work is already obsolete, right?

See also: http://news.ycombinator.com/item?id=1743145


The opposite of a duct tape programmer is the "cargo cult programmer".

He will research long and hard the conditions in which programmers perform best, then try to break programmers down into categories and figure out which one he fits best. Then he will research the psychological flow needed for programming and spend long hours planning out the IDEAL setting for the "flow to kick in". Then he will write a couple of articles on how much flow rocks and how awesome it would be if everyone could achieve perfect flow, and then go to sleep.

No actual programming (or even architecture astronautics) has been done during this process.


Here we go again: gross oversimplifications from people who make their blogging money by being controversial rather than correct.


Is that a bad thing? :) Seriously, posts like this often enough cause a lot of interesting discussion. I usually learn a lot reading well thought-out responses to Joel's posts (though usually not from reading those posts themselves).


I liked the article, and I think it epitomizes this bullet list I saw here a few days ago:

1. Make it work

2. Make it right

3. Make it fast

All Joel is saying is that if you're up against a time constraint (like trying to ship code)... it's okay not to get to 2 or 3 right then and there.

I think that's a good lesson for hackers to learn.


Link to original?


Someone left it in the comments.


> "But when you’re looking at, ‘We’ve got to go from zero to done in six weeks,’ well, I can’t do that unless I cut something out. And what I’m going to cut out is the stuff that’s not absolutely critical." [...]

> Remember, before you freak out, that Zawinski was at Netscape when they were changing the world. They thought that they only had a few months before someone else came along and ate their lunch. A lot of important code is like that.

It makes sense that, under these circumstances, their #1 priority was to ship, to bring a product to market before others did. Even if the first version sucked, it would still be better than no product.

Unfortunately this is exactly the kind of situation that gave us IE, and we still have to deal with the after-effects today. I wonder if this would count as an argument against the often-touted claim that "competition makes products better". It sure didn't cause browsers to be better in the late 90s. In fact, we didn't start to see decent browsers until after the browser wars, after IE's dominance had been established.


I've worked with architecture astronauts before.

We were a bootstrapped startup, and the need to use bleeding-edge tech (our product was hardly rocket science), reinvent the wheel (hey, look at this: I spent a week and created my own grid control!), and generally overengineer seemed to blind the astronauts to the fact that if we didn't have anything to sell, we were going to run out of money in a matter of one or two months.

We were "using" agile, and the lead engineer's approach to test cases was "well, if the function failed the test, let's just change the test to make it pass". Sheesh.

Needless to say, the company struggled to meet payroll for an extended period of time, since we couldn't release anything to generate revenue.


I work on enterprise software in a financial company, and IMO, this quote from Zawinski would lead to a dangerous short-term view in that context: "unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that."

When you work in enterprise software, this approach might work on the first project, maybe the second, but eventually a point will arrive where a requirement takes longer to deliver for the simple reason that there are no unit tests. Without unit tests, bugs will be found later in the project life cycle, possibly even after delivery. If customers are delivered software that is unusable, the fact of delivery becomes a negative pretty quickly.

On the positive side, though, the tension between what's essential for the customer and what developers need to learn to keep their careers alive is an interesting idea, and is something we've been looking at recently where I work. Is it a good idea to introduce a technology just because a programmer feels s/he should learn it? I'm not so sure.


If you need unit tests, write unit tests. If you don't, don't.

So in your case you would notice that for iteration 3 you need some unit tests. Then you can write them in iteration 3. You didn't need them in iteration 1, hence no need to write them in iteration 1.

The concept of technical debt will probably be inserted into the discussion here. I think it is irrelevant. Yes, there is always technical debt. For example, I currently have a technical debt of several billion dollars because I haven't installed big data centers like Google or Apple have. So should my web app ever need to scale to a gigantic level, I'll have a problem. But not now. So it is a debt I can live with.


The problem with that train of thought is that the point when you realize you need tests (you're shipping software with stupid bugs that could be prevented with a few tests) is often after the point at which it's easiest to write them. I'm not going to be the TDD zealot and say you need 100% coverage and you need to have your tests written up front. However, I will say that sooner is usually much better than later when writing tests.


Guess what most developers do?

Let's admit it, most devs don't test. There, I said it. Most devs just want to write more features, more new code, more design patterns, more cool algorithms.

Dev 1: "Testing? Testing is not my job. Go ask the QA to do it".

Dev 2: "Yeah... we're just not born to do testing, it's just not how our brain is wired".

Dev 3: ... more excuses why dev shouldn't test ...

I get it that devs aren't good at usability testing. But the majority of devs I've met don't even want to test their own code. Some don't want to write unit tests; some don't even want to test the functionality of the code they just wrote.

sigh


I think there is something incredibly powerful about a continuous integration system that is fully bootstrapped. Being able to know the instant something is broken, and to run a single build script after an SCM checkout and have a fully functioning system, should be the goal of every enterprise system, IMO.

Proper unit tests with broad coverage are foundational to both of these ideas.


In my opinion, less code means fewer bugs means better code. Simple as that. And by code I also mean everything that is auto-generated, inherited, or linked.

Of course there are rare cases where you need to trade size for speed. But it should be done with careful consideration.

Edit: I forgot to add that I always thought 'duct tape programming' meant writing hacks to fix hacks - which is bad.


Can we have the link updated?

When I followed it from my RSS reader (just now-ish), it pointed to: http://localhost/www.joelonsoftware.com/items/2009/09/23.htm...

It should be: http://www.joelonsoftware.com/items/2009/09/23.html

thanks.


Sorry, HN doesn't let me edit the link.


I find it interesting to compare Spolsky's idealized programmer with the one Michael Lopp gives: http://www.randsinrepose.com/archives/2005/03/20/free_electr...

Personally, I think these pieces say more about their authors than they do about good programmers.


Short-term prototyping only. Anything long-term and big, and you'll be hating yourself for not having clear standards and patterns. :P

Keep in mind you can make something that is overengineered suck hard too. There is a happy medium between overengineering and hacking together crap.


I wrote a response to this some time ago here: http://www.deserettechnology.com/journal/joels-duct-tape-pro...


Software moves so fast that you're always doing it wrong according to someone "in the know."

The point is to have a whole bunch of tools at your disposal, the knowledge to use each tool correctly to its maximum effect, and the courage of character to use that tool despite its unpopularity.

If you take away from this article that shipping trumps all, or COM multithreading sucks, or templates in C++ are buggy, you're missing the point entirely.




