Counter: Refactoring is far, far, far cheaper than duplication or wrong abstraction.

Duplication means you lose the wisdom that was gained when the abstraction was written. It means that any bug or weird case will now be fixed in only one place and stay broken in all the places where you duplicated the code.

About the rule of three: I personally extract functions for single-use cases all the time. The goal is to make the caller read as close to pseudo-code as possible. Then if a slightly different case comes up, I write it as another function right next to the original one. Otherwise, the fact that you have multiple similar cases will be lost.
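A minimal sketch of this style, with hypothetical names (the email scenario is invented for illustration):

```python
def render_welcome_template(name):
    return f"Welcome, {name}!"

def render_reminder_template(name):
    return f"Reminder for {name}: please confirm your account."

def deliver(address, body):
    # Stand-in for a real mail client; returns what it would send.
    return (address, body)

def send_welcome_email(name, address):
    # The caller reads close to pseudo-code: each step has a name.
    return deliver(address, render_welcome_template(name))

def send_reminder_email(name, address):
    # The slightly different case lives right next to the original,
    # so the similarity between the two stays visible.
    return deliver(address, render_reminder_template(name))
```

Inlining `render_welcome_template` later, or duplicating `send_welcome_email` for a third variant, are both cheap mechanical edits.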



Yeah, the rule of three is misleading: having a name for three lines of code that do “one thing” is almost always a win, and nothing prevents a future developer from either inlining that function, if it was a bad idea, or duplicating and modifying it.


Counter-counter:

Refactoring is by far the most expensive and error-prone activity in programming. It can also be one of the most valuable. But unless it's trivial, it's the most mentally arduous and time-consuming work you do as a programmer.


I disagree. I think debugging is the most expensive activity a programmer ever does. Refactoring is a luxury that you have when you don't have bugs or time pressure to ship/fix something. Debugging potentially requires you to load the entire context of the (incorrect) program into your head, including irrelevant parts, as you grope around to figure out a.) what actually went wrong, b.) why it went wrong, and c.) how to modify the existing system in a way that doesn't make it worse.

Debugging is reverse-engineering under the gun. It has huge cognitive load, especially when you're debugging a production system with a difficult-to-reproduce bug in a deep, dark part of the code. It's a nightmare scenario.

Refactoring, on the other hand, often happens with incomplete knowledge, and can be quite local. I've seen zillions of refactorings done with incomplete knowledge that were local improvements (and many that were not global improvements).


I don't think we're disagreeing.

When I say non-trivial, I don't mean local refactoring. I mean the kind of refactoring that requires you to load the entire system (or a large part of it) into your head, and figure out how to clarify and simplify it.

It is not a luxury. When done successfully, it is the only way to lower the cost of that expensive debugging. The slow debugging and the expensive refactoring are two sides of the same coin. They are both the cost of a system that is too difficult to understand and safely change. But the cost of a good refactoring need only be paid once. Whereas the cost of debugging a system you refuse to fix is levied again and again.


Refactoring is only error-prone if you don't have integration tests. The advantage of extensive integration testing is that you can relentlessly refactor without fear of breaking things.
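The idea is to pin down observable behavior rather than internals, so the implementation can be rewritten freely while the test stays green. A toy sketch (the phone-normalization example and its names are hypothetical):

```python
def normalize_phone(raw):
    # Implementation detail: can be refactored freely (regex, parser,
    # whatever) as long as the behavior below is preserved.
    digits = "".join(ch for ch in raw if ch.isdigit())
    return f"+{digits}"

def test_normalize_phone():
    # Behavior-level test: asserts on inputs and outputs only,
    # so it survives any internal rewrite of normalize_phone.
    assert normalize_phone("(555) 123-4567") == "+5551234567"
    assert normalize_phone("555.123.4567") == "+5551234567"
```

The test never mentions how the digits are extracted, which is exactly what makes the refactoring safe to attempt.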


I'd much rather have them than not, but don't fool yourself into thinking you can refactor without any fear just because you have integration tests.

No matter how many you have, they'll only be testing a tiny fraction of your possible code paths.


Tests are only sufficient if they cover the failure modes of the new abstraction. That is very often not the case.

Tests still help a lot, but they don't reduce the risk to zero.



