“Refactoring” brings out my inner skeptic. I’ve heard too many Agile enthusiasts who sound like they code at Hogwarts, waving their wands while yelling refactorum! at badly written but functional code so it magically realigns itself into a form that adheres to good programming standards.
People Who Know Such Things tell me I’m not being entirely fair, as if being entirely fair is something People Who Publish Blogs are supposed to aspire to. At times, they say, getting code to do what it’s supposed to do first, and then rewriting it into a better form can make more sense than trying to write it both right and well at the same time.
So long as what goes into production is both, and not incomprehensible gibberish that passes all tests without anyone knowing why, it probably doesn’t matter. If the plan is to refactor the application later on … after, say, the entire backlog has cleared … my guess is that refactorum is akin to how I used to build electronics back in my electric fish days … by hiding an untangleable collection of wires inside a nice-looking box with Letraset labels, hoping nobody would ever need to open things up to fiddle with my creation.
Hold that thought.
One of the thirteen principles that make up the KJR Manifesto is that to optimize the whole you have to sub-optimize the parts. Put in organizational terms: if each organizational silo worries only about its own problems, it will take actions that do more damage to the enterprise than the benefits those actions deliver to the silo can pay for.
This is a bedrock principle. Take it to the bank. It’s been demonstrated to be true in a wide assortment of engineering contexts, organizational engineering being just one of them.
It appears to conflict with another bedrock principle: That empowering individual employees to find innovative solutions to the problems they deal with every day gives companies a game-changing competitive advantage. Why that is ought to be obvious, but just in case: (1) The more employees a company has trying to make improvements, the more improvements it can make in a given unit of calendar time; and (2) an employee close to the action is in a better position to figure out what’s likely to actually work than a member of a small elite improvement team who has just become an instant expert in the area.
And yet, when a company allows individual employees to innovate, doesn’t that risk optimizing a part at the expense of the whole?
The answer lies in refactoring. When refactoring, the new-and-improved version of the software has to pass the same tests as the code it replaces. Which means it has to handle the same inputs with equivalent logic, processing them to create the same outputs.
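To put that in concrete terms, here’s a minimal sketch in Python (the invoice example and function names are mine, purely illustrative): the clumsy original and its refactored replacement have to pass the identical test, or what happened wasn’t refactoring.

```python
# A deliberately clumsy but working implementation.
def total_invoice(line_items):
    total = 0
    for i in range(len(line_items)):
        total = total + line_items[i]["qty"] * line_items[i]["unit_price"]
    return total

# The refactored replacement: same inputs, same outputs, clearer logic.
def total_invoice_refactored(line_items):
    return sum(item["qty"] * item["unit_price"] for item in line_items)

# If both versions don't pass the identical test, it isn't refactoring.
def check(fn):
    items = [{"qty": 2, "unit_price": 10.0}, {"qty": 1, "unit_price": 5.0}]
    assert fn(items) == 25.0

check(total_invoice)
check(total_invoice_refactored)
```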
Empowered employees who find ways to improve things are subject to the same constraint: For the most part they’re responsible for finding ways for their area to process the same inputs, whether those inputs are raw materials, forms, or what-have-you, turning them into the same work products as before.
Turning them into “better” work products is another matter. If “better” simply means fewer defects, there’s no problem. But if “better” means a change in specifications of any kind, external review may be needed. That’s because every employee’s output is another employee’s input, and no matter how much better something is, it being different in some way could mess things up somewhere down the line.
Nor is that the end of the matter. Imagine, for example, that the area in which innovation is happening is Application Development, and the innovation in question is to refactor all algorithms into APL instead of the company’s current standard development environment, wrapping them in a REST bubble so they interoperate without difficulty … refactoring refactoring, as it were.
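For anyone who hasn’t met the “REST bubble” idea, here’s a rough sketch, assuming Flask (the endpoint, payload, and toy algorithm are all hypothetical, not a real design): once the algorithm hides behind an HTTP interface, callers neither know nor care what language implements it.

```python
# Hypothetical "REST bubble": callers see only HTTP, so the algorithm
# behind it could be implemented in APL, Python, or anything else.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/average", methods=["POST"])
def average():
    # Swap the implementation freely; as long as the same inputs
    # produce the same outputs, no caller ever notices.
    values = request.get_json()["values"]
    return jsonify({"average": sum(values) / len(values)})

if __name__ == "__main__":
    app.run()
```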
Talk to APL developers and they’ll uniformly affirm that APL lets them develop twice the functionality in half the time. That’s an improvement to take to the bank, isn’t it?
Not necessarily. APL is known for fast development and efficient execution. It’s also known for being write-only code, and that’s if a second APL developer looks at it.
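You don’t need APL to see what “write-only” means. A hedged illustration in Python (both functions are mine and behave identically): one is terse and clever, the other is written for whoever has to maintain it next.

```python
# Terse, clever, and effectively write-only:
ma = lambda x, n: [sum(x[i:i + n]) / n for i in range(len(x) - n + 1)]

# Same behavior, written for the next maintainer:
def moving_average(values, window):
    """Return the average of each consecutive window-sized slice."""
    averages = []
    for start in range(len(values) - window + 1):
        chunk = values[start:start + window]
        averages.append(sum(chunk) / window)
    return averages

assert ma([1, 2, 3, 4], 2) == moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
```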
Add the need for ongoing maintenance, and the equation gets more complicated.
Not so complicated that stifling employee innovation is the right answer. But complicated enough that pure “business refactoring” with no oversight probably isn’t the right answer either.
Great article. BTW, anyone who is doing agile and refactoring only “after the entire backlog has cleared” is doing agile like this.
APL? I’m impressed.
refactorum!
I’ll have to remind myself not to be drinking coffee when I read the first paragraph of this newsletter. Something always cracks me up.
I’ve had too much experience trying to manage software refactoring, but never saw the interesting analogy you draw to the business side of things … and there are definitely write-only business processes out there too.
First, fixing it later never works – there is always new work that is critical to be done. Fixing it later means we hope the problem goes away, and if it doesn’t, we hope someone smarter than us will fix it.
I believe writing code in good form makes getting it to work easier. The reason is that if something doesn’t work, you can get another person to look at it, understand what you did, and help find the solution.
If you aren’t writing easily understood code – good luck with refactoring it. BTW I’ve seen extensively documented code that didn’t work the way the documentation said, so documentation is not the answer either.
It requires knowledge of the end result, good programming practices, and a little bit of art to write code that is easily understood and maintainable. Standardization helps, and by that I mean any code needs to be internally consistent with itself.
I can’t tell you how many times I’ve had to ‘fix’ code written by a contractor who was unfamiliar with the business requirements, and the fixes usually end up being to remove an extra bit of code the developer put in. It’s one of my favorite ways of editing code (smaller is often easier to understand).