On Cleverness

Please apply your cleverness not to writing clever code, but to finding clever ways to avoid the need for blatant cleverness.

Writing clever code is easy.

Writing clever code is fun.

But reading clever code is … frustrating and miserable and stupid.

There are two kinds of misguided cleverness I'd like to talk about today:

  • Junior Cleverness: an obsession with brevity and efficiency at the expense of readability.

  • Senior Cleverness: an obsession with abstraction and frameworks at the expense of simplicity.

I've committed both of these sins. This post is a confessional, and a reminder to myself to keep it simple.

Efficiency and Optimization

Falsehood: The shorter the code, the faster it runs.

When I started out programming, I had a totally warped idea of which kinds of operations were fast and which were expensive.

But the very idea of efficiency is often a red herring.

How fast is the computer? Fast enough.

Most of the time, there's no need to optimize: most code is not performance-sensitive, and ordinary code is just fine.

To really optimize code, we need to know four things:

  • How the code works. For that, the code has to be clear and simple. The data flow and control flow should be straightforward. The code has to be reasonable.

    If you want to optimize some code, it must first be written for humans.

  • What kind of performance we need. Optimizing without a goal is not going to lead anywhere.

    This is about the requirements of your software. Software development is more than coding, it is fundamentally about understanding the needs of people.

    Performance also needs to be measurable. Adopt an empirical mindset: If it can't be measured, it doesn't exist.

  • Which code is critical. Optimizing a line of code is useless if the program only spends a microsecond there. You will only get big wins if you tackle the parts where big wins are possible.

    You will need to get familiar with a profiler. Profilers are good at telling you how much time is spent in which part of the software, ideally with a per-line view. However, they are not very suitable for absolute performance measurements because the results tend to have a high variance.

  • How to avoid doing work. The big wins don't come from doing the same stuff faster, but from doing less work. Most of the time, that means switching to an algorithm in a better Big-O class. But in very hot spots, micro-optimizations can have immense value too: I once sped up a Python program 3× just by hoisting a getter method out of a tight loop.

    This is the part where cleverness is appropriate. But you'll want to focus your cleverness on the algorithm, not on understanding the existing code. That's why clear and simple code is a precondition for fast code.
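
The two habits above — measure instead of guessing, and hoist work out of hot loops — can be sketched in a few lines of Python. The names and numbers here are illustrative, not from the original program:

```python
import timeit

class Config:
    def __init__(self):
        self._threshold = 10

    def threshold(self):  # the "getter" in question
        return self._threshold

def count_slow(values, config):
    # Calls the getter on every single iteration.
    return sum(1 for v in values if v > config.threshold())

def count_fast(values, config):
    # Hoists the getter out of the loop: one lookup instead of len(values).
    threshold = config.threshold()
    return sum(1 for v in values if v > threshold)

values = list(range(200_000))
config = Config()

# Both variants must agree before we compare their speed.
assert count_slow(values, config) == count_fast(values, config)

# The empirical mindset: measure both variants instead of guessing.
slow = timeit.timeit(lambda: count_slow(values, config), number=5)
fast = timeit.timeit(lambda: count_fast(values, config), number=5)
print(f"slow: {slow:.3f}s, fast: {fast:.3f}s")
```

For a per-line view of where a whole program spends its time, a profiler (such as Python's cProfile) is the right tool; timeit is only for comparing small, isolated snippets.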

To summarize: efficiency is good, but it's not an end in itself. To find good optimizations, you must first understand the code, both how it works and where the critical parts are. All of this is much easier if the code is easily readable and well-designed: Before you can optimize for computers, you must optimize for humans.

Elegance and Brevity

Falsehood: The shortest solution to a problem is the most elegant solution.

Code is where bugs breed, so less code is better. But it's not really about the size of code. It's about simplicity.

The code you write expresses ideas, and ideas need a bit of space. The formatting should emphasize your intent, whether through indentation, vertical alignment, spacing between words, or blank lines.

Trying to write less code by choosing cryptic variable names or a dense code layout is not going to help; quite the contrary.

I think this is a bad example set by mathematics, where an extremely dense symbolic notation is common. A shorthand is useful where you are using a couple of well-defined concepts very often. For example, writing a + b instead of add a to b.

But in programming, we are often dealing with a lot of concepts without understanding them in their entirety. Even if you write code in a mathematical fashion where each class and concept is defined before it is used, it will not necessarily be read in that way. When debugging, I'm tracing the control flow from function to function, without taking in all the context.

That is why function names and variable names should describe the purpose of that function or variable. This way, a name carries a bit of its context around wherever it is used.

I sometimes use variables just to provide a bit of context, not because I need a variable there.

An example! In the software powering this blog, I have the concept of “drafts” that should not be published, except in development mode. So I have this piece of code:

sub add_page($self, %args) {
    my $page = $self->deps->new_page($self, %args);

    my $should_publish = ($self->publish_drafts or not $page->is_draft);
    return unless $should_publish;

    ...
}

Is the $should_publish variable necessary? Absolutely not. But I remember writing that code. I was terribly confused by the double negation return unless $self->publish_drafts or not $page->is_draft. What does that mean? Sure, I can figure it out, but code ought to be simple. Code should be obviously correct. If I have to think about it, there might be a bug. And I don't want to accidentally publish my embarrassing fan fiction just because of a little bug.

So I took that statement and split it up into smaller pieces I could understand, and gave a name to each chunk. The code might be a bit longer, but for me it is so much easier to understand.

(By the way, splitting code up into simple chunks is also great on a per-function or per-method level. As an added benefit, it makes unit tests much easier. I've written about this in Simpler Tests thanks to “Extract Method” Refactoring.)
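
To sketch that testing benefit, here is the publishing rule from the example above, pulled out into a tiny pure function — a hypothetical Python translation, not code from my blog software:

```python
def should_publish(publish_drafts: bool, is_draft: bool) -> bool:
    """A page is published unless it is a draft and draft publishing is off."""
    return publish_drafts or not is_draft

# The extracted chunk is trivial to test exhaustively, with no need to
# construct pages, dependencies, or any other machinery:
assert should_publish(publish_drafts=True, is_draft=True)
assert should_publish(publish_drafts=True, is_draft=False)
assert not should_publish(publish_drafts=False, is_draft=True)
assert should_publish(publish_drafts=False, is_draft=False)
```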

“Obviously correct” might be the gold standard for clean code. Everything – design, structure, layout, and naming – should aim to increase clarity.

Abstraction and Indirection

Falsehood: Abstractions simplify your code.

I've previously talked about extracting methods. But that also has a dark side: it is easy to go way overboard with so-called simplicity.

Beyond some point, methods become so small that they are essentially empty, only calling one or two other methods. They don't provide real value. They are all bone and no meat.

We introduce abstractions because they make something easier. Details are delegated to another function so that this code can focus on the important high-level parts – the code becomes more cohesive.

But every time we introduce an abstraction, this has some cost. There is a performance cost, though as discussed above that is not necessarily relevant. More importantly, each abstraction adds cognitive overhead.

When we use an abstraction, we have to know what it does. By using a good name for the abstraction this can be made more obvious, but in general we'll have to read the documentation. And when we have more abstractions, we have to juggle more and more of that context around in our head.

Each abstraction needs to earn its worth. The cost of using an abstraction must be less than not using it. This might seem obvious, but too often using an abstraction ends up making the code more complicated.

I have to call out Java here. Java has a very comprehensive standard library. But it's also full of sometimes-unnecessary design patterns. In some parts, you can't just create a new object, oh no. You have to create an ObjectBuilderFactory first. But since there's a singleton instance of this factory that just creates the default implementation anyway, all of that is unnecessary ceremony that doesn't actually buy you anything. Java EE is particularly bad for this misguided abstraction fetishism.
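
A caricature of that ceremony, written in Python for brevity — every name here is invented for illustration:

```python
class Connection:
    def __init__(self, host: str):
        self.host = host

class ConnectionBuilder:
    def __init__(self):
        self._host = "localhost"

    def with_host(self, host: str) -> "ConnectionBuilder":
        self._host = host
        return self

    def build(self) -> Connection:
        return Connection(self._host)

class ConnectionBuilderFactory:
    _instance = None

    @classmethod
    def get_instance(cls) -> "ConnectionBuilderFactory":
        # A singleton that only ever hands out the default builder anyway.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def create_builder(self) -> ConnectionBuilder:
        return ConnectionBuilder()

# Three layers of indirection...
conn = (ConnectionBuilderFactory.get_instance()
        .create_builder()
        .with_host("example.org")
        .build())

# ...versus just creating the object:
conn2 = Connection("example.org")
assert conn.host == conn2.host
```

The factory and the builder add two classes, a singleton, and four extra calls, yet the program ends up with exactly the same object either way.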

Abstraction also comes at a cost when debugging code. Have you seen the stack trace of a production-grade Java app using one or two libraries? The stack trace is easily a hundred call levels deep. Have fun finding the cause of the error, because most of those levels are nearly empty, and the relevant context is split across multiple functions.

The excellent Reasonable Code blog post by “JimmyHoffa” suggests that short call stacks (i.e. fewer abstractions) are an indicator of good code.

Even if abstractions are created with the best intentions, they don't always work that way: a leaky abstraction has all the costs but not the benefits of a good abstraction.

A leaky abstraction has some extra requirements that are not part of the abstraction. This could be some implicit state. Maybe there are some extra dependencies that you have to configure first. In most cases this is just due to insufficient documentation. With a leaky abstraction we have to understand how it works in order to use it correctly. Such an “abstraction” isn't any abstraction at all.

If we want to create a good abstraction, we should design it as if we meant it. Effectively, the abstraction should be like a self-contained library: with a well-defined interface, with clear documentation, with a test suite. If we can't justify all that effort, it might be better not to introduce the abstraction at all.
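
As a minimal sketch of what “design it like a library” means in practice — a small abstraction with an explicit interface, a docstring stating its contract, and its own tests (the function and its behavior are illustrative, not from my blog software):

```python
import re

def slugify(title: str) -> str:
    """Turn a post title into a URL slug.

    Contract: the result contains only lowercase ASCII letters, digits,
    and hyphens. No hidden state, no dependencies to configure first —
    everything the caller must know is stated right here.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The test suite ships with the abstraction:
assert slugify("On Cleverness") == "on-cleverness"
assert slugify("  Hello,   World!  ") == "hello-world"
assert slugify("123 go") == "123-go"
```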

Frameworks

Falsehood: This will save time in the long run.

When writing code, we see things that could go wrong. Or we anticipate how the code might evolve. There are two responses:

  • “Eh, that will never happen!” That is irresponsible.

  • “Let's solve it the right way, once and for all!” That is how frameworks happen.

The problem with doing it “the right way” is that we don't really know what will be needed in the future. This will be blindingly obvious in retrospect, but it is super tricky to make the right predictions.

The idea is of course very good: we can encode our experience in a library, make this knowledge executable. And once a problem is solved correctly, it can be reused again and again.

But because our design decisions are likely to be suboptimal, we will be locked into these bad decisions for a long time. There might be a use case we didn't consider, and now we have to fight our own framework.

These frameworks and libraries are also prone to scope creep. What was once a bunch of small utility functions slowly grows into an ultra-configurable platform. But this platform is now so configurable that it's essentially a programming language of its own, and using it takes just as much work as simply writing the code in the original system.

This inner-platform effect has been ridiculed many times, for example as Greenspun's tenth rule:

Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Scope creep is also the subject of Zawinski's law of software envelopment:

Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

But these problems are real! One time, I was working on form validation with the Mojolicious web framework. It all started by writing a few validation helpers. But when I awoke from my coding frenzy, I had written a meta-object system complete with inheritance and reflection capabilities just so that I could shave off a few lines of code. That's no good! Just writing the boring code would have been easier and more maintainable.

The lesson here isn't to stop writing clean code. But maybe the best answer to “what if…” is to raise a NotImplementedError. Don't try to handle every eventuality; just leave a note for the future. This does require sensible-ish code so that the design can be refactored later, but if in doubt: YAGNI.
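
That “note for the future” can be as small as this Python sketch — handle the case you actually have, and fail loudly on the ones you merely anticipate (the export function and its formats are invented for illustration):

```python
def export(posts: list[dict], fmt: str = "html") -> str:
    """Export blog posts in the given format."""
    if fmt == "html":
        return "\n".join(f"<h1>{p['title']}</h1>" for p in posts)
    # Anticipated but not needed yet — no plugin system, no format
    # registry, just a loud marker where future work would go:
    raise NotImplementedError(f"export format {fmt!r} is not supported yet")

print(export([{"title": "On Cleverness"}]))
```

The day someone actually needs PDF export, the error message points straight at the place to extend — and until then, no framework had to be built or maintained.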

Conclusion

So there are a lot of unhelpful ways to be clever. Is there a cure? I'm not entirely sure.

Code reviews seem to help. Not just other people going “WTF” when looking at my code. Also, me reviewing other people's code. With a bit of distance, it is easier to spot potential problems. And this experience is transferable: You can also put on your code review glasses and think about your own code: Would I want to maintain this? How can I make this easier for future readers?

I also keep coming back to this wonderful quote by Brian Kernighan:

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

Maybe, with a bit of reflection and humility, I'll manage to write less clever code from one year to the next.