Why Spectre demands more elegantly coded software

Write it well, write it efficiently, write it as though the underlying hardware isn’t going to get any faster. Because it isn’t


For the foreseeable future, developers are going to have to get used to coding for slower hardware. Leaving aside the headline slowdowns seen by some systems with Meltdown patches applied, the longer-lasting problem is Spectre. As its prescient namers realized, this flaw will haunt the IT world for years to come.

Spectre is the gift that keeps on giving. Mitigating it requires recompiling applications with new instruction sequences that work around speculative execution vulnerabilities. But that’s just putting a sticking plaster on a festering wound. Fundamentally, we need new processor designs, ones that work differently, because what we have now just isn’t secure. Unfortunately, new CPU designs aren’t likely to appear any time soon.
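To see what those workarounds defend against, here is a minimal sketch in C of the classic Spectre variant 1 pattern, the bounds-check bypass, alongside one common hardening technique, branch-free index clamping. The array names and sizes are hypothetical and chosen purely for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical data, for illustration only: the attacker controls 'x'. */
static uint8_t array1[16];
static size_t  array1_size = 16;
static uint8_t array2[256 * 512];

/* Classic Spectre v1 gadget (bounds-check bypass): if the branch is
 * mispredicted, the body runs speculatively with an out-of-bounds 'x',
 * and the value of array1[x] leaks into the cache through the
 * dependent load from array2. */
static uint8_t victim_unsafe(size_t x)
{
    if (x < array1_size)
        return array2[array1[x] * 512];
    return 0;
}

/* One common mitigation: clamp the index with a branch-free mask so
 * that even a mispredicted path can only ever read index 0.  Real
 * implementations (e.g. the Linux kernel's array_index_nospec()) use
 * hand-tuned sequences so the compiler cannot turn the comparison
 * back into a predictable branch. */
static uint8_t victim_masked(size_t x)
{
    size_t in_range = (size_t)(x < array1_size); /* 1 or 0, computed as data */
    size_t mask     = (size_t)0 - in_range;      /* all ones or all zeros   */
    size_t safe_x   = x & mask;

    if (x < array1_size)
        return array2[array1[safe_x] * 512];
    return 0;
}

int main(void)
{
    return victim_unsafe(3) + victim_masked(3);  /* exercise both paths */
}
```

The recompilation mentioned above applies defenses of this general kind automatically where the compiler supports them; GCC’s -mindirect-branch=thunk retpoline option and MSVC’s /Qspectre flag are examples, though they target different Spectre variants and carry their own performance costs.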

For many development projects, the lack of total security may not matter too much. The risk of compromise is fairly low, and no attacks have been proven in the wild, at least not yet. But mission-critical applications require a higher level of data security than “It’s probably OK.”

There are some Spectre-immune systems, but they tend to be slow or old or both. The Raspberry Pi is one, and I’m writing this article on another: a pre-2013 Intel Atom box. It doesn’t do out-of-order (OoO) execution, which also means it doesn’t do anything particularly quickly. This is now the choice facing everyone who cares about security: fast and flawed, or slow and safe (or at least safer).

Until that changes, until a new generation of chips somehow circumvents the flaws and gives us full-speed computing without speculative execution vulnerabilities, software developers must step up and make a difference. Yes, efficient coding is suddenly back in fashion, because developers can no longer assume that tomorrow’s hardware will be faster than today’s. But what does this mean in development terms?

For parallel processing on multiple cores it means a better understanding of the workflow in the application being developed, which is hard. In fact it makes the traveling salesman problem look like child’s play. Figuring out which parts of an application’s workload can be split apart, processed in parallel, and then safely recombined requires more than merely understanding the application’s code. It requires insight into the users’ minds. How will they use feature X? Which data streams will require the most work?
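The mechanics of the split, process in parallel, recombine pattern are the easy part; the argument above is that knowing where a real application can be split is what’s hard. A minimal fork/join sketch in C, using POSIX threads on a made-up workload (summing an array), looks like this.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy fork/join example: split a sum across worker threads, then
 * recombine the partial results.  The hard part in real applications
 * is finding these split points, not the mechanics. */

#define N_THREADS 4
#define N_ITEMS   (1 << 20)

struct chunk {
    const double *data;
    size_t        begin, end;
    double        partial;          /* per-thread result, combined later */
};

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    double s = 0.0;
    for (size_t i = c->begin; i < c->end; i++)
        s += c->data[i];
    c->partial = s;
    return NULL;
}

int main(void)
{
    double *data = malloc(N_ITEMS * sizeof *data);
    if (!data)
        return 1;
    for (size_t i = 0; i < N_ITEMS; i++)
        data[i] = 1.0;

    pthread_t    tid[N_THREADS];
    struct chunk ch[N_THREADS];
    size_t       step = N_ITEMS / N_THREADS;

    for (int t = 0; t < N_THREADS; t++) {
        ch[t] = (struct chunk){ data, t * step,
                                (t == N_THREADS - 1) ? N_ITEMS : (t + 1) * step,
                                0.0 };
        pthread_create(&tid[t], NULL, sum_chunk, &ch[t]);
    }

    double total = 0.0;
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += ch[t].partial;     /* safe recombination after join */
    }

    printf("sum = %.0f\n", total);
    free(data);
    return 0;
}
```

Compile with -pthread. This toy workload decomposes trivially because every element is independent; most real workloads don’t, which is exactly the point.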

Matters are a little simpler in automated server applications, but only a little, because some processes just don’t lend themselves well to parallelization. Even splitting a video file for conversion by multiple cores or threads is harder than it sounds. Unfortunately, much of what we do in computing depends on what we just did. As in the real world, order is important. There’s no such thing as a free lunch, and parallelization can only do so much: Amdahl’s law applies.
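Amdahl’s law puts a number on that ceiling: if a fraction p of the work can be parallelized across n cores, the best possible overall speedup is 1 / ((1 − p) + p / n). A small sketch with illustrative values shows how quickly the returns diminish.

```c
#include <stdio.h>

/* Amdahl's law: with a fraction p of the work parallelizable across
 * n cores, the overall speedup is bounded by 1 / ((1 - p) + p / n). */
static double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.90, 0.95 };  /* parallel fraction p */
    const int    cores[]     = { 2, 4, 8, 16, 64 };

    for (size_t f = 0; f < sizeof fractions / sizeof fractions[0]; f++) {
        for (size_t c = 0; c < sizeof cores / sizeof cores[0]; c++)
            printf("p = %.2f, n = %2d cores -> speedup <= %.2f\n",
                   fractions[f], cores[c],
                   amdahl_speedup(fractions[f], cores[c]));
        putchar('\n');
    }
    return 0;
}
```

Even a workload that is 90 percent parallel tops out below a 9x speedup on 64 cores; the serial remainder dominates from there.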

Then there’s single-threaded performance, now forcibly constrained by the speculative execution flaws. Targeting in-order architectures like Atom would mean crafting software that’s carefully written for efficiency rather than thrown together with little thought for optimization. KolibriOS, an operating system written almost entirely in assembly language, absolutely flies on my old Atom machine, for example.

Your next application probably won’t be written in assembler, but you don’t have to get that close to the underlying hardware to find performance-enhancing tricks that will shave off a few more cycles here and there. It’s likely that the majority of software shipped today could be made faster—even an order of magnitude faster—with the necessary development resources.
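As a hypothetical but representative example of the kind of trick meant here, none of which needs assembler: hoisting repeated work out of a loop turns a quadratic string pass back into a linear one.

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* strlen() in the loop condition re-scans the whole string on every
 * iteration, so the loop does O(n^2) work in total. */
void uppercase_slow(char *s)
{
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* Hoisting the length out of the loop does the same job in O(n),
 * with no cleverness and no loss of readability. */
void uppercase_fast(char *s)
{
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}
```

Multiply small wins like this across a codebase and the order-of-magnitude claim stops sounding fanciful.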

But implementing better code efficiency is going to be a big challenge. Commercial pressure on development teams means that any drive for code elegance and optimization tends to take second place to the need for, well, completion. As project managers might put it, “We’re six weeks past the [unrealistic] deadline and the product ships on Monday, so forget about quality and just get the damned thing finished!”

Code efficiency and optimization have languished for years because other factors have seemed more important. The results are plain to see. We’ve all experienced the new version of a product requiring vastly more resources than its predecessor. Using Microsoft as an example, Word 2.0 will run happily on an old 386 at 20MHz. Sure, the latest version does a lot more, but it’s running on hardware that’s more than a thousand times more powerful, so it really should. That’s not to pick on Microsoft, because there are countless other examples in which hardware performance gains have been soaked up by code bloat and feature creep. We’ve got used to it because developers haven’t had to worry about CPU cycles, but now they do.

Spectre gives the software development industry an opportunity—and an imperative—to change tack. It’s no longer enough to rely on ever-faster processors to take care of programming bloat. Thanks to Spectre, efficient coding has become a selling point, to the board and also to the market. Good coders should be able to compensate for any Meltdown/Spectre slowdown and more besides, as long as they’re given the time and resources to do so.

Write it well, write it efficiently, write it as though the underlying hardware isn’t going to get any faster. Because, at least for the foreseeable future, it isn’t.

This story, "Why Spectre demands more elegantly coded software," was originally published by IDG Connect.

Copyright © 2018 IDG Communications, Inc.