Waah! WannaCry shifts the blame game into high gear

Every security crisis presents the opportunity to point fingers, but that's just wasted energy. The criminals are at fault—and we need to work together to stop them

More and more, information security seems to be about finding someone to blame for the latest crisis. The blame game was in full gear within hours of the WannaCry ransomware outbreak, and even after a few days there’s still a lot of anger to go around. People want heads to roll, but that won’t help contain the current damage or spur improvements to minimize the impact of future attacks.

The WannaCry ransomware infected so many machines because its creators crafted it to use multiple infection vectors, including traditional phishing, the remote desktop protocol (RDP), and a vulnerability in the SMB protocol. It took advantage of the fact that people don’t always recognize phishing links, and that many systems aren’t running the latest versions of their applications or operating system.
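To make the SMB vector concrete, here is a minimal sketch (in Python, Windows-only, purely illustrative) of how an administrator might check whether SMBv1, the protocol version the worm component abused, is disabled on a host. It assumes the documented LanmanServer “SMB1” registry value and local privileges to read it; a real assessment would also confirm that the MS17-010 patch is installed.

    # Sketch only: check whether SMBv1 is disabled via the LanmanServer
    # registry value. If the value is absent, the platform default applies,
    # which on older Windows versions means SMBv1 is still enabled.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    def smb1_disabled() -> bool:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                value, _ = winreg.QueryValueEx(key, "SMB1")
                return value == 0
        except FileNotFoundError:
            # Value not set: assume the older default, with SMBv1 enabled.
            return False

    if __name__ == "__main__":
        print("SMBv1 disabled" if smb1_disabled()
              else "SMBv1 may be enabled; review this host")

Disabling SMBv1 wouldn’t have excused anyone from patching, but it’s the kind of simple hardening check that matters more than assigning blame.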

Those are the facts. But arguing that if one factor or another hadn’t been present then this outbreak would never have happened shows a complete misunderstanding or willful disregard of the complexities of IT, software development, and the technology ecosystem.

Stop with the victim-blaming 

Blaming the victim is a common tactic. Right now scorn is being heaped on individual users for not having applied Windows updates, for using older and no-longer-supported operating systems such as Windows Vista, or for not recognizing phishing attacks.

While it’s important to teach users to recognize scams and to be careful about their online activities, no amount of training will ever be sufficient to keep up with the increasing sophistication of phishing. Likewise, users still have trouble seeing why they can’t stick with the software they’re comfortable with if it still works. It’s an awareness challenge, but yelling at them for running old stuff won’t make things better.

Software is going to have bugs

As always, you can hear the grumbling about software being infested with bugs and how Microsoft should not release software containing vulnerabilities. But the reality of software development dictates that the number of vulnerabilities in the code can only be reduced—bug-free software is just a lovely fantasy.

Yes, way back when, Microsoft and other tech companies failed to focus on security during the development lifecycle, but those days are gone. Vendors now focus on hardening software and patching on a regular basis. Microsoft patched the bugs in this case as soon as it learned about them, which is all it could do. It even went the extra mile and released patches for no-longer-supported systems, despite end-of-life policies that say those systems don’t receive updates.

Spies will spy

With WannaCry, the NSA is taking its share of the blame once again. Like clockwork, critics shout that the agency should not be stockpiling vulnerabilities and creating its own exploits, but rather reporting the flaws to vendors so that they can be patched. Even Microsoft president and chief legal officer Brad Smith lashed out in a blog post: “This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem.”

Just as bug-free software is a fantasy, demanding that spies refrain from creating spying tools is going to fall on deaf ears.

Setting aside the question of whether the NSA should be doing its own bug-hunting and exploit development, plenty of people argue that the NSA was negligent for letting the tools be stolen. But security researchers believe the Shadow Brokers got their hands on this cache through an insider who had access to the tools. At this point, it feels like a stretch to blame the NSA for the theft.

Perhaps the NSA needs better vetting of who can access the tools in the first place, but malicious insider activity is not the same as negligence. It appears that the NSA notified Microsoft as soon as the leak of the tools seemed likely.

But the bottom line is that this attack code was going to happen. Even if the NSA had never created EternalBlue and the other tools, it’s very likely that someone would have created attack code soon after Microsoft patched the bug. Exploit and malware writers routinely reverse-engineer software patches to figure out the underlying flaw and then develop their own exploit to trigger it. That’s the reality of exploit development.

When Microsoft rates a vulnerability as “critical,” it believes criminals would be able to develop a working exploit within 30 days. WannaCry appeared about eight weeks after the flaws were patched, and it appears to be based on exploit code from the Metasploit penetration testing framework, not the actual NSA implant. How much longer it would have taken for a working exploit to circulate in underground circles is an academic question.

IT and security are doing their best

Here comes everyone’s favorite scapegoat: IT, eternally shamed for not patching systems, using older systems, or not prioritizing security over everything else. The tendency to assume that IT is negligent or incompetent reflects a profound misunderstanding of the kind of challenges IT faces.

IT can’t upgrade older systems when a custom application purchased years ago, or a critical application that requires the older OS, can no longer be updated because the vendor has gone out of business. Organizations with serious cost constraints, such as government agencies and nonprofits, tend to be particularly vulnerable.

Still want to blame IT for not patching? It could be that a new CTO just came on board and discovered there is no documentation of, or insight into, the current network architecture. There is no way to roll out patches to vulnerable systems “immediately” until that inventory is complete. Or perhaps the critical system is already in a maintenance window for a different critical patch, for an Apache web server, an Oracle database, or an enterprise application, and it would be highly irresponsible to roll out multiple updates at once.

IT is already under a lot of pressure due to constraints on time, money, and manpower. Accusing IT of falling down on the job can be wildly unfair, particularly if senior management never made funds available to upgrade systems, hire more IT staff, or invest in “better” technology.

Does the buck stop with security professionals and security vendors? After all, despite the investments organizations have made in security technology and defenses, WannaCry bypassed controls and successfully infected users. But no set of defenses stops every attack, and it doesn’t make sense to complain that white-hat bug hunters should have found and reported the flaws earlier.

Work together—the bad guys already do

Nothing is gained from all the finger-wagging and sanctimony. It only makes it harder to react during an attack and to make the changes that would prevent becoming a victim the next time.

Resilience is the name of the game, and it requires a collaborative approach. Hardening the network and segmenting it so that malware and attackers can’t easily move laterally requires cooperation among IT, end users, and business stakeholders. Understanding which parts of the infrastructure need upgrades and what those upgrades would cost, whether in new hardware, user training, or even new application development, means creating an actual plan and roadmap that balances competing schedules and deadlines.

Regularly backing up systems and making sure the backups are ready to restore is part of business continuity, not traditionally part of security, which goes to show that not every part of the answer has to come from the security team.
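As a small illustration of “making sure the backups are ready to go,” here is a minimal sketch in Python that compares a restored copy of a directory against the original by hashing files. The paths are hypothetical placeholders, and a real restore test would also exercise the applications that depend on the data; this only shows the spirit of verifying backups rather than assuming they work.

    # Sketch only: confirm a test restore matches the source data by
    # comparing SHA-256 hashes. The paths below are illustrative placeholders.
    import hashlib
    from pathlib import Path

    def digest(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_restore(source_dir: str, restore_dir: str) -> list:
        """Return relative paths that are missing or differ in the restored copy."""
        source, restore = Path(source_dir), Path(restore_dir)
        problems = []
        for src_file in source.rglob("*"):
            if not src_file.is_file():
                continue
            rel = src_file.relative_to(source)
            restored = restore / rel
            if not restored.is_file() or digest(src_file) != digest(restored):
                problems.append(str(rel))
        return problems

    if __name__ == "__main__":
        issues = verify_restore("/data/records", "/mnt/restore-test/records")
        print("Restore verified" if not issues else "Mismatched files: %s" % issues)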

Ultimately, we need to assign blame where it belongs: to those who created WannaCry and to the criminals using ransomware to bilk victims out of money. And to defeat them, we need to pull together and collaborate on finding real solutions.

Copyright © 2017 IDG Communications, Inc.