Let’s talk WCry. Why was it so bad, and what could have been done?

[Image: "Most Incident Responders on Friday"]

So, Friday May 12th, the world got a wake-up call in the form of a ransomware attack that hit a bunch of organizations, including the British National Health Service and Telefónica, a major telecom/ISP in Spain. Overall, it hit nearly a quarter million computers in almost 100 countries in just a couple of short days. I’m not going into detail here as there are a ton of articles covering that already. I do want to focus on why this hit so hard, and what could have been done to limit the massive damage that occurred so quickly.

Before I go any further, I want to give mad props to the security researchers that triggered the “kill switch” which, while not completely stopping the attack, will do a great deal to limit the damage in the near future.

We know there are variants without the “kill switch” option, and it doesn’t stop everything, but they have done a huge service to the world by discovering and slowing the current spread. Thanks!

Background

To understand why this was so bad, we need to understand a little bit about the threat. This was version 2 of a piece of malware called “WannaCry” or “WCry”. Version 1 was spotted early in the year, but didn’t make much of a splash. Obviously v2 was a whole new bag of worms. What made version 2 so bad was that it leveraged a somewhat recent vulnerability in the Microsoft SMB service (the service used to browse/copy/list/etc. files and folders on a network). This vulnerability was recently made public when the group called the “Shadow Brokers” released a bunch of stolen NSA exploits; the one leveraged in this attack was called “EternalBlue”. Because of the severity of the vulnerability, Microsoft released a patch for it, MS17-010, on March 14th.
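As a side note, if you want a rough sense of how exposed your own environment is to this kind of SMB-based worm, a quick reachability check like the sketch below can help. This is a minimal, illustrative example only: the subnet is a placeholder, it just confirms that TCP port 445 answers (it does not tell you whether a host is actually vulnerable), and you should only point it at ranges you own.

```python
# Quick exposure check: which hosts on a subnet answer on TCP 445 (SMB)?
# Minimal sketch -- the subnet below is a placeholder for your own range.
import socket
import ipaddress

SUBNET = "192.168.1.0/24"  # assumption: replace with a range you own and may scan
TIMEOUT = 0.5              # seconds per connection attempt

def smb_reachable(host: str) -> bool:
    """Return True if TCP port 445 accepts a connection on the given host."""
    try:
        with socket.create_connection((host, 445), timeout=TIMEOUT):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for ip in ipaddress.ip_network(SUBNET).hosts():
        if smb_reachable(str(ip)):
            print(f"{ip} exposes SMB (port 445) -- confirm it is patched for MS17-010")
```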

Why did it spread so much, so fast?

So, the vulnerability was known and Microsoft had released a patch to deal with it almost two months earlier. Why, then, did it spread so fast? There are a few reasons for this:

Systems were not patched – This exploded so quickly primarily because a lot of systems had not been patched. While a lot of security/IT folks got a rude wake-up call about their patch management processes, let’s put the pitchforks and torches down for a moment and look at why. First, patching is dangerous. Yep, you heard me right: applying patches is a dangerous proposition in the production world. It’s sadly too common that applying patches causes system outages, instability and much wailing and gnashing of teeth. For this reason, patches are often applied carefully and only after extensive testing, especially in environments that run older software in critical roles. This can take a while to complete.

I can tell you firsthand that applying patches notches the pucker factor up by a factor of at least 10. While this is no excuse not to patch, it is a driving factor in why so many were still vulnerable. In addition, many organizations still run older versions of Windows, some of which are no longer supported. In those cases the patches weren’t even available at first (although Microsoft has since released patches for many of those versions, going back as far as Windows XP, because of how bad this outbreak was).
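If you want a quick way to spot-check individual Windows hosts, something like the sketch below can flag machines that appear to be missing the MS17-010 update. Treat it as a rough, illustrative example: the KB numbers shown are examples that apply to some OS versions only (check Microsoft’s MS17-010 bulletin for the exact KBs for your builds), and it relies on the wmic tool, which is being phased out on newer Windows releases.

```python
# Spot-check a Windows host for MS17-010-related hotfixes.
# Rough sketch: the KB list is illustrative and varies by OS version --
# consult the MS17-010 bulletin for the KBs that apply to your builds.
import subprocess

MS17_010_KBS = {"KB4012212", "KB4012215"}  # example KBs (Windows 7 / Server 2008 R2)

def installed_hotfixes() -> set:
    """Return the set of KB IDs reported by 'wmic qfe' on this machine."""
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],  # wmic is deprecated on newest Windows builds
        capture_output=True, text=True, check=True
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    if installed_hotfixes() & MS17_010_KBS:
        print("An MS17-010-related hotfix appears to be installed.")
    else:
        print("No MS17-010 hotfix found -- patch this host as soon as change control allows.")
```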

Networks were flat – Another major factor, and something I harp on constantly when I speak, is that a lot of networks were not segmented well. In a well-designed network, only computers that REALLY need to communicate with each other are allowed to, and only over the communications that are necessary. There is no reason a receptionist in a company should be able to reach a login screen on a production database server. No reason. Ever!

Far too often, networks are designed without taking this into consideration. A lot of focus is placed on securing network perimeters while the internal structure is ignored. In a well-segmented network, the damage from many attacks can be greatly minimized because the malware or hacker cannot reach every asset on the network. It’s much better to have 2 machines infected than 2,000. Think about it.
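One simple way to keep yourself honest about segmentation is to periodically verify, from a given user segment, that the things that should be blocked really are blocked. The sketch below is a minimal, hypothetical example: the hosts and ports listed are placeholders for servers and services your workstation VLAN should never be able to reach.

```python
# Segmentation spot-check: run from a workstation VLAN; every (host, port) pair
# listed here SHOULD be unreachable. Hosts and ports are placeholders.
import socket

SHOULD_BE_BLOCKED = [
    ("10.0.20.15", 1433),  # assumption: a production database server
    ("10.0.20.16", 3389),  # assumption: RDP on a server this segment never uses
    ("10.0.20.17", 445),   # assumption: SMB on a file server outside this team's scope
]

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in SHOULD_BE_BLOCKED:
    status = "OPEN -- segmentation gap!" if reachable(host, port) else "blocked, as expected"
    print(f"{host}:{port} {status}")
```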

Users clicked on emails – Yep, this appears to have started with phishing attacks. This in turn infected unpatched machines (see above) and allowed the ransomware to spread across networks (also see above) through the EternalBlue exploit. This is so common as to be comical. If organizations do not take security awareness training seriously, this is where we end up far too often. You can have as many bars on the windows as you like, but if you open the front door and invite them in, it all means nothing!

This kills me because, of all the protections that could be put in place, this is one of the easiest to do, carries a huge ROI and is one of the most cost-effective, low-risk ways to stop something like this from getting into your organization. Think of it this way: the user is the last line of defense. After the user clicks on the email, everything else is reactive from that point on. Antivirus/endpoint protection can try to stop it, and patching can keep the malware from infecting machines (though they are still being attacked), but by then attackers may already be moving around your network. The user is the pivotal point when defending your network.

So what now?

In the short term, if you have not patched your systems, do it NOW! In addition, watch your DNS for queries to hxxp://www[.]iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea[.]com, the “kill switch” domain for the malware, check your backups ASAP and finally, TRAIN THOSE USERS NOT TO CLICK ON PHISHING EMAILS! If you need help with that last step, let me know, I can help you there.
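If your resolvers log queries, even a crude search of those logs for the kill-switch domain can point you at machines that are probably infected, since the malware looks the domain up before it runs. The sketch below is a minimal example under some big assumptions: the log path and line format are placeholders, so adapt the parsing to whatever your DNS server or resolver actually writes.

```python
# Scan a DNS query log for lookups of the WannaCry "kill switch" domain.
# Minimal sketch -- the log path and line format are assumptions; adapt the
# parsing to whatever your resolver (BIND, Windows DNS, Pi-hole, etc.) records.
KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"
LOG_PATH = "/var/log/dns/queries.log"  # placeholder path

with open(LOG_PATH, "r", errors="replace") as log:
    hits = [line.rstrip() for line in log if KILL_SWITCH in line.lower()]

if hits:
    print(f"{len(hits)} queries for the kill-switch domain -- those clients are likely infected:")
    for line in hits:
        print(" ", line)
else:
    print("No queries for the kill-switch domain found in this log.")
```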

Long term, put some focus on security 101 basics in your org, including patching schedules, network segmentation, the principle of least privilege and especially your backup processes. You would also be wise to take a hard look at your organization’s security culture and put some effort into making it as effective as possible.

If you have any stories or comments you want to share, please do it below.


Erich Kron is the Security Awareness Advocate at KnowBe4, and has over 20 years’ experience in the medical, aerospace manufacturing and defense fields. He is the former security manager for the US Army 2nd Regional Cyber Center-Western Hemisphere.
