Are We Learning Lessons From Wanna Cry? I Sure Hope So

Over the last month or so I have been on a whirlwind tour of events and webinars. It’s been a bit crazy, but never more so than the day I was in Detroit for the Converge conference. I was there to speak about ransomware. My talk started at 3pm; the date was May 12th. May 12th was the day the world caught on fire (OK, maybe just a tiny bit dramatic there…). This was the day Wanna Cry (a.k.a. WCry) shook the security world.

I first heard about this while in the speaker room checking emails and such. It started with trickles and quickly turned into a torrent of stories, warnings and opinions on what was happening. Whenever something like this happens in the world, the first few hours are always full of a mix of facts, opinions, facts presented as opinions, misreported facts and complete fabrications. I try very hard not to repeat misinformation, even if it means not being the first to make a post or tweet about it. In this case, knowing that I had a ransomware presentation happening a few hours after the most widespread and well-known ransomware attack in recent history, I had to have the facts right.

A very cool thing happened then. A few of us were in the speaker room and started sharing the information each of us had. Some folks were on the phone and some were online, but we just organically started pooling what we knew. It’s hard to describe how good this feels to folks who aren’t part of a culture like this. In this case, perfect strangers just started helping each other as everyone tried to make heads or tails of the facts and information being presented. This is why I love infosec professionals so much. We essentially fell into our incident response roles without prodding, without reservation and without ego.

We quickly sorted the wheat from the chaff, determined the most reliable or likely facts, and were able to present those to others who were dealing with the issue. It was nothing short of fantastic.

I put as much relevant information into my presentation as I could, knowing that incident responders would be in the audience and closely monitoring the situation. Something I noticed as I was doing this was that most of the things I have been preaching for the last year or so were more relevant than ever. Defense against this latest threat was essentially nothing new, so I didn’t have to change a thing on this slide. These are my key bullets on preparing for a ransomware attack, from any number of presentations over the last year:

  • Train Your Users – This is our number one suggestion because it works. An untrained staff is an incident waiting to happen. Most technical solutions are reactive, responding after an attack. It is important to have them in place to minimize the damage, but we prefer to prevent the attack in the first place
  • Have Weapons-Grade Backups – Backups do no good if they are encrypted by the ransomware, so they have to be isolated from the network
  • Segment the Network – Marketing computers rarely need to have network access to the SQL servers or accounting systems (see the quick verification sketch after this list)
  • Principle of Least Privilege – Not everyone should be an administrator. The less access users have, the less malware can spread
  • Monitor the Network – Use a system like a SIEM or IDS to alert on malicious network behavior
  • Keep Up With Patches – OS and applications need to be kept patched
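To make the segmentation bullet a bit more concrete, here is a minimal spot-check sketch you could run from an ordinary workstation. The hostnames and ports are hypothetical, invented purely for illustration; the point is simply that a marketing machine should fail to connect to SQL or SMB services it has no business talking to (SMB over TCP 445 being the very service WCry used to spread).

```python
#!/usr/bin/env python3
"""Minimal segmentation spot check: from a workstation, try to reach ports
that segmentation should block. Hostnames and ports are hypothetical."""

import socket

# Hypothetical servers a marketing workstation should NOT be able to reach
TARGETS = {
    "sql-server.internal.example": 1433,   # SQL Server
    "accounting.internal.example": 445,    # SMB (the port WCry spread over)
}

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in TARGETS.items():
        status = "REACHABLE - review your segmentation!" if can_connect(host, port) else "blocked (good)"
        print(f"{host}:{port} -> {status}")
```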

In this case, we have discovered that the attacks were not necessarily spread via phishing, but let’s be perfectly clear, this was a significant exception to the rule, so the first bullet still stands strong. We know that the patch was available for months prior to the attack. I can forgive a few weeks, or maybe a month, after a patch for an OS vulnerability labeled "Critical" is released. I have a much harder time with 2+ months. Yes, I know some folks run an older OS that did not have a patch (e.g. XP), but in all honesty, those machines should not be on the network any more, and if they are, they should have a ton of security controls in place to essentially isolate them from the rest of the network. This is 2017, folks; having a vulnerable OS available on the production network is just inexcusable.
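If you are not sure where those machines are hiding, even a crude inventory audit helps. Here is a minimal sketch, assuming a hypothetical CSV export (hostname, os, last_patched) from whatever asset or patch management tool you already have; the column names and the 30-day tolerance are my assumptions, not anyone’s standard.

```python
#!/usr/bin/env python3
"""Flag end-of-life or long-unpatched systems from a simple asset inventory.
The CSV layout (hostname, os, last_patched) is hypothetical; adapt it to
whatever your asset or patch management tooling can export."""

import csv
from datetime import datetime, timedelta

EOL_OS = {"Windows XP", "Windows Server 2003", "Windows Vista"}
MAX_PATCH_AGE = timedelta(days=30)  # assumed tolerance for a "Critical" OS patch

def audit(inventory_path: str) -> None:
    today = datetime.now()
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            host, os_name = row["hostname"], row["os"]
            last_patched = datetime.strptime(row["last_patched"], "%Y-%m-%d")
            if os_name in EOL_OS:
                print(f"{host}: {os_name} is end-of-life -- isolate or remove it")
            elif today - last_patched > MAX_PATCH_AGE:
                print(f"{host}: last patched {row['last_patched']} -- overdue")

if __name__ == "__main__":
    audit("inventory.csv")  # hypothetical export from your asset system
```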

Did we learn nothing about the importance of network segmentation from the Target breach? No, it’s not the same type of attack, but we should have learned that if devices don’t NEED to talk to each other, they shouldn’t! The same theory applies here. Had more folks segmented their networks properly, the damage would have been much more contained. In the Army, when a new system went online, we had to define the ports that needed to be open in order to operate that system. The rules were pretty simple: list the ports and protocols, and don’t even try to sneak in an any-to-any rule. We could have one-to-many or many-to-one, but each line had to have specific ports on it. This was non-negotiable. This was a pain in the butt. This was a great thing.
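If you wanted to bake that same discipline into your own change process, the check could be as simple as the toy sketch below. The rule format is invented for illustration and is not tied to any particular firewall product; the only point is that an any-to-any request gets rejected automatically.

```python
#!/usr/bin/env python3
"""Reject firewall rule requests that try to sneak in an any-to-any entry.
The rule format is a simplified illustration, not any vendor's syntax."""

ANY = "any"

def validate_rule(rule: dict) -> list[str]:
    """Return a list of problems; an empty list means the rule is acceptable."""
    problems = []
    # One-to-many or many-to-one is fine, but never any-to-any.
    if rule["source"] == ANY and rule["destination"] == ANY:
        problems.append("any-to-any source/destination is not allowed")
    # Every rule must name a protocol and specific ports.
    if rule["protocol"] == ANY:
        problems.append("protocol must be specified (tcp/udp)")
    if not rule["ports"] or rule["ports"] == ANY:
        problems.append("specific ports are required")
    return problems

if __name__ == "__main__":
    requests = [
        {"source": "10.1.2.0/24", "destination": "10.9.9.10", "protocol": "tcp", "ports": [1433]},
        {"source": "any", "destination": "any", "protocol": "any", "ports": "any"},
    ]
    for req in requests:
        issues = validate_rule(req)
        print(req["source"], "->", req["destination"], ":",
              "approved" if not issues else "; ".join(issues))
```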

I hope this was a wake-up call for organizations and security professionals across the globe. We need to do a better job of remediating or mitigating risks. Yes, it’s more work than just accepting them, but how many risk acceptances for outdated operating systems or patch deferrals do you think were in place at the NHS as they buckled under the load of WCry? Remember, accepting the risk is not the same as correcting it. With that, I leave you with this fantastic video by Host Unknown.

If you disagree or have something to add, post your comments below.

 


Erich Kron is the Security Awareness Advocate at KnowBe4, and has over 20 years’ experience in the medical, aerospace manufacturing and defense fields. He is the former security manager for the US Army 2nd Regional Cyber Center-Western Hemisphere.
