Why you can’t buy yourself out of being the next Equifax
We all know about the Equifax hack by now. The private information of 143 million Americans is in the hands of some darkweb douchebags. According to Equifax, the hack exploited a two-month-old vulnerability (CVE-2017-5638) in Apache Struts 2 that allows attackers to run code via a specially crafted HTTP header. Like many vulnerabilities, this one has a knock-on effect for any applications built on top of Struts 2; Cisco alone has 21 separate products affected by it. And to make it all worse, patching Struts is not as simple as doing a pip install --upgrade or yum upgrade. It takes work to rebuild existing applications against the updated version of Struts. This in part explains why Equifax and others were slow to fix the problem even though it was labeled a critical issue when it was announced in early March. There are a couple of workarounds, but it doesn’t appear that Equifax applied even those. Equifax was hacked in late May and didn’t announce it until early September. Exploit tests have been available on GitHub for months, and exploits were seen in the wild within two days of the original vulnerability announcement. Hell, even nmap can find it! How Equifax didn’t at least know about the vulnerability is still a mystery. Given their history of being lackadaisical with their cybersecurity, I guess it’s no surprise really.
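Knowing whether you’re exposed starts with knowing what you run. As a minimal sketch (the inventory and app names below are invented for illustration), here’s what checking a dependency inventory against the affected Struts ranges from the advisory (2.3.5–2.3.31 and 2.5–2.5.10) might look like:

```python
# Hedged sketch: flag Apache Struts versions inside the ranges affected
# by CVE-2017-5638 (2.3.5-2.3.31 and 2.5-2.5.10, per the advisory).
# The inventory dict below is made up for illustration.

def parse_version(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

AFFECTED = [((2, 3, 5), (2, 3, 31)), ((2, 5, 0), (2, 5, 10))]

def is_vulnerable(version: str) -> bool:
    v = parse_version(version)
    v = v + (0,) * (3 - len(v))  # pad so "2.5" compares like (2, 5, 0)
    return any(lo <= v <= hi for lo, hi in AFFECTED)

inventory = {"billing-app": "2.3.20", "portal": "2.5.10.1", "intake": "2.3.32"}
flagged = [app for app, ver in inventory.items() if is_vulnerable(ver)]
print(flagged)  # only billing-app falls in an affected range
```

The point isn’t this particular script; it’s that an automated inventory check would have screamed about this for months.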
The problem arises when companies think they can just buy their way out of this. Equifax hired Mandiant, who ain’t cheap. And this is a trend with many companies, both new and old. They purchase Trend Micro Deep Security or Symantec’s Endpoint Protection. They buy some fancy Palo Alto firewalls and/or hire a pen-testing company. What they DON’T do is address the internal culture that lets it be okay to not patch servers, to run old application servers (WebSphere, anyone?), and to not modernize the way they build their applications. I’ve seen compromised database servers left in production because there was no failover and taking them offline meant downtime. The perception (I’m guessing here) is that it’s easier to do nothing and just hide when the village is attacked, hoping the Vikings don’t find you.
I’ve seen both ends of this spectrum in my work. At one extreme are the tech-hipster companies moving so fast that they don’t take the time to build security in, or assume their cloud provider does it all for them. At the other end are the big, slow corporations that see change as equal to risk and hide behind three layers of edge gear.
So, managers and C-suite peeps, this is for you: you can’t buy some individual piece that’s going to fix this kind of fundamental problem. No matter who tells you that, it’s not true. If it’s a vendor, they’re trying to make a quick buck. If it’s a technical person on your team, they’re just scrambling for quick fixes or trying to buy some time. Building an InfoSec team to run Nessus scans is NOT the fix either. You need to face the fact that your company is possibly (probably?) not capable of evolving, or at the very least that your company is openly change-averse. Those are hard things to digest, right? But they’re even harder to explain in the wake of a hack. And believe me, I know from personal experience that fixing a broken company culture is not a simple thing to do.
So what CAN you do? How do you prepare yourself for a future where you are under constant barrage? Here are some quick thoughts in no particular order:
- Understand what immutable infrastructure is, and implement it as much as possible. If your information systems are built correctly, you can be back up and running while your internal teams figure out what happened. This is powerful.
- Monitor everything, and pay people to constantly make your monitoring better. At the very least, a good logging and monitoring policy lets you go back and find out what happened after an incident. (This has the added benefit of also addressing PCI and SOC requirements.)
- Security as Code. Automation, automation, auto-freaking-mation. If you don’t know what this means, then hire someone that does and implement it.
- Commit to the principle of least privilege. Everywhere. And yes, that means a lot of privilege escalation tickets.
- Get your teams working together to address these issues. Shake things up! Go DevOps. Go DevSecOps. Do something to challenge the inertia of the status quo.
- For those legacy systems that don’t fit the new immutable, transient model (which is most of them, right?), automate backups and patching, implement security tooling, and start figuring out how to sunset them.
- The “cloud” does not mean that someone else is doing your backups. If your team says “we do daily snapshots” or “RDS does it for you,” ask them what the restoration policy is and when they last tested it.
- Hire people that understand how systems and applications work. This means you need managers that know how to hire people that understand how systems and applications work.
- Create an InfoSec presence and make sure they can work with your development and ops teams to address security holistically from the ground up. If your InfoSec teams and developers can’t have a beer together there is something wrong.
- Understand that moving fast and breaking things is great, but handing your customers’ information to bad guys because you were moving fast and breaking things is a quick way to go out of business.
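To make the immutable-infrastructure point above concrete: the core idea is that you never SSH in and patch a suspect box; you destroy it and boot a fresh one from a known-good image. A toy sketch of that “replace, don’t repair” flow (all names invented for illustration):

```python
# Hedged sketch of immutable infrastructure: a suspect instance is never
# patched in place; it's removed and replaced with a clean build from a
# known-good image. Image/fleet names here are illustrative.
from dataclasses import dataclass, field
import itertools

_ids = itertools.count(1)

@dataclass
class Instance:
    image: str
    id: int = field(default_factory=lambda: next(_ids))

class Fleet:
    def __init__(self, image: str, size: int):
        self.image = image
        self.instances = [Instance(image) for _ in range(size)]

    def replace(self, instance: Instance) -> Instance:
        # No ssh-and-patch: kill the box, boot a clean one from the image.
        self.instances.remove(instance)
        fresh = Instance(self.image)
        self.instances.append(fresh)
        return fresh

fleet = Fleet(image="app-v42", size=3)
suspect = fleet.instances[0]
fresh = fleet.replace(suspect)
print(suspect.id != fresh.id, fresh.image)  # True app-v42
```

In a real setup the “image” is an AMI or container image built by your pipeline, and the swap happens behind a load balancer while the forensics team picks over the old disk.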
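On the monitoring bullet: “log everything” only pays off after an incident if the logs are structured enough to query. A minimal sketch using only the Python stdlib (field names are my own choice, not a standard):

```python
# Hedged sketch: structured (JSON) logging with the stdlib, so that
# post-incident you can filter events by field instead of regexing
# free-form text. Logs go to a StringIO here just to keep it self-contained.
import json
import logging
import io

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("auth")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False  # keep the demo output in our stream only

log.info("login failed for user=%s from=%s", "svc_batch", "10.0.0.7")
event = json.loads(stream.getvalue())
print(event["level"], event["msg"])
```

In production you’d point the handler at stdout or a file and ship it to whatever aggregator you use; the win is that “show me all failed logins from this IP” becomes a query, not an archaeology project.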
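“Security as Code” from the list above is easiest to see with an example: encode a policy as a test that runs in CI against your infrastructure config, so a bad change fails the build instead of shipping. The rule format below is invented for illustration; a real version would read Terraform or CloudFormation output.

```python
# Hedged sketch of security-as-code: the policy "no SSH open to the
# world" expressed as a check you can run in CI. Rule dicts are a
# made-up stand-in for real infrastructure config.

def violations(rules: list) -> list:
    return [r for r in rules
            if r["port"] == 22 and r["source"] == "0.0.0.0/0"]

firewall = [
    {"name": "web", "port": 443, "source": "0.0.0.0/0"},
    {"name": "ssh-anywhere", "port": 22, "source": "0.0.0.0/0"},
    {"name": "ssh-vpn", "port": 22, "source": "10.8.0.0/16"},
]

bad = violations(firewall)
print([r["name"] for r in bad])  # ['ssh-anywhere']
```

Wire a check like this into the pipeline and `exit(1)` when `bad` is non-empty; now the policy is enforced on every change, not re-discovered in an annual audit.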
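The least-privilege bullet in application terms: deny by default, grant narrowly, and make escalation an explicit, logged act (hence all those tickets). A minimal sketch with an invented permission model:

```python
# Hedged sketch of least privilege: access is denied unless a user was
# explicitly granted the permission. Users and permission strings are
# illustrative, not from any real system.
from functools import wraps

GRANTS = {"alice": {"reports:read"},
          "bob": {"reports:read", "reports:delete"}}

class Denied(Exception):
    pass

def requires(perm: str):
    def deco(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if perm not in GRANTS.get(user, set()):  # default: no access
                raise Denied(f"{user} lacks {perm}")
            return fn(user, *args, **kwargs)
        return wrapper
    return deco

@requires("reports:delete")
def delete_report(user: str, report_id: str) -> str:
    return f"{report_id} deleted by {user}"

print(delete_report("bob", "r-17"))
# delete_report("alice", "r-17") raises Denied -> that's your escalation ticket
```

Yes, alice will open a ticket when she legitimately needs to delete a report. That friction is the feature: every grant is deliberate and auditable.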
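And on the backups bullet: a backup you haven’t restored is a guess, not a backup. Here’s a toy restore drill where “snapshot” and “restore” are simulated with local files so the verification step is concrete; a real drill would restore an actual RDS snapshot or volume image to a scratch environment.

```python
# Hedged sketch of a restore drill: take a "snapshot", restore it to a
# scratch location, and verify the restored copy matches the original
# by checksum. Everything here is simulated with temp files.
import hashlib
import pathlib
import tempfile

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

tmp = pathlib.Path(tempfile.mkdtemp())
primary = tmp / "customers.db"
primary.write_bytes(b"id,name\n1,ada\n2,grace\n")

snapshot = tmp / "customers.db.snap"
snapshot.write_bytes(primary.read_bytes())      # "take a snapshot"

scratch = tmp / "restore-test"
scratch.mkdir()
restored = scratch / "customers.db"
restored.write_bytes(snapshot.read_bytes())     # "restore" to scratch

assert sha256(primary) == sha256(restored)      # verify, don't hope
print("restore drill passed")
```

Run something like this on a schedule and alert when it fails; the date of your last successful drill is the honest answer to “when did you test it last.”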
In closing, security is best addressed when it’s built into the foundation of your business and understood to be a fundamental part of your success. If you can’t say that and mean it, you’ve got problems.
I wrote this quickly and under the influence of a strong quad mocha. If you take issue with anything I said here or just want to reach out to me I’m at firstname.lastname@example.org