One of my favorite ways to eat Oreo cookies is to twist the two halves apart, carefully set the filling aside, eat both chocolate halves, and then slowly enjoy the indulgent filling. Without milk, this is by far the best way to fully indulge in both parts of the cookie. But with a glass of milk the implementation changes. Not only is it harder to dunk the separate pieces, but the actual taste of the whole cookie soaked in milk seems much better than the sum of its parts.
In many organizations, Development and Security are two parts of a cookie. Individually they’re both incredibly important, but too many organizations miss out on the combined experience. Often both sides hold fast to a false sense of superiority that would’ve made Dr. Seuss’ Sneetches proud. In fact, it’s very common to witness clashes between the two teams when it comes to penetration tests or discussing flaws.
When I was first introduced to information security, someone told me that developers focus on finding one way to make something work while security folks focus on finding all the possible ways that could break it. I don’t believe that’s fair. At the end of the day I think we’re all focused on making things better; we just come at it from different perspectives. Developers tend to focus on improving the experience of legitimate users while Security has the goal of preventing the experience of illegitimate users. Two parts of the same cookie you might say.
Consider the recent left-pad debacle, in which a developer unpublished his NPM modules and broke builds across the industry. A plethora of blog posts and forum rants have since come out explaining and justifying the actions of each party. Like any good philosophical flame war, both sides of the debate have good points. But one angle hasn’t been well covered: this whole situation was predicated on the accepted norm of using third-party code, delivered via package managers, within commercial production applications.
Follow along with me for just a moment. This young man, Azer Koçulu, published open source code that was not only used by Fortune 500 companies, but was hosted by a third-party package manager, meaning those commercial projects depended on the continued existence and maintenance of the hosted packages.
In this particular case, Mr. Koçulu simply unpublished the modules. But what if instead he had updated his modules to contain malicious code? What if he had used something like the Browser Exploitation Framework, or something worse that injected actual malware into the pages serving his code? Such changes surely would’ve been detected quickly, but out of 2.5 million downloads in one month, how many would’ve made it into production or development environments before the change was discovered? Combined with a targeted zero-day exploit, imagine the value of a botnet consisting of the developers and clients of some of the largest websites in the world.
Traditionally information security has been boiled down to a triad of key concepts: confidentiality, integrity, and availability. Though that simplistic breakdown is somewhat antiquated, it still serves the purpose of providing a reasonable understanding of the goals of security. Infosec professionals are focused on improving these key areas rather than just saying “No” for the sake of some miserly power-trip. Unfortunately we don’t always do a good job of communicating in those terms.
This missing left-pad incident has demonstrated a situation in which developers across the world are suddenly reconsidering the availability impact of depending on a third-party package manager like NPM for production code. The topics prompted by this event won’t go away anytime soon. But this experience should also cause us to question the impact on our systems’ integrity. If we allow third-party code to be updated and integrated into an application without re-vetting the updates, how can we trust the integrity of the application or its data? (Spoiler: we can’t.)
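One practical way to reclaim some of that integrity is to pin a vetted copy of each dependency and verify a checksum before every build, rather than accepting whatever the registry happens to serve. Here’s a minimal sketch in shell; the vendor/ layout, file names, and version number are illustrative, not taken from the actual incident:

```shell
#!/bin/sh
# Sketch: record a checksum when a dependency is first vetted, then
# refuse to build if the artifact ever changes underneath us.
# The vendor/ directory and file names are illustrative.
mkdir -p vendor
printf 'vetted tarball contents\n' > vendor/left-pad-1.0.0.tgz  # stand-in for a reviewed tarball

# At vetting time: record the checksum of the reviewed artifact.
sha256sum vendor/left-pad-1.0.0.tgz > vendor/checksums.txt

# At build time: verify before integrating the dependency.
if sha256sum -c vendor/checksums.txt >/dev/null 2>&1; then
  echo "integrity OK"
else
  echo "integrity check FAILED" >&2
  exit 1
fi
```

The same idea underlies lockfiles with integrity hashes: an updated package only enters the build after someone has re-vetted it, instead of flowing straight from the registry into production.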
The same could be said for app stores such as Google’s Play and Apple’s App Store. In each case, some degree of security review may take place, but there’s no way they can accurately review every line of each application for security issues. Many of us may be mindful of what applications we install, but we just assume that application updates are supposed to be safe.
Even if we had reason to trust certain developers or modules, we have no way to know that each update actually comes from the original developer. The cyber attack division of a nation-state could easily look for popular open source modules or applications that can be purchased from the owners. In some cases, overworked owners may even just hand over the reins to a project if someone appears genuinely interested in seeing it maintained. Over time a very powerful arsenal could be built without anyone ever realizing it. When necessary, those packages could be weaponized to take advantage of our regimented patching schedules. By the time anyone realized what had happened, it would be too late.
I’ll take off the conspiracy theory hat for a bit. I think it’s safe to assume that most developers are not the unknowing pawns of some malicious nation-state. But it’s important to realize that most security vulnerabilities are found by assuming the worst-case rather than the best-case.
So what do we do about it? To a large degree, there’s not much we can do. Our entire system has been built, literally from the hardware up, on the idea of trusting others. That’s bad enough for hardware, but when we move to software and its collaborative nature, it becomes infeasible to even consider standardized security testing for every application patch or update, especially in the open source world.
For application updates from third-party packages, I think we just have to recognize the threat and prepare appropriately. For example, it should be assumed that a sufficiently devoted adversary is going to compromise a system. While we should take reasonable steps to prevent that from happening, our goal should be to monitor, detect, and respond to the intrusion before serious damage can be done. Building those probabilities into our threat models yields more practical business risk assessments.
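One way to put that monitor-and-detect posture into practice is a build step that flags any drift between the currently resolved dependencies and the last vetted baseline. A hypothetical sketch, assuming the two lists are plain text files (the file names and package versions here are made up; in practice the current list would be generated from the real dependency tree on each build):

```shell
#!/bin/sh
# Sketch: alert when resolved dependencies drift from the vetted baseline.
# Both files are illustrative stand-ins for generated dependency lists.
printf 'left-pad@1.0.0\nrequest@2.69.0\n' > deps-baseline.txt  # vetted at review time
printf 'left-pad@1.0.1\nrequest@2.69.0\n' > deps-current.txt   # resolved this build

if diff -u deps-baseline.txt deps-current.txt; then
  echo "dependencies unchanged"
else
  echo "ALERT: dependency drift detected; re-vet before release"
fi
```

This doesn’t prevent a malicious update, but it turns a silent change into a visible event, which is exactly the detect-and-respond capability the threat model calls for.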
I don’t see third-party package managers, or the associated risks, going away anytime soon. But we can all do a better job of understanding where our packages come from and whom we’re trusting.
Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at firstname.lastname@example.org, on Twitter @sweaney, or visit the Secure Ideas – ProfessionallyEvil site for services provided.