Open Source Vs. Proprietary Software
Every day, the supply chain security failure in SolarWinds' proprietary Orion network monitoring product grows bigger. Microsoft, itself a victim, reports that 40 of its customers installed trojanized versions of Orion. Victims include the U.S. Department of Energy, the National Nuclear Security Administration, at least several state governments, and many others.
How bad is it? The Cybersecurity and Infrastructure Security Agency (CISA) said the hacks posed a "grave risk" to U.S. governments at all levels. That's how bad.
What caught my attention, though, is that SolarWinds has been anti-open source for years. Cloud-native computing, from Docker and Kubernetes to the last little program on the Cloud Native Computing Foundation's (CNCF) Cloud Native Interactive Landscape, is open source.
Ironically, SolarWinds claimed open source software was untrustworthy because anyone can infect it with malicious code. A SolarWinds writer claimed: security "risk is far less when it comes to proprietary software. Due to the nature of open-source software, allowing anyone to update the code, the risk of downloading malicious code is much higher. One source referred to using open-source software as 'eating from a dirty fork.' When you reach in the drawer for a clean fork, you could be pulling out a dirty utensil. That analogy is right on the money."
SolarWinds followed this up by remarking in another blog that the whole foundation of cloud-native computing — containers and container orchestration — isn't trustworthy either. Omar Rafik, SolarWinds Senior Manager of Federal Sales Engineering, wrote, "containers are designed in a way that hampers visibility" and "Visibility becomes particularly problematic when using orchestration tools like Docker Swarm or Kubernetes to manage connections between different containers because it can be difficult to tell what is happening."
Trust us. We already know security is a challenge in cloud-native computing. We work on locking down cloud-native computing every day.
Nonetheless, open source isn't what's inherently insecure here. Proprietary software — a black box where you can never know what's really going on — is now, always has been, and always will be more of a security problem.
I would no more trust anything mission critical to proprietary software than I would drive a car at night without lights or a fastened seat belt. That’s why I’m writing this on Linux Mint with LibreOffice rather than Windows and Microsoft Word. That’s why the internet, cloud-native computing, and the cloud — yes, even Microsoft Azure — use Linux and open source.
Mind you, there's nothing magical about open-source software. People who assume a miracle happens when they use open source and that they're somehow perfectly safe — I'm looking at you, Equifax — get what they deserve when they don't keep their software updated. In that case, the neglected software was Apache Struts.
In still another infamous case, a missed bounds check on a length variable in OpenSSL led to the Heartbleed security hole. I called it open source's most significant failure to date. I wasn't wrong.
So, why with all that history, am I saying open-source software is inherently more secure? Because it is.
A fundamental open-source principle is that by bringing many eyeballs to programs, more errors will be caught. That doesn't mean all mistakes are caught, just that far more are caught than by a single proprietary company's developers.
Eric S. Raymond, one of open source's founders, famously summed this up: "Given enough eyeballs, all bugs are shallow." He called it "Linus's Law." It has worked well. Just consider the sheer number of serious Windows bugs — does a month go by without one? — compared to those of Linux.
There are many ways to find those open source mistakes. You can, of course, do it yourself. The code, after all, is open. Not sure what's new in your software supply chain's programs? You can use Red Hat's Release Monitoring or Repology. The nvchecker program is also useful. Or, you can look to Synopsys's Black Duck or Sonatype Nexus Lifecycle for a third-party code analysis tool.
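As a taste of what tracking your supply chain looks like in practice, here is a small configuration sketch in the style of nvchecker, which polls upstream sources for new releases. The section names and keys follow nvchecker's documented TOML format as I understand it, and the projects listed are purely illustrative — check the tool's own documentation before relying on any particular key.

```toml
# Hedged sketch of an nvchecker-style configuration that watches
# upstream releases of two example dependencies via GitHub.
# Key names assume nvchecker 2.x's documented TOML format.

[openssl]
source = "github"
github = "openssl/openssl"
use_max_tag = true          # track the newest git tag

[curl]
source = "github"
github = "curl/curl"
use_latest_release = true   # track formal GitHub releases instead
```

Run periodically (for example from cron or CI), a checker like this turns "did any of my dependencies ship a security fix?" from a manual chore into an automated diff.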
The Linux Foundation has also been working on armoring the open source software chain with the Open Source Security Foundation (OpenSSF). This cross-industry group brings open source leaders together to build a broader security community. It combines efforts from the Core Infrastructure Initiative (CII) and GitHub's Open Source Security Coalition with the work of security-savvy companies such as GitHub, GitLab, Google, IBM, Microsoft, NCC Group, the OWASP Foundation, Red Hat, and VMware.
The goal of OpenSSF, according to Mark Russinovich, Microsoft Azure's CTO, is to help developers better understand the security threats that exist in the open source software ecosystem and how those threats impact specific open source projects.
To help harden open source software, the Foundation has four goals: 1) help developers spot security problems; 2) provide the best security tools for open source developers; 3) give them best-practice recommendations; and 4) create an open source software ecosystem where the time to fix a vulnerability and deploy that fix across the ecosystem is measured in minutes, not months.
In short, proprietary software companies, like SolarWinds, are still making colossal security blunders that stay hidden from users until the damage is done. At the same time, open-source programmers and their allies are continuing to make their programs ever more secure, and in the open, so that everyone benefits.
Steven J. Vaughan-Nichols, TheNewStack
The Linux Foundation, CNCF, GitLab, Red Hat, Sonatype, Synopsys, and VMware are sponsors of The New Stack.