These days, it’s natural for executives charged with operating and protecting enterprise IT infrastructure to feel a little overwhelmed. Millions more employees are now working from home, testing corporate VPNs and software stacks in unprecedented ways. Meanwhile, adversaries are leaning into this moment, hoping to take advantage of chaos to catch enterprises unawares.
But determining where and how your infrastructure is vulnerable, and to what degree, can't be a guessing game. You need data to drive your decisions on what to remediate, and when.
Vulnerability scanners tell us about vulns affecting specific assets or applications. But until now, we have lacked an understanding of the relative risk that running specific platforms can introduce to an organization. Which platforms are responsible for the greatest volume of high-risk vulnerabilities? And which platforms make it easiest for their customers to remediate or eliminate security risk quickly?
Understanding where your platforms fall in this continuum can help you better decide how to prioritize asset-based risks in your particular environment.
Now there’s research aimed at providing those insights. The fifth volume of our popular Prioritization to Prediction (P2P) series, Prioritization to Prediction, Volume 5: In Search of Assets at Risk, quantifies, for the first time, the comparative risk surface of using assets based on various platforms. Prepared by the Cyentia Institute, it presents analysis based on vulnerability data culled from more than 9 million active assets across nearly 450 organizations.
Focusing on assets makes sense because understanding risk through an asset-centric model gives you a more complete picture. The data set for this report came from organizations managing a wide range of assets, from hundreds to millions. But however large your asset population, it helps to know which among them pose the greatest risk. What this research reveals is that the answer depends on many factors, from the density of high-risk vulnerabilities on a platform to the fix rates we're seeing in the wild.
Here’s a sampling of insights plucked from the pages of the latest P2P report.
For Microsoft shops, there’s bad news and good news.
Overall, half of the firms represented in this data set have an asset mix consisting of at least 85% Windows-based systems. And since Windows platforms typically have 119 vulnerabilities detected in any given month, four times more than macOS, the next highest on the list, this could translate to greater potential exposure compared with enterprises that rely more heavily on other platforms.
But the good news is that Microsoft's long-established habit of automating and pushing patches for known vulnerabilities means that Windows vulns are, on the whole, dealt with more quickly than those on other platforms. For instance, the half-life of vulnerabilities on a Windows system is 36 days. For network appliances, the half-life is 369 days.
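To see what those half-life figures mean in practice, here's a minimal sketch assuming open vulnerabilities decay exponentially with the reported half-lives applied uniformly. (This is illustrative only; the P2P report's actual survival-analysis methodology may differ.)

```python
# Exponential-decay model: fraction of vulns still open after a given number
# of days, using the half-lives reported in P2P Volume 5.
# Illustrative assumption, not the report's exact methodology.

def fraction_remaining(days: float, half_life_days: float) -> float:
    """Fraction of vulnerabilities still open after `days`, given a half-life."""
    return 0.5 ** (days / half_life_days)

WINDOWS_HALF_LIFE = 36      # days, per the report
APPLIANCE_HALF_LIFE = 369   # days, per the report

for days in (30, 90, 180):
    w = fraction_remaining(days, WINDOWS_HALF_LIFE)
    a = fraction_remaining(days, APPLIANCE_HALF_LIFE)
    print(f"After {days:3d} days: Windows ~{w:.0%} open, "
          f"network appliances ~{a:.0%} open")
```

Under this simple model, most Windows vulns are closed within a couple of months, while the typical network-appliance vuln is still open nearly a year later.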
This indicates a few things, but one is clear: Vendors and platform providers are very influential in determining how much of a security risk their products pose to your infrastructure.
It’s easier to fix native software.
Enterprises stomp out 68% of Microsoft vulnerabilities on Windows-based assets within the first month, while only 30% of non-Microsoft vulnerabilities on those same assets are fixed in that timeframe. This is likely because Microsoft has made great strides with its Patch Tuesday cadence and with automating the remediation process. But it also underscores the risk that bloatware represents to your security posture and attack surface.
Simply because an asset has fewer vulnerabilities doesn’t mean it poses less of a risk.
This simplified graphic from our latest P2P report provides a kind of "heat map" view of how each platform performs across seven key VM metrics. White is lower, red is higher. So when looking at Vulnerability Density, Microsoft platforms turn up a lot of vulns compared to, say, Linux/Unix platforms. But look at how those platforms compare on metrics such as Vulnerability Half-Life or Fix Rate. For IT and security teams, this information raises a question: What's more important to your particular organization and risk reduction strategy? Choosing a platform simply because it has fewer vulns may, in many cases, be shortsighted and unwise.
No matter what asset platforms you have, risk-based vulnerability management is a must.
Some 70% of Windows systems, 40% of Linux/Unix systems, and 30% of network appliances have at least one open vulnerability with known exploits. That means opportunities abound for attackers, who need just one foothold to infiltrate your network. It also means that by focusing on remediating the vulns that pose the highest risk to your enterprise, you'll do a better and more efficient job of protecting those assets and the data that resides on them.
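The core of risk-based vulnerability management is a simple idea: triage so that vulns with known exploits get fixed before everything else. Here's a minimal sketch of that ordering; the field names and CVE placeholders are illustrative, not drawn from any Kenna Security product or API.

```python
# Minimal sketch of risk-based prioritization: vulns with known exploits
# come first, then higher-severity vulns within each group.
# Data model and values are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float          # base severity score, 0-10
    exploit_known: bool  # public exploit code or observed exploitation

def remediation_order(vulns: list[Vuln]) -> list[Vuln]:
    """Sort so exploited vulns lead, descending CVSS within each group."""
    return sorted(vulns, key=lambda v: (not v.exploit_known, -v.cvss))

backlog = [
    Vuln("CVE-A", cvss=9.8, exploit_known=False),
    Vuln("CVE-B", cvss=7.5, exploit_known=True),
    Vuln("CVE-C", cvss=5.3, exploit_known=True),
]
for v in remediation_order(backlog):
    print(v.cve_id, v.cvss, v.exploit_known)
```

Note that the hypothetical CVE-B outranks the higher-severity CVE-A here: an exploited medium beats an unexploited critical under a risk-based lens, which is exactly the reordering a pure CVSS-driven program would miss.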
Vulnerability management seems simple on the outside: “Scan the vulnerabilities, fix the vulnerabilities.” But anyone actually working in a VM program knows that description is a laughable oversimplification. The process of finding and fixing vulnerabilities varies widely based on the type of asset; desktops have a much different support structure than network appliances, Linux servers are managed differently than Windows servers, and so on.
That’s why studying the relative risk of various asset platforms is so helpful. And these days, we can all use a little extra help from our friends—and the research.
Download your copy of Prioritization to Prediction, Volume 5: In Search of Assets at Risk today. You can also join me for a discussion of the results with Cyentia's Jay Jacobs at 11am Pacific/2pm Eastern, or on-demand if you miss the live event.
The post For CISOs Trying to Reduce Risk, New Research Reveals the Value of Focusing on Assets appeared first on Kenna Security.