
Reviewing the Secunia 2013 Vulnerability Review


On February 26, Secunia released their annual vulnerability report (link to report PDF) summarizing the computer security vulnerabilities they cataloged over the 2013 calendar year. For those not familiar with their vulnerability database (VDB), we consider them a ‘specialty’ VDB rather than a ‘comprehensive’ VDB (e.g. OSVDB, X-Force). They may disagree on that point, but it is a simple matter of numbers that leads us to designate them as such. That also tends to explain why some of our conclusions and numbers are considerably different from, and more complete than, theirs.

In past years this type of blog post would not need a disclaimer, but it does now. OSVDB, while the website is mostly open to the public, is also the foundation of the VulnDB offering from our commercial partner and sponsor Risk Based Security (RBS). As such, we are now a direct competitor to Secunia, so any criticism leveled at them or their report may be biased. On the other hand, many people know that I am consistently critical of just about any vulnerability statistics published. Poor vulnerability statistics have plagued our industry for a long time; so much so that Steven Christey from CVE and I gave a presentation on the topic last year at the Black Hat Briefings in Las Vegas.

One of the most important messages and take-aways from that talk is that all vulnerability statistics should be disclaimed and explained in advance. That means that a vulnerability report should start out by explaining where the data came from, applicable definitions, and the methodology of generating the statistics. This puts the subsequent statistics in context to better explain and disclaim them, as a level of bias enters any set of vulnerability statistics. Rather than follow the Secunia report in the order they publish them, I feel it is important to skip to the very end first. For that is where they finally explain their methodology to some degree, which is absolutely critical in understanding how their statistics were derived.

On page 16 (out of 20) of the report, in the Appendix “Secunia Vulnerability Tracking Process”, Secunia qualifies their methodology for counting vulnerabilities.

A vulnerability count is added to each Secunia Advisory to indicate the number of vulnerabilities covered by the Secunia Advisory. Using this count for statistical purposes is more accurate than counting CVE identifiers. Using vulnerability counts is, however, also not ideal as this is assigned per advisory. This means that one advisory may cover multiple products, but multiple advisories may also cover the same vulnerabilities in the same code-base shared across different programs and even different vendors.

First, the ‘vulnerability count’ referenced is not part of a public Secunia advisory, so their results cannot realistically be duplicated. The next few lines are important, as they invalidate the Secunia data set for drawing any real conclusions about the state of vulnerabilities. Not only can one advisory cover multiple products, but multiple advisories can cover the same single vulnerability, just across different major versions. This high rate of duplicates and lack of unique identifiers makes the data set too convoluted for meaningful statistics.

CVE has become a de facto industry standard used to uniquely identify vulnerabilities which have achieved wide acceptance in the security industry.

This is interesting to us because Secunia has historically not been fully mapped to CVE. Meaning, there are thousands of vulnerabilities that CVE has cataloged that Secunia has not included. CVE is a de facto industry standard, but also a drastically incomplete one. At a bare minimum, Secunia should have a 100% mapping to it, and they do not. This further calls into question any statistics generated off this data set, when they knowingly ignore such a large number of vulnerabilities.

From Remote
From remote describes other vulnerabilities where the attacker is not required to have access to the system or a local network in order to exploit the vulnerability. This category covers services that are acceptable to be exposed and reachable to the Internet (e.g. HTTP, HTTPS, SMTP). It also covers client applications used on the Internet and certain vulnerabilities where it is reasonable to assume that a security conscious user can be tricked into performing certain actions.

Classification of where a vulnerability is exploited from is important, as it heavily factors into criticality, whether via common usage or through scoring systems such as CVSS. In their methodology, we see that Secunia does not make a distinction between ‘remote’ and ‘context-dependent’ (called ‘user-assisted’ by some). This means that the need for user interaction is not factored into an issue; ultimately, scoring and statistics are based solely on network, local (adjacent) network, or local vectors.
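To illustrate why that distinction matters, here is a minimal sketch (the field names are hypothetical, not Secunia's actual schema) of a VDB tracking exploit location and user interaction as independent axes, so that a browser file-format bug is not classified like an unauthenticated network service bug:

```python
from dataclasses import dataclass

@dataclass
class VulnVector:
    location: str           # where the attacker must be: "remote", "adjacent", or "local"
    user_interaction: bool  # must a victim open a file, click a link, etc.?

    def category(self) -> str:
        """Collapse the two axes into the labels discussed above."""
        if self.location == "remote" and self.user_interaction:
            return "context-dependent"  # a.k.a. 'user-assisted'
        return self.location

# A PDF parsing flaw and a listening-service flaw are both "remote" to
# Secunia, but only one requires tricking a user:
print(VulnVector("remote", True).category())   # context-dependent
print(VulnVector("remote", False).category())  # remote
```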

Secunia further breaks down their classification in the appendix under “Secunia Vulnerability Criticality Classification”. However, it is important to note that their breakdown does not really jibe with any other scoring system. Looking past the flaw of using the word ‘critical’ in all five classifications, the distinction between ‘Extremely Critical’ and ‘Highly Critical’ is minor; based on their descriptions, it appears to rest solely on whether Secunia is aware of exploit code existing for the issue. This mindset is straight out of the mid ’90s with regard to threat modeling. In today’s landscape, if details are available about a vulnerability, it is a given that a skilled attacker can write or purchase an exploit for a majority of disclosed issues within a few days. In many cases, even when details aren’t public but a patch is, that is enough to reliably reverse-engineer the fix and leverage it for working exploit code in a short amount of time. Finally, neither designation distinguishes whether user interaction is required; in each case, it may or may not be. In reality, I imagine that the difference between ‘Extremely’ and ‘Highly’ is supposed to be based on whether exploitation is happening in the wild at the time of disclosure (i.e. zero-day).

Now that we have determined their statistics cannot be reproduced, use a flawed methodology, and are based on drastically incomplete data, let’s examine their conclusions anyway!


The blog announcing the report is titled “1,208 vulnerabilities in the 50 most popular programs – 76% from third-party programs” and immediately calls into question their perspective. Reading down a bit, we find out what they mean by “third-party programs”:

“And the findings in the Secunia Vulnerability Review 2014 support that, once again, the biggest vulnerability threat to corporate and private security comes from third-party – i.e. non-Microsoft – programs.”

Unfortunately, this is not how most in our industry define a third-party program. At a higher, more general level, a third-party software component is “a reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform” (Wikipedia). In the world of VDBs, we frequently refer to a third-party component as a ‘library’ that is integrated into a bigger package. For example, Adobe Reader 10, found on many desktop computers, is built not only on Adobe’s own code but on as many as 212 other pieces of software. The notion that “non-Microsoft” software is “third-party” is, for lack of a better word, very weird, and shows the mindset and perspective of Secunia. It completely discounts users of Apple, Linux, virtualization products (e.g. Oracle, VMware, Citrix), and mobile devices, among others. Such a Microsoft-centric report should clearly be labeled as such, not as a general vulnerability report.

In the Top 50 programs, a total of 1,208 vulnerabilities were discovered in 2013. Third-party programs were responsible for 76% of those vulnerabilities, although these programs only account for 34% of the 50 most popular programs on private PCs. The share of Microsoft programs (including the Windows 7 operating system) in the Top 50 is a prominent 33 products – 66%. Even so, Microsoft programs are only responsible for 24% of the vulnerabilities in the Top 50 programs in 2013.

This is apparently aiming for the ‘most convoluted summary’ award. I really can’t begin to describe how poorly it comes across. If you want to know the ‘Top 50 programs’, you have to read down to page 18 of the PDF and then resolve a lot of questions, some of which will be touched on below. When you read the list and see that several ‘Microsoft’ programs actually had 0 vulnerabilities, it calls into question the “prominent 33 products” and shows how the 66% figure is incorrectly weighted.

“However, another very important security factor is how easy it is to update Microsoft programs compared to third-party programs.” — Secunia CTO, Morten R. Stengaard.

When debunking vulnerability statistics, I tend to focus on the actual numbers. This is a case where I simply have to branch out and question how a ‘CTO’ could make this absurd statement. In one sentence, he implies that updating Microsoft programs is easy, while third-party programs (i.e. non-Microsoft programs, per their definition) are not. Apparently Mr. Stengaard does not use Oracle Java, Adobe Flash Player, Adobe AIR, Adobe Reader, Mozilla Firefox, Mozilla Thunderbird, Google Chrome, Opera, or a wide range of other non-Microsoft desktop software, all of which have the same one-click patching/upgrade ability. Either Mr. Stengaard is not qualified to speak on this topic, or he is being extremely disingenuous in his characterization of non-Microsoft products to support this report and their patch management business model. If he means that patching Windows is easier on an enterprise scale (e.g. via SCCM or WSUS), that is frequently true, but such a qualification should be made clear.

This is a case where using a valid and accepted definition of ‘third-party programs’ (e.g. a computing library) would make the quote more reasonable. Trying to upgrade ffmpeg, libav, or WebKit in the context of the programs that rely on them as libraries is not something the average user can do. The problem is further compounded when portions of desktop software are used as a library in another program, such as AutoCAD, which appears in the Adobe Reader third-party license document linked above. However, these are the kinds of distinctions that any VDB should be fully aware of, and be able to disclaim and explain readily.


Moving on to the actual ‘Secunia Vulnerability Review 2014’ report, the very first line opens up a huge can of worms, as the number is incorrect and entirely misleading. The flawed methodology used to generate the statistic cascades down into a wide variety of other incorrect conclusions.

The absolute number of vulnerabilities detected was 13,073, discovered in 2,289 products from 539 vendors.

It is clear that a significant number of vulnerabilities are being counted multiple times. While this number is generated from Secunia’s internal ‘vulnerability count’ associated with each advisory, they miss the most obvious flaw: many of their advisories cover the exact same vulnerability. Rather than abstracting so that one advisory is updated to reflect additional impacted products, Secunia releases additional advisories. This is immediately visible in cases where a protocol is found to have a vulnerability, such as the “TLS / DTLS Protocol CBC-mode Ciphersuite Timing Analysis Plaintext Recovery Cryptanalysis Attack” (OSVDB 89848). This one vulnerability impacts any product that implements the protocol, so it is expected to be widespread. As such, that one vulnerability tracks to 175 different Secunia advisories. This is not a case where 175 different vendors coded the same vulnerability, or where the issue is distinct in their products. This is a case of a handful of base products (e.g. OpenSSL, GnuTLS, PolarSSL) implementing the flawed protocol, and hundreds of vendors bundling that software as part of their own.
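For anyone aggregating advisory feeds, the remedy is to count unique vulnerability identifiers rather than summing per-advisory counts. A minimal sketch of the difference (the advisory feed here is made up for illustration; CVE-2013-0169 is the actual CVE for the TLS/DTLS issue above):

```python
from typing import NamedTuple

class Advisory(NamedTuple):
    advisory_id: str
    cves: frozenset  # CVE IDs referenced by the advisory

# Hypothetical feed: three advisories, one underlying flaw shipped in
# several code bases (OpenSSL, GnuTLS, a vendor appliance).
feed = [
    Advisory("SA-1", frozenset({"CVE-2013-0169"})),
    Advisory("SA-2", frozenset({"CVE-2013-0169"})),
    Advisory("SA-3", frozenset({"CVE-2013-0169"})),
]

inflated = sum(len(a.cves) for a in feed)            # per-advisory counting: 3
unique = len(set().union(*(a.cves for a in feed)))   # deduplicated counting: 1

print(inflated, unique)  # 3 1
```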

While that is an extreme example, the problem is certainly front-and-center due to their frequent multi-advisory coverage of the same issue. Consider that one OpenSSL vulnerability may be covered in 11 Secunia advisories. Then look at other products that are frequently used as libraries or found on multiple Linux distributions, each of which get their own advisory. Below is a quick chart showing examples of a single vulnerability in one of several products, along with the number of Secunia advisories that references that one vulnerability:

Example w/ 1 Vuln             | # of Secunia Adv
CVE-2013-6367 (Linux Kernel)  | 15
CVE-2013-6644 (Google Chrome) | 5
CVE-2013-5609 (Mozilla)       | 14
CVE-2013-4408 (Samba)         | 10
CVE-2013-6415 (Ruby on Rails) | 10
CVE-2014-0368 (Oracle Java)   | 27

This problem is further compounded when you consider the number of vulnerabilities in those products in 2013, where each one received multiple Secunia advisories. This table shows the products from above, and the number of unique vulnerabilities as tracked by OSVDB for that product in 2013 that had at least one associated Secunia advisory:

Software      | # of Vulns in product in 2013 w/ Secunia Ref
Linux Kernel  | 131
Google Chrome | 267
Mozilla       | 148
Samba         | 10
Ruby on Rails | 14
Oracle Java   | 194

It is easy to see how Secunia quickly jumped to 13,073 vulnerabilities while issuing only 3,327 advisories in 2013. If there is any doubt about vulnerability count inflation, consider these four Secunia advisories that cover the same set of vulnerabilities, each titled “WebSphere Application Server Multiple Java Vulnerabilities”. Secunia created four advisories for the same vulnerabilities simply to abstract based on the major versions affected, as seen in this table:

Secunia Advisory | Affected Versions
56778            | reported in versions 8.5.0.0 through 8.5.5.1
56852            | reported in versions 8.0.0.0 through 8.0.0.8
56891            | reported in versions 7.0.0.0 through 7.0.0.31
56897            | reported in versions 6.1.0.0 through 6.1.0.47
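A quick sketch of the inflation this abstraction causes (the per-advisory counts used here are our estimates, discussed below, not published Secunia figures):

```python
# Four Secunia advisories covering the same WebSphere Java patch set,
# differing only by major version. Counts are estimates.
per_advisory_counts = {"56778": 25, "56852": 25, "56891": 25, "56897": 27}

inflated_total = sum(per_advisory_counts.values())  # what per-advisory counting yields
unique_vulns = 27  # the same underlying Java vulnerabilities, counted once

print(inflated_total, unique_vulns)  # 102 27
```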

The internal ‘vulnerability counts’ for these advisories are very likely 25, 25, 25, and 27, adding up to 102. Applied against IBM, 27 vulnerabilities are greatly inflated and counted as 102 instead. Then consider that IBM has several hundred products that use Java, OpenSSL, and other common software. It is easy to see how Secunia could jump to erroneous conclusions:

The 32% year-on-year increase in the total number of vulnerabilities from 2012 to 2013 is mainly due to a vulnerability increase in IBM products of 442% (from 772 vulnerabilities in 2012 to 4,181 in 2013).

This unfair and potentially libelous statement was picked up by InformationWeek in an article titled “IBM Software Vulnerabilities Spiked In 2013”, which Secunia was quick to tweet about.


The next set of statistics is convoluted on the surface, but even more confusing when you read the details and explanations for how they were derived:

Numbers – Top 50 portfolio
The number of vulnerabilities in the Top 50 portfolio was 1,208, discovered in 27 products from 7 vendors plus the most used operating system, Microsoft Windows 7.

To assess how exposed endpoints are, we analyze the types of products typically found on an endpoint. Throughout 2013, anonymous data has been gathered from scans of the millions of private computers which have the Secunia Personal Software Inspector (PSI) installed. Secunia data shows that the computer of a typical PSI user has an average of 75 programs installed on it. Naturally, there are country- and region-based variations regarding which programs are installed. Therefore, for the sake of clarity, we chose to focus on a representative portfolio of the 50 most common products found on a typical computer and the most used operating system, and analyze the state of this portfolio and operating system throughout the course of 2013. These 50 programs are comprised of 33 Microsoft programs and 17 non-Microsoft (third-party) programs.

Reading down to page 18 of the full report, you see the table listing the “Top 50” software installed, as determined by their PSI software. The list includes a wide variety of software that are either components of Windows (meaning they come installed by default but show up in the “Programs” list, e.g. Microsoft Visual C++ Redistributable) or, in a few cases, third-party software (e.g. Google Toolbar), many of which have 0 associated vulnerabilities. In other cases they include product driver support tools (e.g. Realtek AC 97 Update and Remove Driver Tool) or ActiveX components that are generally not installed via traditional means (e.g. comdlg32 ActiveX Control). With approximately half of the Top 50 software having vulnerabilities, and with different types of software components mixed together, the summary put forth by Secunia is misleading. Since they include Google Chrome on the list, by their own logic they should also include WebKit, a third-party library wrapped into Chrome, just as they include ‘Microsoft PowerPoint Viewer’ (33), which is a component of ‘Microsoft PowerPoint’ (14) and does not install separately.

Perhaps the most disturbing thing about this Top 50 summary is that Secunia only counts 7 vendors in their list. Reading through the list carefully, you see that there are actually 10 vendors represented: Microsoft, Adobe, Oracle, Mozilla, Google, Realtek, Apple, Piriform (CCleaner), VideoLAN, and Flexera (InstallShield). This seriously calls into question any conclusions put forth by Secunia regarding their Top 50 list and challenges their convoluted and irreproducible methodology.


Rather than offer a rebuttal line by line for the rest of the report and blog, we’ll just look at some of the included statistics that are questionable, wrong, or just further highlight that Secunia has missed some vulnerabilities.

In 2013, 727 vulnerabilities were discovered in the 5 most popular browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Opera and Safari.

By our count, there were at least 756 vulnerabilities in these browsers: Google Chrome (295), Mozilla Firefox (155), Internet Explorer (138), Opera (9), Apple Safari (8 on desktop, 4 on mobile), and WebKit (a component of Chrome and Safari, 147). The Opera count is likely far too low, though. In July 2013, Opera released its first browser based on Blink, so it has very likely been affected by the vast majority of the Blink vulnerability fixes made by Google. However, Opera is not very good at clearly reporting vulnerabilities, which likely accounts for the low count that both we and Secunia have; something they should have clearly disclaimed.

In 2013, 70 vulnerabilities were discovered in the 5 most popular PDF readers: Adobe Reader, Foxit Reader, PDF-XChange Viewer, Sumatra PDF and Nitro PDF Reader.

By our count, there were at least 76 vulnerabilities in these PDF readers: Adobe Reader (69), Foxit (2), PDF-XChange (1), Sumatra (0), and Nitro (4).

The actual vulnerability count in Microsoft programs was 192 in 2013; 128.6% higher than in 2012.

Based on our data, there were 363 vulnerabilities in Microsoft software in 2013, not 192. This is up from 207 in 2012, giving us a 75.4% increase.
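For clarity, that figure comes from the standard year-over-year calculation:

\[
\frac{363 - 207}{207} = \frac{156}{207} \approx 0.754 \quad\Rightarrow\quad \text{a } 75.4\% \text{ increase}
\]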

As in 2012, not many zero-day vulnerabilities were identified in 2013: 10 in total in the Top 50 software portfolio, and 14 in All products.
A zero-day vulnerability is a vulnerability that is actively exploited by hackers before it is publicly known, and before the vendor has published a patch for it.

By that definition, which we share, we tracked 72 vulnerabilities that were “discovered in the wild” in 2013. To be fair, our number is considerably higher because we actually track mobile vulnerabilities, something Secunia typically ignores. More curious is that, based on a cursory search, we find 17 of their advisories that qualify as 0-day by their definition, suggesting they do not have a method for accurately counting them: SA51820 (1), SA52064 (1), SA52116 (2), SA52196 (2), SA52374 (2), SA52451 (1), SA53314 (1), SA54060 (1), SA54274 (1), SA54884 (2), SA55584 (1), SA55611 (1), and SA55809 (1).

Find out how quickly software vendors issue fixes – so-called patches – when vulnerabilities are discovered in All products.

This comes from their “Time to Patch for all products” summary page. The statement seems pretty clear: how fast do vendors issue fixes when vulnerabilities are discovered? However, Secunia does not track that specifically! The more appropriate question their data can answer is “When are patches available at, or after, the time of public disclosure?” These are two very different metrics. The information on this page is generated using PSI/CSI statistics, so if a vulnerability is disclosed and a fix is already available at that time, it counts as patched within 24 hours. This doesn’t factor in that the vendor may have spent months fixing the issue before the disclosure and patch.
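The distinction matters for any patch-latency statistic. A minimal sketch of the two metrics (the dates and record layout are hypothetical, purely for illustration):

```python
from datetime import date

# Hypothetical records: (flaw reported to vendor, publicly disclosed, patch available)
records = [
    (date(2013, 1, 10), date(2013, 6, 1), date(2013, 6, 1)),
    (date(2013, 3, 5),  date(2013, 9, 1), date(2013, 9, 20)),
]

for reported, disclosed, patched in records:
    # The metric Secunia's wording implies: how long the vendor took to fix it.
    vendor_fix_time = (patched - reported).days
    # The metric their data actually supports: patch lag after public disclosure.
    exposure_window = (patched - disclosed).days
    print(vendor_fix_time, exposure_window)

# First record: 142 days for the vendor to fix, yet 0 days of public exposure,
# so it would be counted as patched "within 24 hours".
```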


In conclusion, while we appreciate companies sharing vulnerability intelligence, the Secunia 2013 vulnerability report is ultimately fluff that provides no benefit to organizations. The flawed methodology, and their inability to parse their own data, means the conclusions cannot be relied upon for making business decisions. When generating vulnerability statistics, some degree of bias will always be present. It is absolutely critical that your vulnerability aggregation methodology be clearly explained, so that results are qualified and carry more meaning.
