Is Open-Source Really Safer?

Security problems have given Windows a bad rap, but is open-source really any better? Our expert examines both sides.

May 4, 2004

The usually simmering debate about open-source versus closed-source recently boiled over, following the leak of Microsoft Windows source code on the Internet. And it boiled over here too. After I wrote a column for one of PC Magazine's sister sites about the Windows source code leak and what it might reveal about the value of closed-source code as a security technique (www.eweek.com/article2/0,4149,1527194,00.asp), 95 percent of the responses said that I didn't get the point: Open-source, being open, gets a better code review. Anyone can get the source, look at it, and find problems in it.

Inherent in this argument is the assumption that closed-source projects don't get code reviews, or at best they get inferior ones. In fact, there's no reason to believe that closed-source companies can't do good reviews—and not a lot of reason to assume that open-source projects get the scrutiny people think they get. Moreover, there's no official system for reviewing open-source code for security problems.

Unquestionably, a lot of open-source code does get checked. Recently, however, an attempt to organize these reviews formally, a project called Sardonix, essentially failed: nobody followed through with the work, and the funding dried up.

A SecurityFocus article on the failure (www.securityfocus.com/news/7947) hints at the reason: People don't want to volunteer to do the boring, rote parts of a real security audit. Instead, people want to find scary vulnerabilities and exploits, then bask in the glory of having found them.

On the other hand, Microsoft pays people to do code reviews, and the reviewers' evaluations and compensation depend on how well the reviews are done. According to Michael Howard, senior program manager in Microsoft's security business and technology unit, if a vulnerability is found in code someone wrote or reviewed, it will affect the responsible person's subsequent performance evaluation.

Microsoft is not the only one that reviews Microsoft products. Howard says that an extensive outside review of Microsoft Windows XP SP2 is currently underway. No doubt many people consider Microsoft either lazy or stupid in terms of security, and we all wish the company had gotten better at it faster. From what Howard says, though, Microsoft seems serious about security and capable of doing it right.

Yet serious problems persist in Microsoft products, just as they persist in open-source products. The reason is not that nobody cares. It's that it's hard to write completely secure software.

The one vulnerability resulting from the leaked source as of this writing—an integer overflow bug—illustrates the problem. The code that was leaked dates from about 3.5 years ago, when few people, if any, were aware of integer overflows as a potential security problem. A code review considered good by the standards of the time could easily have missed it.

Microsoft's position is that the vulnerability was found and fixed in Internet Explorer 6. And it is completely plausible that a later review, with an awareness of integer overflows and their implications, would have uncovered the problem.

On the other hand, the "OpenSSL ASN.1 parser insecure memory deallocation" bug (www.kb.cert.org/vuls/id/935264) was very similar to the recent Windows vulnerability, related to the same ASN.1 standard. But that bug got little publicity by comparison, even though pretty much every open-source operating system uses that standard.

Every version of OpenSSL up to that point was vulnerable, which means the weakness had slipped through for years. How could this have happened? Simple: It's hard to find such things.

Wouldn't it be great if the relationship between source code and security were as simple as some people think?


Larry J. Seltzer is the editor of eWEEK's online Security Center (http://security.eweek.com).