There's a false sense of security around open source code, according to Trustwave researchers Brandon Myers and Assi Barak.

In a presentation on the RSA Conference expo floor Tuesday, they gave a few examples of vulnerabilities they'd found last year using Trustwave's vulnerability scanner. One was in certain versions of Magmi, an open source database import utility for the Magento e-commerce platform. Another was in RubyGems, the package manager for the Ruby programming language.

Their deeper point was that open source code is prone to vulnerabilities just like any other code.

"Open source, while we love it — it's awesome — is not as secure as people assume," said Myers, a Trustwave senior security researcher.

The problem lies in the way people obtain open source code. It's often grabbed in a hurry, and if it came from a trusted registry, it must be OK, right?

The analogy that comes to mind is Wi-Fi. People will latch onto any free Wi-Fi signal with a friendly name. Could the name be spoofed? Sure. But if you're in a hurry...

The last two years have produced some well-known vulnerabilities in open source code. Heartbleed, found in OpenSSL, is the most famous. And in January, a hole was discovered in OpenSSH.

Both are widely used packages, proof that letting everybody see the source code doesn't guarantee every vulnerability has been found.

Magmi and the Version Problem

Nor does it mean that the version of code you're using is the safe one. As with commercial code, open source code has old versions lying around that don't include the latest security patches.

That's what happened with Magmi, which had a hole that could be exploited to steal passwords or encryption keys. Someone had fixed the problem in the version of Magmi on GitHub, but that fix was never really announced to the world. So when developers grabbed an older Magmi version off of SourceForge, they brought along the security vulnerability as well.
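A build pipeline can guard against that kind of drift by refusing any pinned dependency older than its first patched release. The Python sketch below is a generic illustration of the idea, not Trustwave's tooling; the version numbers and "first fixed" release are made up, and it assumes the third-party packaging library is available.

```python
# Sketch of a pre-install guard: refuse dependencies pinned below the first
# patched release. The Magmi version numbers here are illustrative placeholders,
# not the project's actual release history.
from packaging.version import Version  # assumes the `packaging` library is installed

# Hypothetical advisory table: package -> first release known to carry the fix.
FIRST_FIXED = {
    "magmi": Version("0.7.22"),
}

def is_patched(package: str, pinned: str) -> bool:
    """True if the pinned version is at or above the first fixed release."""
    fixed = FIRST_FIXED.get(package.lower())
    if fixed is None:
        return True  # no advisory on record for this package
    return Version(pinned) >= fixed

if __name__ == "__main__":
    print(is_patched("magmi", "0.7.17"))  # False: an old mirror copy still has the hole
    print(is_patched("magmi", "0.7.22"))  # True: at or past the patched release
```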

"This happens a lot with mirrors in Github as well," Myers said.

Trustwave tried contacting Magmi developers about the situation, and that raised a separate and very old concern about open source code: Who answers the phone? Some open source projects do have organized response teams that can address new problems quickly. Many do not. Trustwave had a hard time getting through to Magmi's developers, Myers said.

Worse, it's possible your open source code was developed by exactly one person, and that person has moved on.

"The project might be for my school, and I don't want to touch it or see it for the rest of my life," said Barak, Trustwave's lead security researcher.

Hijacking DNS

The RubyGems exploit was more sinister. It opened the possibility of a DNS hijack — that is, redirecting Web traffic without the user's knowledge. So, when someone asked for a "gem" (Ruby's term for a package of code) off the RubyGems site, it would be possible for an attacker to answer the request with some other gem from some other site.

Myers and Barak demonstrated the exploit by asking for a gem called c7decrypt and getting something called c8decrypt instead. Users might not notice the difference. More importantly, the practices of DevOps and continuous integration involve grabbing lots of code quickly, sometimes in an automated fashion. "There's not going to even be a user looking at that screen saying, 'We wanted 7 and we got 8,' " Barak said.

The problem stemmed from a lack of verification: the user had no way to confirm that the server answering a request was the same one the request had been sent to.
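A common way to fail closed in that situation is to pin a cryptographic checksum next to the dependency and reject anything that doesn't match, no matter which server answered. The sketch below is a generic Python illustration, not RubyGems' own mechanism; the file name and expected digest are placeholders.

```python
# Verify a downloaded package against a checksum pinned in advance, so a
# response redirected to another server (or swapped for another gem) is rejected.
import hashlib
from pathlib import Path

# Placeholder digest; in practice this would be recorded when the dependency was vetted.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: Path, expected: str = EXPECTED_SHA256) -> None:
    """Raise if the downloaded file does not match the pinned checksum."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"checksum mismatch for {path.name}: got {actual}, expected {expected}")

# Example (hypothetical file name):
# verify_download(Path("c7decrypt-0.8.0.gem"))  # raises if a different gem came back
```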

Attractive to Hackers

Barak and Myers weren't trying to say that open source code is inherently unsafe. All code carries security risks. It's just that, culturally, users tend to trust open source code, especially the packages that everybody else is using.

"It's a tough problem to fix, but it's something we need to be aware of. Open source is getting to be mainstream," said Barak.

"Open source has characteristics that make it particularly attractive to an attacker," said Michael Pittenger, vice president of product strategy at Black Duck Software, in a separate conversation with SDxCentral. "It's ubiquitous; everybody uses SSL. The source code's available to an attacker. And we make a point of publishing vulnerabilities."

Black Duck's scanner combs registries or directories to find out just what software is in there. The scan is useful for taking inventory, but it can also show how much open source code a company is using and how much of it matches known vulnerable versions. The total can be more than twice what the company thinks it is, Pittenger told us.
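The matching step itself is conceptually simple: take the inventory the scan produced and compare each version against a list of first-patched releases. The Python sketch below uses made-up package names and advisory data; it is not Black Duck's product, just an illustration of the comparison.

```python
# Compare a scanned inventory of packages against a (hypothetical) advisory
# list of first-patched releases and flag anything older.

def parse(version: str) -> tuple:
    """Turn a dotted numeric version like '2.4.1' into a comparable tuple (2, 4, 1)."""
    return tuple(int(part) for part in version.split("."))

# Made-up advisory data: package -> first release that carries the fix.
FIRST_PATCHED = {
    "examplelib": "2.4.1",
    "otherlib": "0.9.7",
}

# What the scan actually found in the code base (also made up).
inventory = {
    "examplelib": "2.3.0",
    "otherlib": "0.9.7",
    "untracked-lib": "1.0.0",
}

for name, found in inventory.items():
    patched = FIRST_PATCHED.get(name)
    if patched and parse(found) < parse(patched):
        print(f"{name} {found} predates the first patched release ({patched})")
```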

Pittenger echoed Myers and Barak's comments that the speed of code development can contribute to some carelessness when it comes to security. "It's not that developers are against security. They're paid to deliver functionality," he said. "We're building awareness in that community."