Open source can protect your virtualised network. Here’s how.
Virtualisation has been a hot topic in telecommunications for nearly half a decade, and security concerns have remained an ever-present feature. This is not surprising given the extent to which NFV/SDN is transforming the industry and the many ‘known unknowns’ this entails.
As networks migrate from hardware to software, and ‘walled gardens’ turn into much more open cloud-like architectures, so security risks increase.
Throwing open source software development into the mix adds a further layer of complexity.
If large numbers of developers across the world can manipulate the code of a piece of software, doesn’t this increase the risk of malicious code being introduced? And if code is visible to anyone, isn’t it going to be much easier to find, and capitalise on, vulnerabilities?
The answer is no. Just because software is open source doesn't mean it is more vulnerable. Quite the opposite, in fact, says Liron Shtraichman, NFV R&D director at Amdocs.
Balancing the risks
While the risk of someone implanting a malicious piece of code is conceivable, the mechanisms of the open source community would ensure that it’s not there for very long. Open source code is inspected by a lot of people. I would go as far as to say that it is reviewed and audited much more than closed source code. So, while the risk is there in principle, any rogue code would get flushed out pretty rapidly.
When it comes to attacks from the outside, there is little difference between hacking an open source or a closed source application.
In the case of a probing attack – where hackers identify software weaknesses from the way the application responds to a range of prompts – the tools and processes used are the same for both closed and open source.
Another approach, static code analysis, is ostensibly easier in open source because the code is fully visible. However, the fact that code is closed doesn't mean it cannot be read. There are enough tools on the market that can decompile a binary, so a hacker can find those soft spots without ever having the source itself. Hence the risk levels are no different between the two domains.
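To make the point concrete, here is a minimal, illustrative sketch of what a static scan does – a toy pattern check for hardcoded credentials (the function and pattern names are invented for this example). Real analysers apply far richer rules, but the principle is the same whether the input is original source code or text recovered from a decompiled binary:

```python
import re

# Toy static-analysis pass: flag hardcoded credentials in source text.
# Illustrative only – commercial and open source analysers use much
# more sophisticated rules, but they operate equally well on original
# source and on decompiler output.
SUSPICIOUS = re.compile(
    r'(password|secret|api_key)\s*=\s*["\'][^"\']+["\']',
    re.IGNORECASE,
)

def scan(source: str) -> list:
    """Return the lines that match the hardcoded-credential pattern."""
    return [line.strip() for line in source.splitlines()
            if SUSPICIOUS.search(line)]

sample = '''
host = "db.example.com"
password = "hunter2"
timeout = 30
'''
print(scan(sample))  # flags only the password line
```

Because such a scan needs only readable text, not the vendor's blessing, closed source offers little real protection against it.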
A global community to fend off threats
While this may sound counterintuitive, open source even has the potential to be more secure than a closed source application.
Why? Because in closed source settings, it is not generally in the vendor's interest to disclose vulnerabilities. Fixing them can therefore take a long time – leaving users exposed for a prolonged period.
The transparency which characterises the open source environment translates into a much greater incentive to identify threats and root them out quickly.
Vulnerabilities are also less likely to creep in inadvertently. Compared to closed source developers, the open source community tends to invest much more time in polishing code before submitting it for inspection. After all, developers are putting their professional reputation on the line with a large, global audience.
Reaching critical mass
How effectively an open source project can stay on top of security risks depends entirely on the size of its support base. Without a sufficiently large number of developers to review and clean code, the community won’t be able to protect applications effectively. Ultimately, this could create more stumbling blocks for the adoption of NFV/SDN networks.
The author of this blog is Liron Shtraichman, NFV R&D director, Amdocs
About the author:
Liron Shtraichman is R&D director at Amdocs, where he leads the NFV development teams. He has been at Amdocs since 2006. In his current role, Liron manages ONAP development as part of the AT&T D2.0 collaboration with Amdocs.
He also leads the network unit's transition to open source development and culture. In his prior roles at Amdocs, Liron held several managerial and technical positions in the OSS domain, covering network rollout and service fulfilment and assurance.
Comment on this article below or via Twitter: @VanillaPlus OR @jcvplus