Intelligent network automation and an API-first approach are the answers to enabling NFV to make good on its numerous promises.
Network Function Virtualization (NFV) became one of the hottest topics in networking a few years ago, promising dynamic virtual networks and cost savings, and it was investigated and trialed in the labs of most major network operators. However, it’s no secret that NFV has yet to deliver the explosive growth and benefits we all anticipated. For widespread NFV success, greater interoperability is needed across vendor solutions and across the individual components within those solutions.
The NFV problem was never a shortage of solutions. It was a surplus of solutions, each operating independently on specific components of the NFV environment with no interoperability. Operators bought into the vendor hype and tested multiple tools from multiple vendors, with each vendor touting its product as the solution. The reality is that, in its current state, an NFV solution comprises multiple tools to cover all the use cases a network operator requires.
To support these claims, it’s important to take a step back and explore the problems currently facing NFV and how it got to this point. To date, NFV has been slow to deliver on its promises, and the primary reason is the lack of interoperability among the multitude of tools that have been introduced. Instead of the original vision of a single NFV Orchestrator that provides an integration point for existing OSS systems, with a single VNF Manager and a single Virtual Infrastructure Manager (VIM) interacting with that orchestrator, we have seen environments where specific use cases leveraging NFV, such as SD-WAN and Virtual CPE (vCPE), have introduced multiple orchestrators, multiple VNF Managers, and often a hybrid cloud model, resulting in a hodgepodge of management tools throughout the stack.
NFV challenges really began at the bottom of the stack, with infrastructure. In the early days, the primary environment touted for NFV was OpenStack. Unfortunately, many companies had already invested heavily in VMware infrastructure, which complicated early NFV trials: those existing VMware environments were better suited to the virtual servers and software used by IT, such as email servers and other legacy IT systems. Because of the sunk investment in VMware and its weaker NFV suitability compared with OpenStack, hybrid environments sprang up that included both, each bringing its own management tools. Further adding to this complex hybrid environment was the introduction of container-based infrastructure and still more management tools. Add in the public and private cloud options of AWS, Azure, and Google, and the sheer volume of management tools becomes impossible to manage efficiently.
Issues also arose at the VNF Management layer. The VNF Manager orchestrates the activities of virtual network functions (VNFs) such as virtual routers, switches, and load balancers. For every vendor that introduced VNFs into the market, at least one VNF Manager was introduced as well, either as a new tool or as a new version of an existing management system for physical network elements, modified to handle VNFs. On top of that, the orchestration layer containing the NFV Orchestrators also saw many options from many vendors, sometimes packaged with the VNF Managers and sometimes supplied as standalone orchestrators by yet another vendor.
The problem is clear. A company likely has to manage an NFV environment that includes multiple orchestration tools within the NFV Orchestration and VNF Management layers, plus three to five infrastructure managers that each handle a particular segment of its hybrid cloud architecture. Each of these pieces is also supplied by a different vendor with little vested interest in out-of-the-box interoperability with the others. When all is said and done, that’s approximately ten separate management systems just for NFV that don’t even communicate with each other out of the box!
Keep in mind, these NFV management systems must also integrate with the existing network management tools that interact with the physical network, as well as with IT systems for ticketing, request management, and perhaps even billing.
Interoperability is king for NFV
Interoperability is critical to NFV adoption for two key reasons. The first is personnel-related. As described above, an NFV environment requires multiple new tools. Users must know them all and manually “swivel chair” from system to system to perform the specific tasks each one requires. Here lies a gaping skills gap. Engineers are only human; past a certain point they simply cannot learn to use every tool efficiently because there are too many of them. This leads to specialization within the team, where certain engineers become experts in a subset of the management tools. As a result, tasks that should be achievable by a single engineer can require multiple people to complete. In an effort to achieve NFV’s goals of streamlining and cutting costs, operators are building a complex environment that requires more people to be successful instead of fewer.
Second, from a network management perspective, it is difficult to maintain services whose definitions are dispersed across a myriad of different systems, with no single system having a complete view. This directly impairs the ability to manage configuration changes, accurately charge end users (whether internal or external) for services used, and effectively monitor and assure the quality, performance, and reliability of the network. Federation of the data from each of the systems involved is a key aspect of interoperability: there must be a way to see a single view of all the data about a service supported by the NFV environment, without creating copies of that data.
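As a rough sketch of what such federation could look like, assuming each management system exposes a query API, the example below assembles a single service view on demand from live queries rather than from replicated copies of the data. Every system, function, and field name here is a hypothetical stand-in, not a real product’s API.

```python
# Federated service view: each source is queried live when the view is built,
# so no system's data is replicated into a second store.

def fetch_from_orchestrator(service_id: str) -> dict:
    # Stand-in for a live query to the NFV Orchestrator.
    return {"topology": ["vnf-a", "vnf-b"], "state": "active"}

def fetch_from_vnfm(service_id: str) -> dict:
    # Stand-in for a live query to the VNF Manager.
    return {"vnf-a": "healthy", "vnf-b": "degraded"}

def fetch_from_billing(service_id: str) -> dict:
    # Stand-in for a live query to the billing/OSS system.
    return {"plan": "gold", "usage_gb": 412}

def service_view(service_id: str) -> dict:
    """Compose a single, read-only view of one service from all sources."""
    return {
        "service": service_id,
        "orchestration": fetch_from_orchestrator(service_id),
        "vnf_health": fetch_from_vnfm(service_id),
        "billing": fetch_from_billing(service_id),
    }

print(service_view("vcpe-1027"))
```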
Managing so many different components is an impossible task and a major barrier to NFV adoption. However, there is hope! Because these solutions have primarily been developed in the last six to eight years, they were designed with full exposure of their capabilities via APIs in mind. That provides the basis for a solution to the problem: intelligent network automation.
NFV’s holy grail? APIs.
Modern networks will include NFV-based networks and services. One of the key concepts of the modern network is programmability: having access to the tools, and to the network itself, in a manner very similar to the way we have been integrating software systems for many years now. Consistent APIs based on open standards allow communication across multi-vendor environments and effectively future-proof the network.
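To make programmability concrete, here is a minimal sketch of driving a hypothetical NFV orchestrator through a REST API, exactly as one would integrate any other software system. The base URL, resource path, and payload fields are illustrative assumptions rather than any particular vendor’s API, though real orchestrator APIs (for example, those following ETSI’s SOL005 specification) follow the same request/response pattern.

```python
import requests

# Hypothetical orchestrator endpoint -- substitute your own system's API.
ORCHESTRATOR = "https://nfvo.example.net/api/v1"

def instantiate_vnf(vnfd_id: str, vim_id: str, name: str) -> str:
    """Ask the orchestrator to instantiate a VNF and return its instance ID.

    The /vnf_instances resource and its payload shape are illustrative
    assumptions; a real orchestrator will differ in detail but is driven
    in exactly this fashion.
    """
    resp = requests.post(
        f"{ORCHESTRATOR}/vnf_instances",
        json={"vnfdId": vnfd_id, "vimId": vim_id, "instanceName": name},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    instance_id = instantiate_vnf("vrouter-1.2", "openstack-east", "branch-42-router")
    print(f"Requested VNF instance: {instance_id}")
```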
Every network will have multiple orchestrators, controllers, and other network management systems. By consuming the APIs of these systems and presenting a single, unified management layer that exposes their capabilities for use in intelligent network automation workflows, it is possible to abstract the complexity away from the user. This abstraction closes the skills gap mentioned earlier by eliminating the need for an engineer to understand the ten-plus new systems; the engineer is instead presented with a single tool that not only combines the capabilities of the NFV tooling into a common user experience but can also pull in the existing tools that were in use before NFV. The result is not simply the negation of the new complexity but the elimination of existing complexity as well. Additionally, this approach enables federation of the data from each of the NFV management tools and existing network management systems, providing a single view of the network.
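One way to picture this unified layer is as a thin set of adapters: each underlying manager is wrapped behind a common interface, and automation workflows talk only to that interface. The sketch below is a hypothetical illustration of the pattern; the class names, methods, and backends are assumptions, and the adapter bodies are stubs standing in for real API calls.

```python
from abc import ABC, abstractmethod

class ManagerAdapter(ABC):
    """Common interface wrapped around each underlying management system."""

    @abstractmethod
    def deploy(self, service_spec: dict) -> str:
        """Deploy a service and return its identifier."""

    @abstractmethod
    def status(self, service_id: str) -> str:
        """Report the service's current state."""

class OpenStackAdapter(ManagerAdapter):
    def deploy(self, service_spec: dict) -> str:
        # Would translate the common spec into OpenStack-side API calls.
        return f"os-{service_spec['name']}"

    def status(self, service_id: str) -> str:
        return "ACTIVE"

class VMwareAdapter(ManagerAdapter):
    def deploy(self, service_spec: dict) -> str:
        # Would translate the same spec into VMware-side API calls instead.
        return f"vmw-{service_spec['name']}"

    def status(self, service_id: str) -> str:
        return "poweredOn"

class UnifiedManager:
    """The single tool the engineer sees; routes work to the right backend."""

    def __init__(self, adapters: dict):
        self.adapters = adapters

    def deploy(self, target: str, service_spec: dict) -> str:
        return self.adapters[target].deploy(service_spec)

manager = UnifiedManager({"openstack": OpenStackAdapter(), "vmware": VMwareAdapter()})
print(manager.deploy("openstack", {"name": "vcpe-site-7"}))
```

Because every backend answers the same calls, adding an eleventh management system means writing one more adapter, not retraining the whole team.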
Intelligent network automation and an API-first approach are the answers to enabling NFV to make good on its numerous promises. These approaches provide interoperability where none currently exists, supplying the missing link to NFV success.
Originally published on Network Computing.