I bet that title got your attention, didn’t it? Almost every post regarding SDN (not just on our site) has touted the wondrous future made possible by Software Defined Everything. In spite of the title, I am not changing my tune on that at all. I still think that if you intend to be a player in the next 5-10 years, you had better at least have a strategy in mind for how you will move forward with SDN, NFV, and the various technologies involved. If the development of that strategy is not at least in process, you are behind right now…you have been warned.
Today my thoughts have drifted to the things you should be worried about as you start your SDN journey. These worries shouldn’t be considered show-stoppers; instead, look at them as success factors. A key early-phase activity in any program should be defining what success looks like. Use these topics, among many others we will surely touch on before long, to guide you in that definition. There are four key areas I’d like to address briefly: latency, scalability, vendor lock-in, and the maturity of the technology and tools involved.
The first two areas are intertwined, but I’ll touch on each of them separately. First is latency. If you were around in the early days of virtualization, you are well aware of the performance issues with early VMs and the limits on how many you could house on one server before it became a problem. As we move toward Software Defined Everything, the (albeit improved) hardware underlying all of this virtualization is going to get pushed hard. I expect a few stumbles here and there as the industry learns the impact of moving this technology from the lab into production and innovates to resolve the issues discovered. The trick is making sure you do the lab work: crawl before you try to run into customer-facing use cases.
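One concrete habit worth building into that lab work: judge latency by its tail, not its average. A handful of slow outliers can hide behind a healthy-looking mean while still ruining a customer-facing flow. Here is a minimal, illustrative Python sketch (the sample data is simulated, not from any real SDN deployment) showing how percentiles expose what the mean conceals:

```python
import random
import statistics

def latency_percentiles(samples_ms, percentiles=(50, 95, 99)):
    """Return the requested percentiles (nearest-rank) from latency samples in ms."""
    ordered = sorted(samples_ms)
    results = {}
    for p in percentiles:
        # Nearest-rank method: position of the p-th percentile in the sorted list.
        idx = max(0, int(round(p / 100 * len(ordered))) - 1)
        results[p] = ordered[idx]
    return results

# Simulated lab run: mostly fast packet forwarding, plus a few slow outliers
# (the kind of tail an average hides).
random.seed(42)
samples = [random.gauss(2.0, 0.3) for _ in range(990)] + \
          [random.uniform(20.0, 50.0) for _ in range(10)]

stats = latency_percentiles(samples)
print(f"mean={statistics.mean(samples):.2f} ms")
print(f"p50={stats[50]:.2f} ms  p95={stats[95]:.2f} ms  p99={stats[99]:.2f} ms")
```

The gap between the median and the 99th percentile is exactly the kind of finding you want to surface in the lab, before a customer surfaces it for you.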
This leads us directly to the scalability question. Just what will it take to build a scalable SDN solution that immediately realizes all of the expected promise? I am not sure anyone can give you a boilerplate, all-inclusive answer right now. There are the physical connectivity, the Controller capabilities, the management capabilities, and the cloud implications of adding all of this virtual network gear to the traditional IT cloud you may be accustomed to managing. As above, the answer is: do your lab work. Create an environment that mimics production as closely as possible and push the limits as hard as possible. Push to the point of breakage. The results of this testing must then feed into a solid planning process that designs your launch environment with an eye on how it will grow. In the old networking world you could almost plan by literally counting the physical elements and managing to the capacity involved. In a world where a new element can be spun up in minutes to accommodate a new customer or network need, it is vital to understand the impact on the underlying infrastructure.
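That "push to the point of breakage" exercise can be reasoned about with even a toy model. The sketch below is purely illustrative (the host sizes, resource figures, and first-fit placement policy are all invented for the example, not drawn from any real product): it spins up virtual network elements against a fixed pool of physical capacity until placement fails, which is precisely the limit you want to discover in the lab rather than in production.

```python
class Host:
    """One physical server in the lab, with a fixed capacity budget."""
    def __init__(self, name, cpu_cores, mem_gb):
        self.name = name
        self.cpu_cores = cpu_cores
        self.mem_gb = mem_gb
        self.elements = []  # virtual network elements placed on this host

    def can_fit(self, elem):
        used_cpu = sum(e["cpu"] for e in self.elements)
        used_mem = sum(e["mem"] for e in self.elements)
        return (used_cpu + elem["cpu"] <= self.cpu_cores and
                used_mem + elem["mem"] <= self.mem_gb)

def place(hosts, elem):
    """First-fit placement; returns the host used, or None when capacity is exhausted."""
    for h in hosts:
        if h.can_fit(elem):
            h.elements.append(elem)
            return h
    return None  # the breaking point: find it in the lab, not in production

# Two modest lab hosts and a hypothetical vRouter footprint (numbers made up).
hosts = [Host("lab-1", cpu_cores=16, mem_gb=64),
         Host("lab-2", cpu_cores=16, mem_gb=64)]
vrouter = {"cpu": 4, "mem": 8}

placed = 0
while place(hosts, dict(vrouter)):
    placed += 1
print(f"capacity: {placed} vRouters before the first placement failure")
```

A real environment adds network bandwidth, Controller session limits, and management overhead to this picture, but the principle holds: count what the virtual elements consume, not just how fast you can spin them up.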
Next is the issue of vendor lock-in. In spite of all the talk about open standards and interoperability, we still see vendors building solutions that work best, and sometimes ONLY, if you purchase the entire ecosystem from them. It is easy to understand why they are stuck on this road. Who would want to build a solution that encourages your customer to go buy a competitor’s product as well? Key questions you must pose to your vendors in the early stages include:
- Can component X come from another vendor?
- What open source components do you use?
- Do you contribute enhancements to these open source components back to the community?
- If we must use your components, do you at least integrate with similar components from other vendors so we can run a multi-vendor environment?
- What interoperability testing have you completed?
- What industry standards do you adhere to?
Open source is playing such a large role in this domain (OpenStack, OpenDaylight, OVS, ONOS, etc.) that I can’t stress enough that you should push your vendors in that direction. It is amazing the innovation that occurs through community efforts such as those mentioned above. Most of the major vendors are already playing huge roles in these communities, but the only ones who can keep them honest in this area are the customers asking for these solutions.
Lastly is the question of technology and tool maturity. This is a tricky one because some tools are much further along than others. For example, OpenStack is a fairly mature solution at this point, especially when you look at companies such as Mirantis that have built commercial offerings on top of it. Others, such as OpenDaylight, are further behind overall. However, they are gaining ground fast and, for certain use cases, are 100% ready for prime time today. For example, ConteXtream’s Service Function Chaining (SFC) solution recently moved to OpenDaylight for its Controller, and Juniper utilizes OpenContrail for its SFC Controller solution. I hate to sound like a broken record, but the lab is your friend. Make vendors prove their solution to you before you lock yourself into a particular answer.
Once again, the lab is your friend (get the point yet?), and the Proof of Concept exercise is the background check you should always perform before betting your future on a vendor. So, buyer beware, but don’t be afraid to start down the road to SDN. Just do so in a controlled and wise manner.
Tags: Automation Strategy