Roughly three years in, how is network functions virtualization (NFV) doing?
A Monday afternoon panel at Oracle OpenWorld in San Francisco tackled that question by dividing NFV’s achievements and trends into Good, Bad, and Ugly. It was the brainchild of Douglas Tait, an Oracle director of global markets, who brought along friends from Dialogic and Intel to help write the lists.
The point wasn’t that NFV is doing badly. The good/bad/ugly framework was more a way to gauge progress so far and to spur discussion about how much work is ahead.
Items for each category were chosen by all three panelists, with the audience invited to suggest more. One note: I’ve kept the discussion mostly in the order that it happened, which means there’s a bit of good, bad, and ugly in each section.
What distinguishes “ugly” from “bad” is that the ugly items are just plain worse and might not ever get fixed.
Take standards enforcement, for example. Even in normal networking, equipment vendors and service providers can’t resist tweaking technology so that it obeys the letter but not the spirit of the law.
Sometimes it’s not even that subtle. Tait participated in the early days of standardization for the Java language — but that work was upended when carriers such as AT&T deviated from the standard, wreaking havoc on vendors. “I see the same cast of characters” in the NFV effort, he said.
At the moment, NFV might also be suffering from an overabundance of standards — or maybe more appropriately, of standards and open source organizations trying to advance the concept. One slide during the talk listed at least eight organizations, including ETSI, OPNFV, the OpenDaylight Project, the IETF, and the MEF. There’s a lot of activity that ultimately needs to coalesce.
“Open source has always been looked at as a bastion of speed, and I don’t know if it’s turning out that way in the market. That’s because of competing standards,” said Sean Varley, Intel’s director of open networking.
“We showed the ETSI diagram. It’s a model. Nobody really implements it that way, but a lot of the vendors are trying to fit their products into those holes,” Varley said.
Varley noted one good thing that’s come out of the ugliness: a spate of partnerships, as every player starts to realize it can’t do all of NFV alone.
A few of the “bad” items simply reflected the youth of NFV. It’s not clear if all this open source code is going to be carrier-grade, for instance — and no end-to-end NFV platform yet exists (although Oracle, in other OpenWorld sessions, bragged of owning every piece necessary for an NFV architecture).
The lack of an all-in-one implementation leads to the question of responsibility, the proverbial single throat to choke. “That is one aspect of this where it is very difficult to see how it’ll play out at this juncture, because it’s not a technology question,” Tait said.
Then again, because it’s not a technology question, the issue is certain to be solved eventually — in fact, the throat will belong to the one vendor or integrator that the customer is paying money to, said Jim Machi, executive vice president of product management at Dialogic.
Systems integrators seem likely candidates for the role, but they were slow to join the conversation. “I think SDN and NFV sort of took the systems integrators by surprise,” Varley said. “They’re going to have to retool.”
One audience member pointed out another sticking point: NFV isn’t saving anybody money yet. Right now, it’s more expensive than using traditional hardware systems.
It’s considered a temporary effect, because every new technology comes with a price. But it’s a problem worth acknowledging, because “there are clearly expectations NFV is going to be cheap,” Machi said.
Attitudes might be shifting. One audience member noted — and I’ve heard this as well — that carriers are excited about NFV more for its promise of service agility than for cost savings.
Tait left “good” for last, to end the panel on a happy note. And that’s justified; NFV has made a lot of progress in three years:
- Performance and scalability have matched those of hardware, in some cases.
- Proofs-of-concept are voluminous and include real production products.
- Despite the mention of costs as “bad,” there’s evidence that NFV really can lower capex and opex.
Tait added that NFV’s seemingly gradual emergence is a good thing, because it’s ruled out any notion of a forklift upgrade to NFV. The customer/vendor dialogue has matured enough for a hybrid model to emerge, with pockets of NFV sitting alongside existing networking gear.
“It allows security measures to evolve into it as well,” Tait said.
The embrace of open source among carriers has been another good effect, Varley said. People are forging ahead with ideas and not waiting for standards, and that’s put some serious RFPs on the table already. Some NFV-driven services are even available from AT&T and Telstra, said IHS Infonetics analyst Michael Howard, who was in the audience.
The downside to that speed is that carriers might fork, picking divergent NFV architectures before standards are cemented. “That’s a risk that you take,” he said.
Finally, there’s the fact that NFV is so broad it forces organizations to cooperate. In service providers where the networking team is isolated from the IT group or the OSS team, cross-pollination will be mandatory. “I see NFV as one of the forcing mechanisms that brings them together,” Tait said.
Photo: From a pumpkin carving exhibition at San Francisco’s Marriott Marquis, where this panel session was held. Identification of which pumpkin is good, bad, or ugly is an exercise left to the reader.