Margaret Chiosi, distinguished network architect at AT&T Labs, shared those insights during an NFV World Congress talk on Wednesday. Her thoughts were influenced not only by AT&T’s internal work but also by her role as the “front door” to AT&T’s Domain 2.0, the program that’s bringing newer, smaller suppliers into the fold.
For starters, AT&T has split SDN controllers into categories. There’s the global controller with an end-to-end view — OpenDaylight being one example. Separately, there are local controllers that interact with a particular piece of equipment, as exemplified by Nuage Networks, VMware’s NSX, or Juniper’s OpenContrail (the examples Chiosi cited).
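The split Chiosi describes can be sketched as a two-tier hierarchy: a global controller with the end-to-end view delegates device-level work to local controllers, each bound to one piece of equipment. The class and method names below are invented for illustration; AT&T's actual interfaces are not public.

```python
# Hypothetical sketch of the two-tier controller split: a global controller
# (an OpenDaylight-style role) fans a path out to local controllers, each of
# which only knows its own equipment.

class LocalController:
    """Talks to one device or virtual router (an NSX- or Contrail-style role)."""
    def __init__(self, device):
        self.device = device

    def apply(self, config):
        return f"{self.device}: applied {config}"

class GlobalController:
    """Holds the end-to-end view and delegates per-hop work."""
    def __init__(self):
        self.locals = {}

    def register(self, name, ctrl):
        self.locals[name] = ctrl

    def provision_path(self, hops, config):
        # The global controller knows the whole path; each local
        # controller applies the config only to its own equipment.
        return [self.locals[h].apply(config) for h in hops]

odl = GlobalController()
odl.register("edge-1", LocalController("edge-1"))
odl.register("core-1", LocalController("core-1"))
results = odl.provision_path(["edge-1", "core-1"], "vlan 42")
```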
AT&T has been developing its own global controller, using OpenDaylight as its framework. And in doing so, the carrier has decided to extend its controller’s reach to Layers 4-7, going beyond the usual concept of an SDN controller that spans Layers 0-3. AT&T is even going back to application-layer vendors and asking them to put adapters into OpenDaylight, Chiosi said.
AT&T made this decision because some of the tasks required at the upper layers don’t differ much from what can be done at lower layers, she said.
“We’re realizing everything’s metadata,” Chiosi said. “Once you virtualize, it all looks the same — it all needs access, and it doesn’t matter if it’s a VoIP application or a router: At the end of the day, it has some set of metadata that has to get acted upon.”
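The "everything's metadata" point can be illustrated with a minimal sketch: once virtualized, a VoIP application and a router are both just records of metadata that the same provisioning logic acts on. The field names and requirement tags here are invented, not AT&T's.

```python
# Illustrative sketch: a Layer-7 VoIP application and a Layer-3 router
# reduce to the same shape of metadata, so one code path handles both.

voip_app = {"name": "voip-1", "layer": 7, "needs": ["access", "qos"]}
vrouter  = {"name": "vr-1",   "layer": 3, "needs": ["access", "routing"]}

def provision(vnf):
    # One code path regardless of layer: act on whatever the
    # metadata says the virtualized function needs.
    return [f"grant {need} to {vnf['name']}" for need in vnf["needs"]]

for vnf in (voip_app, vrouter):
    print(provision(vnf))
```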
When it comes to local controllers — the non-OpenDaylight ones — AT&T is already deploying some into the cloud, Chiosi said, implying that multiple vendors are being used. These are cases where the controller works closely with a virtual router, not a virtual switch; AT&T believes a virtual router is “more effective” for its SDN purposes.
One Page for SDN/NFV
AT&T’s research has convinced Chiosi that the industry has to find some common ground, rather than invent the same concepts repeatedly. “If we can just get on one page, we can go faster,” she said.
The first step is to agree on what functionality is even necessary — a common framework — and then start categorizing use cases, because many share the same requirements; they don’t each have to be built from the ground up, she said.
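The categorization step amounts to grouping use cases by their requirement sets, so that cases with identical needs share one framework rather than being built from scratch. The use-case names and requirement tags below are invented for illustration.

```python
# Sketch of categorizing use cases by shared requirements: cases with the
# same requirement set collapse into one category and can reuse one build.

from collections import defaultdict

use_cases = {
    "vpn-on-demand":   {"bandwidth_calendaring", "policy"},
    "burst-bandwidth": {"bandwidth_calendaring", "policy"},
    "vfirewall":       {"service_chaining"},
}

categories = defaultdict(list)
for name, reqs in use_cases.items():
    # frozenset makes the requirement set usable as a dictionary key.
    categories[frozenset(reqs)].append(name)

# Two of the three use cases share requirements, so only two
# categories need to be built from the ground up.
```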
The harder step after that would be to settle on the rules for “flows” — deciding which elements (controllers, cloud orchestrator, service orchestrator) talk to each other in order to get tasks done.
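One minimal way to encode such "rules for flows" is an allowed-interaction table among the elements. The element names come from the article; the specific edges below are invented for illustration, not a description of AT&T's actual rules.

```python
# Hypothetical encoding of flow rules: which elements may talk to which
# others to get a task done. The edges here are illustrative only.

ALLOWED_FLOWS = {
    "service_orchestrator": {"cloud_orchestrator", "global_controller"},
    "cloud_orchestrator":   {"global_controller"},
    "global_controller":    {"local_controller"},
    "local_controller":     set(),  # leaf: talks only to its equipment
}

def flow_allowed(src, dst):
    """Check whether src is permitted to initiate a flow to dst."""
    return dst in ALLOWED_FLOWS.get(src, set())
```

Settling on a table like this is the hard part Chiosi describes: every party has to agree on the edges before the elements can interoperate.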
AT&T has done this exercise for multiple cases. It’s been doing so using a DevOps-style model: Produce code immediately, find what’s not working, and make revisions quickly.
“We’ve moved away from this process of having an architecture team go through and figure out all the best ways for doing this,” she said. “As we have all these projects, and we have a lot of projects, we are building upon our knowledge of what we did for a certain set of projects and tweaking where it makes sense.”