AT&T has been one of the more aggressive telecommunication providers in terms of software migration as it looks to gain greater control over network assets using software-defined networking (SDN), network functions virtualization (NFV), and cloud technologies.
That aggressiveness has included planting a stake in the ground on how much control it expects to gain: AT&T claimed to have hit 34 percent SDN control at the end of 2016, on its way to 55 percent by the end of this year.
Andre Fuetsch, president of AT&T Labs and CTO at the telecom giant, recently spoke with SDxCentral to provide an update on that software work and on how those efforts are set to bolster the carrier's 5G plans. The carrier now expects to hit even greater software control of its network more quickly than previously forecast. This interview has been edited for clarity.
What have you seen as some of the more important features of AT&T’s ongoing work in virtualizing network assets as part of its Domain 2.0 program?
Fuetsch: We’ve been on this software-defined journey for quite some time now, and there are really three core components to this program: virtualization, which is basically virtualizing these network functions; control, which is what level of SDN control we have over a given function; and automation, which is how much of that particular function we have automated in terms of care, feeding, support, and monitoring.
The reason our SDN journey is so important is that we are virtualizing as much of the network as possible in order to control it more efficiently. It’s not a binary yes or no; it’s a wide spectrum. What I mean is you can take an existing network function from the old world and just port it right over to run on top of a cloud stack, but that gets a low score. To get a high score, you need to re-architect that function and make it more cloud native.
On the SDN control side it’s also not a binary answer. You can have some rudimentary static SDN control like some basic provisioning, or you can have much more dynamic real-time control.
On automation it’s the same deal. You can automate a few things about the function, or you can automate a lot more and even automate a cluster or service chain of functions together.
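The idea that each of these three components is a spectrum rather than a yes/no answer can be sketched in code. This is purely an illustrative model, not AT&T's actual scoring system; all names, weights, and example values below are hypothetical.

```python
# Hypothetical sketch: each network function is scored on three dimensions,
# each a continuum from 0.0 (legacy) to 1.0 (fully software-defined).
from dataclasses import dataclass


@dataclass
class NetworkFunction:
    name: str
    virtualization: float  # 0.0 = lift-and-shift port, 1.0 = re-architected cloud native
    control: float         # 0.0 = static provisioning, 1.0 = dynamic real-time SDN control
    automation: float      # 0.0 = manual care and feeding, 1.0 = automated service chain

    def maturity(self) -> float:
        """Average the three dimensions into a single 0-1 score."""
        return (self.virtualization + self.control + self.automation) / 3


# A function ported as-is onto a cloud stack gets a low score...
ported = NetworkFunction("legacy-firewall", virtualization=0.2, control=0.1, automation=0.1)
# ...while a re-architected, cloud-native function scores high.
native = NetworkFunction("vEPC", virtualization=0.9, control=0.8, automation=0.7)

print(f"{ported.name}: {ported.maturity():.2f}")  # low score
print(f"{native.name}: {native.maturity():.2f}")  # high score
```

The point of the averaged score is simply that progress on any one axis moves the needle; "virtualized" alone does not make a function software-defined.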
How should people view the connection between the Domain 2.0 program and AT&T’s more recent push behind the deployment of 5G technologies?
Fuetsch: What I just talked through builds the foundation for 5G. As an example, one of our mobile packet cores is already fully virtualized. As we move to 5G, it’s a matter of: how do you control those assets? How do you automate them? How do you get the operational efficiencies you want? How do you get more speed in service delivery? That’s everything we’ve been doing with our program.
The big linchpin that holds this all together is ECOMP, or what is now called ONAP. We’ve got a few large-scale operators on board, and frankly a pretty big pipeline of many others that are going to come on board as well. We really view ONAP as critical to how we build out the NFV portions of the 5G packet core, and even the RAN as well.
How important is it to have the software pieces in place to support the real 5G use cases we hear about, such as the connected car and network slicing?
Fuetsch: We believe we are going to be in a very advantageous position here with 5G because we already have this foundation out there that we are running in production today at scale. We can leverage that as true 5G comes to fruition.
As true 5G standards get codified by the end of the year, and the specifications, silicon, and equipment get built, we hope to be in a position to start deployments as early as late next year. We are in a pretty unique position because you are going to need these SDN surrounds to support all of this dynamic capability, the real-time control, and the network slicing features.
Have you been happy with the pace of development of virtualized platforms from the vendor community in terms of your software migration plans?
Fuetsch: It’s getting better day by day. Certainly it’s not going as fast as we want, and we put a lot of pressure on our supply chain to deliver, but it depends on the particular area.
With ECOMP, there was nothing off the shelf at the time that could give us the model-driven automation and orchestration platform we needed, so we had to go out and build it ourselves. Then we realized this platform is pretty big and complex, and it needs to evolve at a faster rate than we could manage on our own. That was the impetus behind open sourcing it.
We think that if we put enough of a framework out there, without giving away our secret sauce, it will provide enough value and a standard that the entire industry can coalesce around. We could all contribute to the platform in a way that advances it, a rising tide that lifts all boats. And we could still keep our competitive edge, since we were first into production and first to market.
The biggest value for us was to expand the developer pool and community and get more contributions and more commits. And this is not just for us, but also for the whole industry. We are seeing a lot of interaction with the webscale companies through the Linux Foundation with interest in wanting to get involved.
Think of the big use cases: AR, VR, and autonomous cars, and think of the webscale companies interested in them. What they are finding out is that 5G is going to be really important to making those use cases a reality. Mobile operators are going to be a very important part of the equation. That’s why they want to get involved in what the architecture will look like.
We donated a huge code base to the community and obviously we have some influence with it, but it’s really in the community’s control now.
How is AT&T progressing on its software migration plans in terms of hitting its previously stated goals of controlling network assets using SDN?
Fuetsch: Our ambitions are not going to stop at 75 percent; we’re going to go beyond that. When I set that goal three years ago, we felt there were network elements still in the network that would not make economic sense to pull out as quickly. But as we get closer to 2020, what we are seeing is that we can actually raise the goal, as I predicted we would.
Now certainly we knew we would never get to 100 percent, as there will always be some straggler technologies where the economic justification for pulling them out of the network doesn’t add up, but those will be a small number. For the most part, you can expect us to go much further than 75 percent, and at the rate we are going, we have a good chance of surpassing that mark before 2020.