Interconnect innovation key to soaring bandwidth demands • The Register
Every year, the bandwidth that telecom networks carry increases by roughly 30 percent. To keep up, the interconnects on which these networks are built will need to get a whole lot smarter and more capable before long, BT’s Andrew Lord said during his Hot Interconnects keynote earlier this week.
Lord, a senior manager of optical research at the British telecommunications giant, describes fiber as “a massive 21st century project for the planet in the same way that, 100 years ago, copper was a massive project for railway train networks, water networks, gas, and electricity.”
But as fiber becomes more pervasive, branching off to people’s homes and businesses, it’s becoming an ever larger headache for telecom operators as they grapple with surging bandwidth demands year after year.
“Our capacity grows at 30 percent a year and we’ve seen that for the last 20 years,” Lord said. “I’m seeing no reason why that should slow down, and I’m seeing many reasons why it might speed up.”
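Compound growth at that rate adds up faster than intuition suggests. A quick back-of-envelope check (illustrative only, using Lord’s 30 percent figure):

```python
import math

# At 30 percent compound annual growth, how fast does traffic double,
# and how much does it grow over a decade?
GROWTH = 0.30

doubling_years = math.log(2) / math.log(1 + GROWTH)  # roughly 2.6 years
decade_factor = (1 + GROWTH) ** 10                   # roughly 13.8x
```

Traffic doubling roughly every two and a half years means a network sized for today’s demand is more than an order of magnitude too small within a decade.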
This hasn’t been all that big of a problem, at least for the past two decades, thanks to technologies like wavelength-division multiplexing, which has enabled optical engineers to cram more wavelengths – colors of light – into a single fiber pair. The problem is we’re approaching the limits of that technology as it’s deployed today, Lord explained.
A smart interconnect stopgap
One of the ways Lord says network operators can get around this limitation is by taking advantage of the fact that interconnects have become a lot smarter over the past few generations.
“A transceiver, or a laser, a pluggable, or a line card is capable of so much more than it was,” he said. “It’s capable of flexing its boundaries. You can have it running 400Gbit/sec or 600 or 500 and you can flex it.”
In the way modern optical networks are run today, a large portion of each fiber’s capacity is set aside as spare margin.
You can think of this sort of like a highway. Without shoulders, the road is flanked by deep ravines on either side: stay in your lane and you’ll reach your destination, but deviate even a little and you plunge to your demise. That leaves no margin for error, which is why highways have shoulders along their sides to provide some leeway – and the wider the shoulder, the larger the margin of safety.
The same sort of logic is applied to how data is transmitted over optical fiber. Service providers make an educated guess as to how environmental factors and equipment age will impact the reliability of the connection over time and build in a generous safety margin.
This is actually one of the ways optical vendors can juice their performance figures in trials by effectively shrinking the spectrum margin to a degree you’d never find in a production network.
By coupling smart, coherent optical transceivers with AI/ML algorithms, Lord says, service providers can start resizing the margin dynamically to open up more bandwidth, or compensate for rising errors.
“We can really back off on that margin because we’re getting real-time information on our fiber loss, our fiber performance, our dispersion, our non-linearity, or the transceiver parameters,” he said.
For example, a brand-new network could be run with a thin margin to achieve the highest capacity, but as the equipment ages, or as environmental factors change, the margin can be increased to accommodate additional variance and loss.
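As a rough sketch of that idea – using a hypothetical modulation-format table with illustrative SNR thresholds, not vendor figures – the controller logic for trading margin against capacity might look like:

```python
# Hypothetical format table: (name, bits per symbol, required SNR in dB).
# The threshold values are illustrative, not real transceiver specs.
FORMATS = [
    ("64QAM", 6, 18.0),
    ("16QAM", 4, 12.0),
    ("QPSK", 2, 6.0),
]


def pick_format(measured_snr_db, margin_db):
    """Choose the densest modulation format whose SNR requirement,
    plus the operator's safety margin, is met by the live measurement."""
    for name, bits, required_snr_db in FORMATS:
        if measured_snr_db >= required_snr_db + margin_db:
            return name, bits
    return None  # link cannot run safely at any format
```

With a live SNR reading of 19 dB, a thin 1 dB margin permits 64QAM at 6 bits per symbol, while a conservative 4 dB margin drops the same link to 16QAM at 4 bits per symbol – the capacity cost of playing it safe.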
Going back to the highway analogy, this would be sort of like opening the shoulder to traffic during rush hour as long as the weather conditions are clear and doing so is unlikely to cause accidents.
“How much capacity potential is unlocked? I think it’s vast,” he said. “I think that many, many links in our networks have multiple dBs of spectrum margin that converts into doubling and quadrupling capacity.”
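The dB-to-capacity conversion can be sanity-checked against the Shannon limit for an additive-noise channel – a back-of-envelope model, with illustrative SNR values rather than measured link data:

```python
import math


def shannon_bits_per_symbol(snr_db):
    """Shannon capacity of an AWGN channel, in bits per symbol."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)


# Reclaiming 3 dB of margin at a nominal 15 dB operating point:
before = shannon_bits_per_symbol(15.0)  # ~5.0 bits/symbol
after = shannon_bits_per_symbol(18.0)   # ~6.0 bits/symbol
```

Near typical operating SNRs, each 3 dB of reclaimed margin buys roughly one extra bit per symbol; at very low SNR, where capacity is nearly linear in SNR, 3 dB comes close to doubling the achievable rate outright.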
However, all of that monitoring requires a full ecosystem of smart interconnect tech that can feed into AI models and provide operators with actionable insights.
Data is what you make of it
Until recently, Lord says, the problem hasn’t been whether you can glean insights into your transceivers, but rather what you do with all that data.
“In the past we’ve had optical networks that have generated vast amounts of data on their performance, which we’ve just thrown away simply because there’s nowhere to store it,” he said. “That’s changing because of AI.”
Lord doesn’t expect service providers to get comfortable with AI-controlled networks any time soon, however. “It would be a very brave operator to hand their entire reliability of network governance and management over to an AI machine,” he said.
Instead, he sees an opportunity to use digital twins to simulate the network in real time, and experiment with new configurations in a safe way before implementing them in production.
This isn’t an easy task, he notes. “Many of the issues you have are related to imperfect bends and imperfect installation, so there are a lot of things that we have to worry about that aren’t at the application layer,” and have to be accounted for when building a digital twin.
However, once in place, this data could be used in conjunction with pattern-matching algorithms to glean insights into more than just network performance and reliability.
While reducing the spectrum margin may buy network operators time, at some point, capacity demands will once again reach a tipping point, Lord explained.
One relatively simple option would be to increase the number of fiber pairs used across each span.
“Maybe I just put lots of fibers in. Honestly this is probably a very viable solution. Fiber is cost effective. You can put in massive ducts full of 1,000 or more fibers,” Lord said.
The caveat, of course, is that more fibers require more transceivers, larger and more power-hungry equipment, and people to actually manage it all.
How much capacity you can cram into a single fiber depends heavily on the distance the data has to travel. “If you want to go a short distance, over your complete fiber spectrum, you might be able to get half a petabit. If you want to go a long way it will be much less than that,” he explained, adding that at a 30 percent year-on-year increase to traffic, those numbers will start to look really small very quickly.
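How quickly “really small” arrives is easy to estimate. Assuming, purely for illustration, a route carrying 25 Tbit/s today against a half-petabit (500 Tbit/s) short-haul ceiling:

```python
import math


def years_of_headroom(current_tbps, ceiling_tbps, growth=0.30):
    """Whole years before compound traffic growth exceeds the fiber's
    capacity ceiling. Growth defaults to the article's 30 percent/year."""
    ratio = ceiling_tbps / current_tbps
    return math.floor(math.log(ratio) / math.log(1 + growth))


# Hypothetical figures: 25 Tbit/s of traffic today, 500 Tbit/s ceiling.
headroom = years_of_headroom(25.0, 500.0)  # ~11 years
```

Even a twenty-fold headroom lasts only about a decade at 30 percent annual growth – and long-haul routes, with far lower ceilings, run out much sooner.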
One alternative is to cram more spectrum bands into the fiber itself. This, Lord says, will drastically increase the capacity of a single fiber pair at the expense of greater complexity.
“What does that mean for interconnects? Well, suddenly you’ve got a fiber coming in with five times as many wavelengths as it had before,” he said. “That’s something the interconnect community needs to get their heads around.”
PON on steroids
While this may address some of the complexity for the core network, Lord argues there are still opportunities for innovation when it comes to getting fiber to the growing number of homes and businesses.
One promising technology is transceivers like Infinera’s XR optics, which allow a 400Gbit/sec optical signal to be split passively into multiple smaller optical data streams.
“It’s very much like a super PON [passive optical network] on steroids,” Lord said in reference to a common optical technology used in consumer fiber deployments.
However, the tech isn’t without challenges, he notes. One of the bigger ones is that it blurs the separation between the physical layer and the IP layer.
According to Lord, this will require appliances to start handling much of the physical-layer processing to ensure traffic from one endpoint is routed accordingly. The benefit, however, is a substantial reduction in power consumption.
The more you can integrate and co-package these things – putting your optics right next to your electronics – the more power you’re going to save.
In the end, Lord paints a picture in which innovations in interconnect tech will be pivotal to the long-term success of the telecom industry. ®