In the world of transceivers, change is the only constant, and we are in the early days of a step-change thanks to advances in artificial intelligence.
I’ve just reached my 20-year anniversary with Finisar/II-VI/Coherent. As many of you know, Finisar (now Coherent) was a pioneer in pluggable transceivers, so much so that the name Finisar became almost synonymous with transceivers.
From telecom networks to enterprise datacenters to Web 2.0 hyperscalers, much has changed over the past two decades because of evolutionary and revolutionary changes in the key market drivers.
Today, we’re seeing another major market transition, namely the dramatic growth in artificial intelligence (AI) and machine learning (ML). These applications will define the next chapter in the optical transceiver story – a chapter we are already writing here at Coherent.
It’s an important story because transceivers are a critical if invisible part of the modern world we live in. Whether we realize it or not, most of us use the fiber optic network and transceivers on a daily basis.
One simple example is using a search engine. Have you ever thought about what happens between typing a search query and getting the results back? The average search query travels hundreds of miles over the fiber optic network to a datacenter and back. Inside the datacenter, a single search query uses hundreds of computers to retrieve an answer. These computers are networked together using optical fiber, and optical transceivers perform the vital function of converting electrical signals into optical signals and back again. So, if you ran a search today, you used the fiber optic network and, very likely, the signals ran over Coherent transceivers.
Why AI innovation demands networking innovation
Let’s start with a bit of market perspective.
I’m sure most of us have read about or used ChatGPT from OpenAI, Bard from Google, or Bing from Microsoft. In fact, OpenAI’s ChatGPT has been called the fastest-growing app in history, reaching 100 million users in only two months.
But what does this have to do with transceivers, you may ask?
AI models must first be trained on existing datasets, and modern models can contain billions of parameters. Training requires enormous compute power, distributed over tens of thousands of processors. To address these new requirements, datacenters are being fundamentally rearchitected, with server and networking equipment dedicated specifically to AI and ML.
The front end of the network (Level 1) retains the traditional architecture of spine switches connected to leaf/top-of-rack (ToR) switches. A new accelerated compute portion of the network (Level 0, or back end), consisting of AI/ML servers and accelerated compute devices, bolts onto the traditional network alongside traditional compute and storage. Optical interconnects, including transceivers, are used at all levels of this network.
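To get a feel for why this back-end buildout drives transceiver demand, here is a minimal sketch that models the two levels described above and counts optical links. It assumes a full mesh between spine and leaf switches and full connectivity between back-end switches and AI servers; the function name and all the device counts are hypothetical, chosen purely for illustration, not drawn from any real deployment.

```python
def count_transceivers(spine: int, leaf: int,
                       ai_switches: int, ai_servers: int) -> int:
    """Count transceivers in a toy two-level fabric.

    Level 1 (front end): every spine switch links to every leaf/ToR switch.
    Level 0 (back end):  every AI/ML server links to every back-end switch.
    Each optical link is terminated by a transceiver at both ends.
    """
    front_end_links = spine * leaf            # one link per spine-leaf pair
    back_end_links = ai_switches * ai_servers  # one link per switch-server pair
    return 2 * (front_end_links + back_end_links)

# Example with made-up counts: 4 spines, 16 leaves, 8 back-end switches,
# 32 AI servers.
print(count_transceivers(spine=4, leaf=16, ai_switches=8, ai_servers=32))
# prints 640
```

Even with these small, invented numbers, adding the back end multiplies the optical link count several times over, which is the point of the architecture shift: the accelerated compute fabric is transceiver-dense by design.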