Why Intel Sees Its Future In Heterogeneous Computing

In 1936, Alan Turing published a breakthrough paper describing a universal computer that could be programmed to do any task. Essentially, he argued that rather than building different machines for different tasks, a single machine, using a system of ones and zeroes, could be programmed to perform them all.

Today, we can see Turing’s vision writ large. Digital technology pervades just about everything we do, from producing documents to navigating the physical world. Although the basic technology has evolved from vacuum tubes to transistors to integrated circuits, modern computers are essentially scaled-up versions of that initial idea.

Yet even the most powerful ideas have their limits. While it is true that digital computers can perform almost any informational task, the underlying technology is approaching theoretical barriers, and we can no longer rely on it alone to power the future. At Intel, scientists are working to create a new vision in which computing is no longer universal, but heterogeneous.

Intel’s Challenge

Few companies have benefited more from Turing’s vision than Intel. In 1959, the company’s co-founder Robert Noyce helped pioneer the integrated circuit. In 1965, Gordon Moore came up with his eponymous law, which predicted that the number of transistors on a microchip would double every two years. For half a century, the company has prospered by cramming more and more transistors onto silicon wafers.

Yet every technology eventually hits theoretical limits, and that’s where Moore’s law stands today. There are physical limits to how many transistors can fit in a given space and to how fast information can move through them. We will likely hit these limits in the next 5-10 years.

As a general rule, businesses that owe their success to a single idea or technology don’t survive past that idea’s relevance. Kodak, despite what many assume, invested significant resources in digital photography, but couldn’t replace the enormous profits it made from developing film. When Xerox’s copier business was disrupted, the company lost its dominance. The list goes on.

The odds would seem to be stacked against Intel, but the company has embarked on a multi-decade plan to rise to even greater heights. The strategy rests on three basic pillars: optimizing traditional chip architectures for specific tasks, shortening the distance between chips, and inventing new computing architectures.

Optimizing Digital Architectures

While Turing proved that a universal computer can perform any calculation, that doesn’t mean it is the best or most efficient way to perform every one. Think about all the things we do with a computer today, from writing documents and preparing analyses to watching videos and playing games, and it becomes obvious that we can improve performance through specialization.

Intel has invested in two technologies that optimize chip architecture for specific tasks. The first, the application-specific integrated circuit (ASIC), is hardware-based: the chip is designed at the factory to perform a particular function, such as running an AI algorithm or mining bitcoin. That greatly increases efficiency, but obviously reduces flexibility.

The second, the field-programmable gate array (FPGA), is configured through software after it leaves the factory, which provides much greater flexibility. So, for example, at an e-commerce data center a chip can be optimized to process transactions during the day and then reprogrammed, in microseconds, to analyze marketing trends at night. Deploying ASIC and FPGA chips can improve performance by as much as 30%-50%.

In a heterogeneous computing environment, ASICs and FPGAs play very different roles. ASICs are best suited for applications with a large addressable market, which is why Google and Microsoft use them to run their AI algorithms. FPGAs are more useful for smaller-scale applications in which the economics don’t favor devoting an entire fab to their manufacture.
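
To make the distinction concrete, here is a deliberately simplified Python sketch. It is an illustration only, not Intel code or real silicon, and the class names and workloads are invented for the example: the "ASIC-like" object has its function fixed when it is built, while the "FPGA-like" object can be reconfigured between workloads, echoing the day-and-night e-commerce example above.

```python
# Toy illustration (not real hardware): contrasting a fixed-function
# "ASIC-like" unit with a reconfigurable "FPGA-like" unit.

from typing import Callable, Dict, List


class AsicLike:
    """Function is baked in at 'manufacture' time and cannot change."""

    def __init__(self, fixed_function: Callable[[List[float]], float]):
        self._fn = fixed_function  # decided once, like a mask set at the fab

    def run(self, data: List[float]) -> float:
        return self._fn(data)


class FpgaLike:
    """Function can be swapped in the field, trading efficiency for flexibility."""

    def __init__(self, bitstreams: Dict[str, Callable[[List[float]], float]]):
        self._bitstreams = bitstreams
        self._active = None

    def reconfigure(self, name: str) -> None:
        self._active = self._bitstreams[name]  # loosely analogous to loading a new bitstream

    def run(self, data: List[float]) -> float:
        return self._active(data)


def score_transactions(data: List[float]) -> float:
    return sum(x for x in data if x > 0)   # daytime role: process transactions


def summarize_trends(data: List[float]) -> float:
    return sum(data) / len(data)           # nighttime role: analyze trends


if __name__ == "__main__":
    day_data = [12.0, -3.0, 7.5]

    asic = AsicLike(score_transactions)    # one job, forever
    print("ASIC-like:", asic.run(day_data))

    fpga = FpgaLike({"transactions": score_transactions,
                     "trends": summarize_trends})
    fpga.reconfigure("transactions")       # daytime configuration
    print("FPGA-like (day):", fpga.run(day_data))
    fpga.reconfigure("trends")             # nighttime configuration
    print("FPGA-like (night):", fpga.run(day_data))
```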

Integrating The Integrated Circuit

The von Neumann architecture has long been the standard for how computers are organized. It consists of a set of chips, including a central processing unit, a control unit and memory, as well as other chips that provide long-term data storage, graphics capability and so on.

These provide a computer with full functionality, but come with a built-in problem: it takes time for information to travel from one chip to another. At first this wasn’t a big deal, but as chips have become faster, they must wait longer and longer, in terms of computing cycles, to get the information they need to do their work.

This problem, known as the von Neumann bottleneck, has stymied computer scientists for decades. Yet in January, Intel announced that it had solved the problem with its new Foveros technology, which is based on decades of research and employs a method called 3D stacking.

Essentially, 3D stacking integrates the integrated circuit. In a typical chipset, different types of chips, such as a CPU, a memory chip and a graphics chip, are laid out side by side. The company’s new Foveros technology, however, stacks chips vertically, greatly reducing the distance between them and improving overall performance.
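
A rough back-of-envelope model helps show why shortening that distance matters. The Python sketch below is purely illustrative, with made-up latencies and operation counts rather than Intel specifications; it simply estimates how much time a workload spends waiting on off-chip transfers and how the total changes when the interconnect latency shrinks.

```python
# Back-of-envelope model (illustrative numbers only, not Intel specifications):
# estimate how much of a workload's time goes to moving data between chips,
# and how that changes if the chip-to-chip latency shrinks.

def total_time_ns(ops: int, ns_per_op: float,
                  transfers: int, ns_per_transfer: float) -> float:
    """Crude model: compute time plus time spent waiting on off-chip transfers."""
    return ops * ns_per_op + transfers * ns_per_transfer


if __name__ == "__main__":
    OPS = 1_000_000          # arithmetic operations in the workload (assumed)
    NS_PER_OP = 0.3          # assumed time per operation
    TRANSFERS = 100_000      # off-chip memory accesses (assumed)
    SIDE_BY_SIDE_NS = 60.0   # assumed latency to a neighboring chip
    STACKED_NS = 10.0        # assumed latency when chips are stacked vertically

    before = total_time_ns(OPS, NS_PER_OP, TRANSFERS, SIDE_BY_SIDE_NS)
    after = total_time_ns(OPS, NS_PER_OP, TRANSFERS, STACKED_NS)

    print(f"side-by-side: {before / 1e6:.2f} ms")
    print(f"stacked:      {after / 1e6:.2f} ms")
    print(f"speedup:      {before / after:.2f}x")
```

With these assumed numbers, most of the workload's time goes to waiting on data rather than computing, which is exactly why cutting the distance between chips pays off.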

Inventing New Architectures

The third, and most ambitious, pillar is inventing completely new computing architectures. Here the company is investing in two futuristic technologies: neuromorphic and quantum computing.

Neuromorphic computing involves chips whose designs are modeled on the human brain. Because these chips do not compute sequentially like conventional chips, but are massively parallel, they are potentially thousands of times more computationally efficient for some applications and millions of times more energy efficient. Likely applications include robotics and edge computing.
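
For a sense of how differently such chips operate, here is a minimal leaky integrate-and-fire neuron, a common abstraction in neuromorphic research, written as a short Python sketch. The parameters are illustrative assumptions and this is not how Intel's hardware is actually programmed; it only shows the event-driven, spike-based style of computation involved.

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters,
# not a description of any specific neuromorphic chip).

def lif_run(spike_inputs, leak=0.9, weight=0.4, threshold=1.0):
    """Accumulate weighted input spikes, leak over time, fire on threshold."""
    potential = 0.0
    output_spikes = []
    for spike in spike_inputs:
        potential = potential * leak + weight * spike  # integrate and leak
        if potential >= threshold:                     # fire and reset
            output_spikes.append(1)
            potential = 0.0
        else:
            output_spikes.append(0)
    return output_spikes


if __name__ == "__main__":
    inputs = [1, 0, 1, 1, 0, 1, 1, 1]   # incoming spike train
    print(lif_run(inputs))              # fires only after enough spikes accumulate
```

A real neuromorphic chip runs enormous numbers of such neurons in parallel and only does work when spikes arrive, which is where the claimed efficiency gains come from.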

Quantum computing leverages quantum effects, such as superposition and entanglement, to create almost unimaginably large computing spaces that can handle enormous complexity. Potential applications include large, complex simulations, such as those of chemical and biological systems, as well as large optimization problems in areas like logistics and artificial intelligence.
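
A tiny simulation can also give a feel for superposition and entanglement. The NumPy sketch below is a classical illustration for intuition only, not a real quantum computation: a Hadamard gate puts one qubit into superposition, and a Hadamard followed by a controlled-NOT entangles two qubits into a Bell state.

```python
# Tiny state-vector sketch of superposition and entanglement using NumPy
# (a classical simulation for intuition only; real quantum hardware differs).

import numpy as np

H = (1 / np.sqrt(2)) * np.array([[1, 1],
                                 [1, -1]])   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])              # controlled-NOT gate
I2 = np.eye(2)

zero = np.array([1, 0])                      # |0> basis state

# Superposition: a Hadamard puts one qubit into an equal mix of |0> and |1>.
plus = H @ zero
print("single-qubit amplitudes:", plus)      # [0.707, 0.707]

# Entanglement: Hadamard on qubit 0, then CNOT, yields a Bell state in which
# the two qubits' measurement outcomes are perfectly correlated.
two_qubits = np.kron(zero, zero)             # |00>
bell = CNOT @ (np.kron(H, I2) @ two_qubits)
print("Bell-state amplitudes:", bell)        # [0.707, 0, 0, 0.707]

# Two qubits already need a 4-amplitude vector; n qubits need 2**n amplitudes,
# which is the enormous computing space the article refers to.
```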

Both of these technologies have the potential to power computing for decades to come, but neither is commercially viable today. Executives at Intel told me they expect that neuromorphic chips could begin to have an impact within five years, while quantum computing may take as long as 10-15 years.

Preparing For The Challenge Ahead

Decades from now, we will probably come to see the digital revolution as a quaint, simpler time. Every few years, a new generation of computer chips would come off the line that worked exactly the same as the previous generation but was better and faster, opening up entirely new possibilities. This gave entrepreneurs and product designers a high level of predictability.

For example, when Steve Jobs imagined the iPod as “a thousand songs in my pocket,” he knew it wasn’t technically feasible. But he also knew that it would be in just a few years. So he waited for a hard drive with the technical specifications he needed to come on the market and, when it did, bought the whole production run.

Yet those days are ending. “We see the future as heterogeneous computing,” Mike Davies, Director of the Neuromorphic Computing Lab at Intel told me. “In the future we’ll be using different architectures, such as quantum, neuromorphic, classical digital chips and so on, for different applications. That will enable us to fit the tool to the job that much more effectively.”

That idea has great promise, but it also presents great challenges. We will need to design systems that optimize specific operations, but are not so inflexible that we can’t scale them. With new architectures like neuromorphic and quantum, we will also need to develop new programming languages and algorithmic approaches.

That’s the challenge that heterogeneous computing presents. It will usher in a new era of technology that is far more powerful, but also far more complex than anything we’ve ever seen before.

Image credit: Pixabay


Greg Satell is a popular author, keynote speaker, and trusted adviser whose new book, Cascades: How to Create a Movement that Drives Transformational Change, will be published by McGraw-Hill in April, 2019. His previous effort, Mapping Innovation, was selected as one of the best business books of 2017. You can learn more about Greg on his website, GregSatell.com and follow him on Twitter @DigitalTonto.