Back in the 1960s, Gordon Moore, the future co-founder of Intel, made a bold prediction: every year, the number of transistors per square inch on integrated circuits would double (he later recalibrated his prediction to every two years).
For more than 40 years, what became known as Moore’s Law proved correct, and its implications have been stunning. To put the compounding in perspective, the Semiconductor Industry Association has noted that if Moore’s Law applied to an average car engine, within 20 years that car would have the horsepower of a large passenger jet. Since the early 2000s, however, the pace of progress under Moore’s Law has slowed significantly. Silicon transistors and wires can’t get much smaller, and even if they could, further shrinking no longer generates big gains in speed or efficiency.
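The arithmetic behind that analogy is simple compounding. A minimal sketch, using hypothetical round numbers rather than real chip data, shows how doubling every two years plays out over 20 years:

```python
# Illustrative arithmetic only: Moore's Law modeled as one doubling per
# two-year period. The starting count of 1,000 is a hypothetical round
# number chosen for readability, not a real transistor count.

def transistors_after(years: int, start: int = 1_000, period: int = 2) -> int:
    """Project a transistor count forward, doubling once per `period` years."""
    return start * 2 ** (years // period)

# Twenty years at one doubling per two years is 2**10 = 1,024x growth —
# the kind of compounding behind the car-engine-to-passenger-jet analogy.
print(transistors_after(20))  # 1_000 * 1_024 = 1_024_000
```

Ten doublings turn any starting point into roughly a thousand times itself, which is why even modest-sounding doubling periods produce such dramatic comparisons.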
At the same time, demand for better chips keeps rising. The volume of data we produce continues to expand exponentially, and burgeoning applications such as artificial intelligence and machine learning require tremendous computing capabilities.
Chip specialization represents one solution to the need for greater computing power. Traditionally, chips have all been the same no matter their purpose, but that’s starting to change, says Neil Thompson, assistant professor of innovation and strategy at the MIT Sloan School of Management.
“We don’t all drive the same car. We’ve got 18-wheelers, sports cars, SUVs, each designed for a specific purpose,” he says. “The same specialization is happening in chips.”
An example is the graphics processing unit. Originally designed to run programs that required complex graphics, GPUs, engineers have found, can also be used for other workloads, such as data analytics.
Still, that specialization involves trade-offs. Enormous computing infrastructures were built on the assumption that chips were interchangeable; those systems will now require a more complex hierarchy. Organizations may also need to move some computing processes to the cloud, renting capacity in cloud-based data centers equipped with chips that fit their needs.
Manufacturers are experimenting with stacking chips to increase the density of transistors in a small space. Scientists are also researching materials other than silicon. Several have proved successful in a lab setting, but it’s unclear whether those substances will be affordable or scalable, or perform at the levels users have come to expect.
Coming up with a replacement for silicon won’t be easy, says Horst Simon, deputy director of Lawrence Berkeley National Laboratory.
“Right now, the coordination isn’t there,” he says. “We need a much longer time frame and all-around support from the government and the industry to come up with the next semiconductor material.”
The end of Moore’s Law won’t affect consumers or data centers immediately. Thompson and many other experts think it will be five or 10 years before the average person experiences the impact. By that time, one or more of the proposed solutions should be taking root. Reaching the levels of computing power we need may add complexity or require different tactics, but the industry is working hard to find the next solution.