Gordon E. Moore

Gordon Moore

In April 1965 Gordon Moore, then Director of Research and Development at Fairchild Semiconductor, was asked by Electronics magazine to predict what would happen in the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits". In it he predicted that the number of components of all types in an integrated circuit would double every year. See the photo of a young Gordon Moore to the left.

Moore opened the paper with a bold statement: “The future of integrated electronics is the future of electronics itself.” While that claim seems self-evident today, in 1965 it was controversial. Many people doubted that the integrated circuit would ever fill anything more than a niche role. Although the first integrated chips were more compact than their hand-wired brethren, they cost significantly more. Only a handful of companies were making integrated circuits, and their only real customers were NASA and the U.S. military. From the trend line on the graph in his paper, Moore predicted the doubling would continue for ten years and the number of components per chip would go from about 64 to about 65,000.
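Moore's forecast is simple compound doubling, so the ten-year projection can be checked with a few lines of arithmetic. A minimal sketch in Python (the function name and starting count are illustrative, taken from the paragraph above):

```python
# Moore's 1965 forecast: the component count doubles every year.
def projected_components(start_count, start_year, year, doubling_period=1):
    """Project the component count, assuming one doubling per `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# ~64 components in 1965, doubling yearly for a decade:
print(projected_components(64, 1965, 1975))  # 64 * 2**10 = 65536.0
```

The same function with `doubling_period=2` reproduces the revised 1975 form of the Law described below.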

In 1968, Gordon Moore and Robert Noyce decided to quit Fairchild Semiconductor and create their own company, Intel Corporation. Robert Noyce typed up a one-page summary of what they wanted the new company to do. That was enough to convince San Francisco venture capitalist Art Rock to back Noyce and Moore in their new venture. Rock raised $2.5 million in less than two days by selling convertible debentures, and he became Intel's first chairman.

Gordon Moore's projections were in the ballpark, but not quite right. By 1975, Intel was making memory chips with about 32,000 components. So in 1975 he revised the forecast's doubling time to every two years and changed the metric from components of all types to transistors alone.

Along the way the prediction became the now-famous "Moore's Law", a term popularized shortly after 1975 by Caltech professor Carver Mead. Since then the Law has proven remarkably accurate, in part because it is now universally used in the semiconductor industry to set future targets for research and development and to help forecast capital expenditures.

Doubling every two years is an exponential progression, and after many, many years it has shown no signs of stopping. Today it describes a remarkable 50-year streak that has given us countless forms of computers, smartphones, personal sensors, and other devices. The impact of Moore’s Law on modern life cannot be overstated. Most modern radios, TVs, cars, medical equipment, and the like have been made possible by the seemingly never-ending miniaturization of transistors and other electrical components.

Gordon Moore is now retired and lives in a house on the beach in Hawaii. He is focused on philanthropy through the Gordon and Betty Moore Foundation.

Moore's Law In Practice

Moore's Law

Predictions of the death of Moore’s law are nearly as old as the forecast itself. Still, the law has a habit of defying the sceptics, to the great good fortune of those of us enjoying tiny, powerful consumer electronics.

Shown in the chart to the left is the transistor count in computer chips from 1971 to 2011. Despite predictions every ten years that Moore's Law would fade away or slow down, clever engineers have found ways to keep it going.

Some people see Moore's Law as having evolved in stages. In Moore’s Law 1.0, progress came by “scaling up”, i.e. adding more functions and components to a chip. The goal was simply to gobble up the discrete components surrounding the computer chip and put them into one reliable and inexpensive package. As a result, chips got bigger and more complex. The microprocessor, which emerged in the early 1970s, exemplifies this phase. Over the last few decades, however, progress in the semiconductor industry has become dominated by Moore’s Law 2.0.

Transistors

Moore’s Law 2.0 is all about “scaling down,” driving down the size and cost of transistors. It began in the early 2000s, when an unpleasant reality started to emerge. At that time, transistor sizes crept below 100 nanometers, and transistors became so small that electrons began to tunnel (leak) through them when the current was supposed to be off.

Although new materials and manufacturing techniques helped combat this problem, engineers had to abandon the long-standing practice of dramatically lowering the voltage supplied to each transistor in order to keep the tunnelling in check.

As a result, for the last decade or so, Moore’s Law has been more about cost than performance. See the chart above, which shows the total number of transistors made every year in red and their cost in black. Transistors continued to get smaller, but the point was to make them cheaper rather than faster. There have been significant design improvements overall, but a large portion of the performance gains has come from integrating two computing engines on a single chip - so-called dual cores - an approach enabled by cheaper transistors and much more sophisticated software.

Is Moore's Law Coming To An End?

Gordon Moore

In April 2005, Gordon Moore stated in an interview that the Law's exponential projections cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors would eventually reach the limits of miniaturization as they approach the size of an atom. So what are some of the constraints currently giving engineers headaches?

It is most likely that Moore's Law will continue for at least ten more years (to the year 2025). Along the way, it will probably slow down to doubling every three years instead of two. Not only are the technical problems getting harder and harder, but the expense of new fabs is staggering: Samsung is spending $15 billion on its newest facility, now underway, and IBM has joined a consortium in order to reduce its capital expenses. At some point the density of transistors (the number of transistors per unit area) will stabilize. In May 2015 Gordon Moore himself said the Law won’t last forever, but added that it would work for five or ten more years if good engineering were applied. He hoped the industry would not hit a dead end.
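The difference between two-year and three-year doubling compounds quickly. A quick hypothetical comparison over one decade (pure arithmetic; the decade and the pace are the projections discussed above, not measured data):

```python
# Growth factor after `years` under a given doubling period.
def growth_factor(years, doubling_period):
    return 2 ** (years / doubling_period)

# One decade at the historical two-year pace vs. the slower three-year pace:
print(growth_factor(10, 2))            # 2**5 = 32.0x
print(round(growth_factor(10, 3), 1))  # ~10.1x
```

A slowdown from two to three years would cut a decade's growth by roughly a factor of three.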

The author's caveat: it is extremely hard to see more than ten years into the technical future. For fifty years, many people have predicted the end of Moore's Law - and all of them have been wrong! So there is a real possibility that new technologies we cannot foresee right now will find their way into the semiconductor industry. So stay tuned.

Beyond Moore's Law

The transistor is really just a switch: it is either on or off. Silicon has been an excellent material for that switch, but we are now approaching the end of the silicon switch we have known for the past 50 years. We need a new type of switch to keep Moore's Law going. Research labs at several universities and companies are looking at alternatives for the next generation of switches. One of the most promising avenues for extending today’s chip technology is the use of compound semiconductors; scientists are exploring the idea of using indium gallium arsenide in combination with silicon. The goal is to increase the performance of the chips and decrease energy consumption without having to shrink the transistors too much.

Carbon Nanotube Wafer

Further out into the future, 2020 and beyond, carbon nanotubes (CNTs) have the potential to greatly improve the performance of transistors. With this atomic-scale material as the switch, devices could be made smaller and electrons could move 10 times faster than in conventional semiconductor materials. See the CNT wafer image to the left and the string of carbon (graphene) below.

In September 2013, researchers at Stanford University announced that they had built a working computer based entirely on carbon nanotubes. The nanotube processor was made up of 178 transistors, each of which contained between 10 and 200 carbon nanotubes.

The computer was rather slow and simple, equivalent to a 1971 Intel 4004, but it was the first hard evidence that nanotubes could be used to make computers. Read the Stanford Press Release.

Nanotube Computer

The Stanford research has demonstrated that individual carbon nanotube transistors smaller than 10 nanometers are faster and more energy efficient than those made of any other material, including silicon.

Theoretical work has also suggested that a carbon nanotube computer would be an order of magnitude more energy efficient than the best silicon computers. Further research suggests that carbon nanotube computers might run blazingly fast without heating up.

However, working with carbon nanotubes is a huge challenge. They are typically grown in a way that leaves them in a tangled mess, and about a third of the tubes turn out to be metallic rather than semiconducting, causing short circuits.

The Stanford team eliminated the metallic tubes in place: a voltage is applied to switch all of the semiconducting nanotubes on the chip "off", and then a large current is pulsed through the chip. Only the metallic tubes still conduct, so they heat up, oxidize, and disintegrate. All of the nanotube processing steps can eventually be done on the standard semiconductor equipment currently used to make silicon chips. In that sense, the process is scalable.

Information Processing (IP) Is Changing

Digital Warehouse

With the rise of cloud computing, the speed of the processor in desktop and laptop computers is no longer of central importance. The main unit of computation is not the local processor but the rack of servers in the data center. See the data center warehouse photo to the left. The question becomes not how many transistors can be squeezed onto a chip, but how many can be fit into a warehouse.

Cloud warehouses use an amazing amount of electric power, which ultimately gets transformed into nothing but hot air. Data centers consume 2% of all the electricity produced worldwide.

Warehouses full of servers used for cloud processing generate so much heat that owners are willing to give up some processing speed in order to save on power. Power consumption has become the overriding system specification.

A huge warehouse air-conditioning system can only remove a fixed amount of heat. Divide that maximum by the number of computers and you get the maximum power each computer may draw - a limit that overrides all other specs.
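That budgeting step is a single division. A hypothetical example (both figures are invented for illustration; neither comes from the article):

```python
# Hypothetical warehouse power budget: cooling capacity caps per-server power,
# since nearly every watt a server draws ends up as heat to be removed.
cooling_capacity_kw = 10_000   # heat the A/C plant can remove (invented figure)
server_count = 40_000          # machines in the warehouse (invented figure)

per_server_watts = cooling_capacity_kw * 1000 / server_count
print(per_server_watts)  # 250.0 watts per server, whatever the chip could otherwise do
```

Under these assumed numbers, no server may draw more than 250 W, regardless of how fast its processor could run at a higher power level.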

Field-Programmable Gate Arrays (FPGAs)

Catapult

Microsoft, one of the major providers of cloud-computing services, is venturing into the hardware business. In 2014 it announced a new device called Catapult that uses FPGAs, whose configurations can be reshaped at will in the field. See the photo to the left of a Catapult at the University of Texas at Austin.

"FPGAs offer a useful compromise between specialization and flexibility", says Dr Burger, who led the team that developed Catapult. "The idea is to have programmable hardware alongside programmable software." When one task is finished, an FPGA can be reconfigured for another job in less than a second.

Catapult is already in use with Bing, Microsoft’s search engine, and the company says it has doubled the number of queries a server can process in a given time. FPGAs excel when one specific algorithm has to be applied over and over to torrents of data. One idea is to use Catapult to encrypt data flowing between computers to keep it secure. Another possibility is to put it to work on voice- and image-recognition jobs for cloud-connected smartphones.

In June 2015, Intel announced it would acquire Altera, which supplied the FPGA chips for Catapult, for $16.7 billion; the acquisition was completed in December 2015.

Other Factors.  If consumers continue to favor devices like tablets, smartphones, and watches, microprocessor manufacturers will have less incentive to meet the expectations of Moore's Law. If there is only a small market for ultra-powerful processors, then given the extreme cost of new-technology fabs, we may have hit the economic barrier that finally brings an end to Moore's Law.
