Thoughts on Moore’s Law

It has now been 53 years since Gordon E. Moore published a paper called “Cramming More Components onto Integrated Circuits”. [1] In it, he stated:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly, over the short term this rate can be expected to continue, if not to increase.” [1]

On the next page, he presents a graph showing a growing trend in the number of components per integrated function. The graph is backed by just six years of history and a total of five data points. Moore could hardly have foreseen the impact his observation would have on the world, yet it became reality: over the next 50 years, CPU performance kept increasing exponentially. [2] Until recently, the business plans of billion-dollar companies like Intel followed that simple observation, making it a self-fulfilling prophecy. [3] Without getting overly philosophical, this once again shows what is possible when enough people believe it is possible.
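To get a feel for what “a factor of two per year” implies, here is a minimal back-of-the-envelope sketch in Python. The starting figure of roughly 50 components per chip in 1965 follows Moore’s paper; the exact numbers are illustrative, not a reconstruction of his data:

```python
# Back-of-the-envelope: what "doubling every year" means in practice.
# The ~50 components in 1965 follow Moore's paper; numbers are illustrative.
start_year, start_count = 1965, 50

for year in range(start_year, 1976):
    components = start_count * 2 ** (year - start_year)
    print(f"{year}: ~{components:,} components per chip")

# By 1975 this yields ~51,200 components, the same order of magnitude as
# the ~65,000 Moore extrapolated for 1975 in the original paper.
```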

All of this is so impressive because logic tells us that sustained exponential growth is something very unusual, you could even say fictitious. Beyond that, I don’t think I’m shaking up the field of future studies by saying that the exponential increase in the performance of CPUs as we know them cannot possibly last forever. And indeed, in early 2016 the big chip manufacturers announced that they would no longer be guided by Moore’s law when planning their future. [4] Heat dissipation had become a problem, and so had the fact that chip structures are supposed to shrink to 2 to 3 nm, which is only around 10 atoms across. At this scale, the Heisenberg uncertainty principle becomes a problem. [5]
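As a quick sanity check of the “around 10 atoms” figure, here is a rough calculation, assuming a silicon–silicon bond length of about 0.235 nm (a textbook value; the 2 to 3 nm feature sizes are those named above):

```python
# Rough check: how many silicon atoms fit across a 2-3 nm structure?
# Assumes a Si-Si bond length of ~0.235 nm (textbook value).
si_bond_nm = 0.235

for feature_nm in (2.0, 3.0):
    atoms = feature_nm / si_bond_nm
    print(f"{feature_nm:.0f} nm is roughly {atoms:.0f} atoms across")

# Output: 2 nm ~ 9 atoms, 3 nm ~ 13 atoms, i.e. "around 10 atoms",
# small enough that quantum effects can no longer be ignored.
```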

Moore’s Law is dead, long live Moore’s Law.

Although Moore’s Law has driven chip manufacturers in the past, no longer being able to fulfil it in the sense of continuously shrinking chip structures will not bring rapid development to a halt.

The first iPhone was painfully slow, and every new version of iOS used to render old iPhones unusable; today, computational power is no longer such an issue. Apple still shows charts of chip performance going up at every iPhone keynote, but everyday iOS applications no longer benefit noticeably from it. We no longer need to speed up every processor in every device; instead, we should look at each field of application and develop solutions suited to it.

Even if CPU performance does not keep growing at that pace, GPU performance has actually exceeded Moore’s Law in recent years, with a huge impact on the AI/ML industry. [6] And then there is quantum computing, which might become a game changer in yet other fields of application.

That said, we shouldn’t drive ourselves crazy trying to fit as many transistors as possible onto one square centimetre just to keep up with an old theory, because that alone will not benefit any of us.


[1] http://www.monolithic3d.com/uploads/6/0/5/5/6055488/gordon_moore_1965_article.pdf

[2] https://www.wired.com/2015/04/50-years-moores-law-still-pushes-tech-double

[3] http://larsjaeger.ch/?p=1200&lang=de

[4] https://www.fool.com/investing/2017/03/07/intel-corporation-stops-following-moores-law.aspx

[5] https://zukunft2050.wordpress.com/2016/05/31/das-ende-des-mooreschen-gesetzes/

[6] https://spectrum.ieee.org/view-from-the-valley/computing/hardware/move-over-moores-law-make-way-for-huangs-law
