The evolution of the CPU

The Center of Excellence: The staggering pace at which the main component of your favorite gadgets has progressed through the decades.

Since the advent of the computing age, the central processing unit (CPU) has been the star attraction of any new device unveiled to the masses. The description of that new laptop you purchased from Price-Smart or Amazon more than likely contained the words ‘Intel’ or ‘AMD’ in the very first line. From the Altair 8800, widely considered the spark of modern home computing, which boasted an Intel 8080 running at 2 MHz, to the latest and greatest iPhone 7 and Samsung Galaxy S8, the CPU has always been the most powerful and most essential component. When building a new system, the general rule of thumb is to choose the CPU first and then design the rest of the system to take advantage of every last bit of its available power. This blog post seeks to give some insight into how the mighty processor has evolved over the decades.


How it’s made

The modern processor is built from the same material as its ancestors from the 1970s: silicon. Silicon is the base material from which transistors are made, and the transistor is the fundamental building block of any processor. A transistor is best described as a ‘switch’ that is turned on and off by an electrical signal. This ‘on’ and ‘off’ behavior represents the 1s and 0s of binary, the language computers speak. Transistors can be switched on and off millions (MHz) or even billions (GHz) of times per second, and the speed at which the transistors in a circuit can be switched reliably ultimately determines the final clock speed (frequency) of the chip. Transistors can be arranged in different ways to form logic gates, and these logic gates can in turn be arranged to perform specific functions within a processor.
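
As a rough illustration of that idea (not how real hardware is designed), the sketch below models a transistor as a simple on/off switch in Python, combines two of them into a NAND gate, and then builds other gates and a tiny adder from NANDs. Real gates are analog circuits made from complementary transistor pairs, so treat this purely as a conceptual toy.

```python
# Toy model: transistors as on/off switches, and logic gates built from them.
# Purely illustrative -- real gates are CMOS circuits, not Python functions.

def nand(a: int, b: int) -> int:
    """Two 'switches' in series pull the output low only when both are on."""
    return 0 if (a and b) else 1

# NAND is functionally complete: every other gate can be arranged from it.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# A half adder -- one of the small arithmetic blocks found inside a processor.
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum, carry)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))
```

Chain enough of these gates together into adders, multiplexers and registers and you have the functional units of a CPU.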

[Image credit: kitguru.net]

The Exponential Increase in Power

Ever heard of Moore’s law? Moore’s law states that “the number of transistors in a dense integrated circuit doubles approximately every two years”. In other words, engineers are able to cram more and more of those building blocks (transistors) into the same amount of space as time goes on. More transistors mean more units available to execute instructions, which equates to a faster processor with more features. Another factor which greatly affects how quickly a processor can decode that HD video stream, or brute-force crack your neighbor’s WiFi password (we recommend asking them nicely first), is the clock speed. Early processors had clock speeds measured in kHz (thousands of on-off cycles per second), while the most modern processors operate at GHz speeds (billions of cycles per second).
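
To make the doubling concrete, here is a back-of-the-envelope projection of Moore’s law. The starting point (the Intel 8080 at roughly 4,500 transistors in 1974) is illustrative, and real chips deviate from this idealized curve, but it shows how quickly a doubling every two years compounds.

```python
# Back-of-the-envelope Moore's law projection: transistor count doubles
# roughly every two years. Starting point: the Intel 8080 (~4,500
# transistors, 1974). Real chips deviate from this idealized curve.

def projected_transistors(start_count, start_year, year, doubling_years=2.0):
    return start_count * 2 ** ((year - start_year) / doubling_years)

for year in (1974, 1984, 1994, 2004, 2014):
    print(year, f"{projected_transistors(4_500, 1974, year):,.0f}")
```

Forty years of doubling turns a few thousand transistors into billions, which is roughly where flagship chips sit today.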


Processor speed has historically been measured in instructions per second. Modern processors are so fast that their speed is measured in millions (MIPS) or even billions of instructions per second. The following graphs show how mainstream processor speeds have climbed over the decades.

[Chart: mainstream processor speeds in MIPS, 1970s and 1980s]

The rise in processor speed was fairly steady for the first three and a half decades after silicon became the main ingredient in processor construction. Then, in the mid-80s, there was a huge spike from 6 MIPS to 25 MIPS within a year. This was owed to the Texas Instruments TMS320C25 chip, which has many variants still under development and in use today, some 30 years later.

[Chart: mainstream processor speeds in MIPS, 1990s]

The 1990s started off with a bang, almost doubling the MIPS rating of the ’80s, and then increasing more than 40 times over before the decade had passed. The legendary cartridge-type Pentium III processor was responsible for this huge performance leap.

[Image: Intel Pentium III (Katmai) – 500 MHz, 512 KB L2 Cache, FSB 100 MHz. Credit: Wikipedia.org]

[Chart: mainstream processor speeds in MIPS, 2000s]

At the turn of the 21st century, processor speeds again rose sharply. With AMD and Intel both introducing multi-core desktop processors in 2005, things could only go uphill from there. The decade closed off with great fanfare with the introduction of Intel’s ‘Core’ series, and the quad-core, first-generation Intel Core i7-920 takes the top spot.

[Chart: mainstream processor speeds in MIPS, 2010s]

The current decade has also seen leaps and bounds in raw processing power, along with a broader feature set in modern CPUs. Since 2011, a discrete graphics card is no longer needed for everyday computing. Processors are now so powerful that all functions of the northbridge reside on the CPU die. The northbridge historically formed the interconnect between the graphics card, RAM and the processor, as well as connecting to the southbridge, which provides access to slower components such as PCIe x1 lanes, hard drive ports, Ethernet and legacy interconnects.


Today’s CPU Architecture

Traditional advancement in CPU architecture has meant cramming more and more transistors into a finite space, but eventually that method must be laid to rest as the maximum density for silicon is reached. At the start of the decade (2011), Intel introduced its 2nd generation of Core series processors, code-named Sandy Bridge, built on a 32 nm (nanometer) manufacturing process. At sizes much below this, conventional planar transistors become so thin that electrons can leak through a gate that is fully switched off, which threatened to put a wall on further miniaturization. Intel, however, would not be beaten, and instead designed a new 3-dimensional ‘tri-gate’ transistor that can withstand even more miniaturization, debuting it on the 22 nm 3rd generation (Ivy Bridge) in 2012. Today we’ve got production processors down to a 14 nm process and are looking forward to moving towards 10 nm.
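
For a sense of what each shrink buys, here is a rough, idealized calculation: if the area taken by a transistor scaled with the square of the feature size (real process nodes do not scale this cleanly, so the numbers are only a guide), each recent node transition would fit roughly two to two and a half times as many transistors into the same area.

```python
# Idealized density gain from a process shrink: area per transistor is
# assumed to scale with the square of the feature size. Real nodes do not
# scale this cleanly, so treat the output as a rough guide only.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

for old, new in ((32, 22), (22, 14), (14, 10)):
    gain = ideal_density_gain(old, new)
    print(f"{old} nm -> {new} nm: ~{gain:.1f}x more transistors per unit area")
```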

To combat the issue of approaching the maximum achievable density of silicon, manufacturers are now adding more and more cores to their processors. The Intel Core i7-6950X comes with 10 processor cores, while AMD’s Ryzen line has a rumored 16-core beast in the making. Advancements in multi-threaded applications and games have made high core counts a viable alternative to further miniaturization of a single core.
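
A minimal sketch of why extra cores help with well-threaded workloads: the same CPU-bound task run on one core and then split across all available cores with Python’s multiprocessing module. The workload (summing squares) and the numbers are purely illustrative.

```python
# Minimal sketch: spreading a CPU-bound task across cores with multiprocessing.
# The workload (summing squares over a range) is purely illustrative.
import multiprocessing as mp
import time

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 20_000_000
    cores = mp.cpu_count()
    chunk = n // cores
    # Split [0, n) into one chunk per core; the last chunk absorbs the remainder.
    ranges = [(i * chunk, n if i == cores - 1 else (i + 1) * chunk)
              for i in range(cores)]

    start = time.perf_counter()
    with mp.Pool(cores) as pool:
        total = sum(pool.map(sum_of_squares, ranges))
    print(f"{cores} cores: {time.perf_counter() - start:.2f}s, total={total}")

    start = time.perf_counter()
    total = sum_of_squares((0, n))
    print(f"1 core:   {time.perf_counter() - start:.2f}s, total={total}")
```

The speed-up only appears because the work divides cleanly into independent chunks, which is exactly the property that multi-threaded applications and games have been getting better at exposing.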

Clock speeds have also suffered as a side effect of extreme miniaturization. For the past five years, clock speeds on mainstream processors have been stuck at around 3.7 GHz to 4 GHz: higher clock speeds mean more energy use and therefore more heat buildup. A higher core count helps offset stagnant clock speeds, and manufacturers are also adding enhanced instruction-set extensions which greatly boost the performance of CPUs without having to modify their physical structure.
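
A rough way to see why chasing higher clocks gets expensive: dynamic power in CMOS logic scales roughly with C·V²·f, and pushing the frequency up usually also requires a higher voltage, so power (and heat) climbs much faster than clock speed. The voltages in the sketch below are made up purely for illustration.

```python
# Rough illustration of CMOS dynamic power: P is proportional to C * V^2 * f.
# Higher clocks typically also need higher voltage, so power and heat grow
# much faster than frequency. Voltages here are invented for illustration.

def relative_dynamic_power(freq_ghz, volts, base_freq_ghz=3.7, base_volts=1.10):
    return (volts ** 2 * freq_ghz) / (base_volts ** 2 * base_freq_ghz)

for f, v in ((3.7, 1.10), (4.0, 1.15), (4.5, 1.30), (5.0, 1.40)):
    print(f"{f} GHz @ {v} V -> {relative_dynamic_power(f, v):.2f}x baseline power")
```

In this toy example a 35 percent clock increase more than doubles the power draw, which is why designers prefer to spend their transistor budget on more cores and smarter instructions instead.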

The rise in demand for mobile gadgets has also played a part in shaping the path of today’s processor. Power consumption and heat production are two major factors in the mobile landscape, where raw power is not desired so much as longer battery life. Mobile devices are getting thinner and lighter, which means the processor has to adapt to a smaller battery while still being able to open 30+ Facebook tabs in Google Chrome without choking. Manufacturers are now making fully featured processors with a power consumption of only 7 W.

Intel and AMD may be the kings of the desktop and laptop world, but when it comes to tablets and smartphones they are soundly beaten by the likes of Qualcomm and ARM. These companies power handheld gaming consoles and even the latest generation of fancy self-driving cars, making them the true kings of today’s mobile world.

Who knows where processor technology will head next? I suspect that silicon may remain the build material of choice for the next 10 to 20 years, until a new material comes along that allows transistors to be miniaturized to only a few atoms wide. Maybe ARM and Qualcomm will become the new Intel and AMD, while the desktop computer ultimately fades out of everyday use, replaced by always-on VR and other fully mobile devices.


