Each new processor generation brings continuous improvements and incremental upgrades, but the industry hasn't seen a truly fundamental shift in a long time.
The move from vacuum tubes to transistors was a massive paradigm shift. The move from discrete components to integrated circuits was another. Since then, there hasn't been a change on that scale.
Yes, transistors have gotten smaller, chips have gotten faster, and efficiency has improved hundreds of times over, but these are incremental gains rather than new paradigms.
This is the fourth and final installment in our CPU design series, which has provided an overview of how computer processors are designed and manufactured.
Working from the top down, we saw how computer code is compiled into assembly language and then converted into the binary instructions a CPU can interpret.
We covered how processors are designed and how they process instructions, then looked at the structures that make up a CPU.
We saw how those structures are built and how billions of transistors work together inside a processor.
And we watched a processor take shape from raw silicon, learning the basics of semiconductors and what is actually inside a chip.
In this fourth part, because companies don't disclose their research or the details of their current designs, it's hard to know exactly what's inside the CPU in your computer. What we can do, however, is look at current research and at where the industry is heading.
One of the most well-known observations in the processor industry is Moore's Law, which states that the number of transistors on a chip doubles roughly every 18 months.
This held true for a long time, but transistors are now shrinking toward the limits of what physics allows. Without new technologies, we'll have to find other avenues of optimization for future gains.
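To get a feel for what that doubling rate implies, here's a small sketch of the exponential growth Moore's Law describes. The starting transistor count and time span are illustrative, not taken from any particular chip:

```python
def transistors(start_count, years, doubling_period_years=1.5):
    """Project a transistor count assuming a doubling every 18 months."""
    return start_count * 2 ** (years / doubling_period_years)

# Illustrative: a chip with ~1 million transistors, projected 15 years out.
# 15 years / 1.5 years-per-doubling = 10 doublings, i.e. a 1024x increase.
print(f"{transistors(1_000_000, 15):,.0f}")  # 1,024,000,000
```

Ten doublings in fifteen years turns a million transistors into roughly a billion, which is why even a modest slowdown in the doubling period has such a large long-term effect.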
One direct result of this slowdown is that companies have started increasing the number of cores, rather than the frequency, to improve performance.
This is why we see octa-core processors going mainstream rather than 10 GHz dual-core chips. Beyond adding more cores, there simply isn't much room left for growth.
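Adding cores has its own limits, though. Amdahl's law (not discussed in the original series, but the standard way to reason about parallel speedup) shows why eight cores rarely deliver eight times the performance:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a program
    can be parallelized across `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even if 90% of the work parallelizes perfectly, 8 cores fall well
# short of an 8x speedup because the serial 10% dominates.
print(round(amdahl_speedup(0.9, 8), 2))  # 4.71
```

The serial fraction of a workload caps the benefit of extra cores, which is one reason core counts in mainstream chips haven't grown without bound either.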
On a completely different front, quantum computing is a field that promises plenty of room for future growth. I'm no expert on it, and since the technology is still being developed, there aren't many real "experts" anyway.
To dispel a myth: quantum computing won't give you 1,000 fps in a game or real-time rendering or anything like that. For now, the main advantage of quantum computers is that they enable more advanced algorithms that weren't feasible before.
In a traditional computer, a transistor is either on or off, representing 0 or 1. In a quantum computer, a qubit can exist in superposition, meaning it is effectively both 0 and 1 at the same time.
With this new capability, computer scientists can develop new methods of computation, and they'll be able to solve problems we currently lack the computational power for.
It's not that quantum computers are simply faster. They're a new model of computation that will let us solve different kinds of problems.
That technology is still a decade or two away from the mainstream, so what trends are we starting to see in real processors today? There are dozens of active research areas, but I'll cover a few that, in my opinion, will have the most impact.
One growing trend affecting us is heterogeneous computing: the practice of combining several different computing elements in a single system. Most of us benefit from this in the form of a dedicated GPU in our machines.
The CPU is highly flexible and can perform a wide range of computations at reasonable speed. GPUs, by contrast, are designed specifically for graphics computations such as matrix multiplication. They're very good at this and are orders of magnitude faster than a CPU at those kinds of instructions.
By offloading certain computations from the CPU to the GPU, we can speed up the whole workload. It's easy for a programmer to speed up software by tweaking an algorithm, but optimizing the hardware is much harder.
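To see why matrix multiplication is such a good fit for offloading, look at its structure. Every output cell is an independent dot product, so a GPU can compute thousands of them in parallel while a CPU loops through them one at a time. A plain illustrative sketch of the serial version:

```python
def naive_matmul(a, b):
    """Triple-loop matrix multiply: each out[i][j] is an independent
    dot product, which is exactly what a GPU parallelizes."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for t in range(inner):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return out

result = naive_matmul([[1.0, 2.0], [3.0, 4.0]],
                      [[5.0, 6.0], [7.0, 8.0]])
print(result)  # [[19.0, 22.0], [43.0, 50.0]]
```

None of the `out[i][j]` computations depend on each other, so specialized hardware with thousands of simple execution units can do the same math far faster than a general-purpose core stepping through these loops.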
But the GPU isn't the only place where accelerators are becoming common. Most smartphones contain dozens of hardware accelerators designed to speed up very specific tasks.
This style of computing is known as a Sea of Accelerators; examples include cryptography processors, image processors, machine learning accelerators, video encoders/decoders, biometric processors, and more.