Infinitely scalable CPU
I’ve been thinking a lot about a software concept that I have never seen but that I believe could change computing significantly. I am interested in SaaS, cloud computing, and virtualization, but what really intrigues me is the infrastructure that powers these services, both the software components and the hardware components. Everyone who has built their own computer (sadly, a dying breed) has wanted to increase the performance of their machine at some point. Sure, you can put in more RAM and switch to SSD drives, but ultimately the CPU has to be swapped out, which is rarely a simple switch.
Software engineers face a similar problem. Some tasks work better on a single-core CPU. Older software, for example, is frequently not optimized for a multi-core, multi-threaded approach. This is not an insignificant portion of the market – lots of companies rely on decade-old software for critical services. Besides, single-threaded software is easier to program.
Bluntly put, sometimes it is easier and cheaper to throw hardware at a software problem. Instead of reengineering old code or trying to optimize existing production code, putting it on a faster machine is a cheaper fix.
My idea is this – what if you could magically increase the computing power of your CPU? I do not mean scaling up to 100% of a single host computer’s resources; I mean one virtual computer that can span dozens of physical machines. Keep reading; I propose a non-magic solution to this problem.
There are two widely implemented technologies that exist today and are key to my solution.
Virtualization – this is a technology that allows a physical machine to be split up into several virtual machines. Virtualization runs several concurrent instances of an operating system through a hypervisor. The hypervisor takes a virtual machine’s requests, translates the memory locations, network, IO, etc., and passes them to the physical machine. This allows fast execution of code while isolating virtual machines so they cannot interfere with each other.
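To make the translation step concrete, here is a toy sketch of the kind of address mapping a hypervisor maintains. The page table contents and page size here are invented for illustration; real hypervisors do this in hardware-assisted page tables, not Python.

```python
# Toy model of hypervisor memory translation. The mappings below are
# illustrative only, not taken from any real hypervisor.
PAGE_SIZE = 4096

# Maps a guest's "physical" page number to the real host page backing it.
guest_to_host_pages = {0: 7, 1: 3, 2: 12}

def translate(guest_addr: int) -> int:
    """Translate a guest physical address to a host physical address."""
    page, offset = divmod(guest_addr, PAGE_SIZE)
    if page not in guest_to_host_pages:
        # A real hypervisor would trap here and decide what to do.
        raise MemoryError(f"guest page {page} is not mapped")
    return guest_to_host_pages[page] * PAGE_SIZE + offset

# Guest address 4138 sits at offset 42 of guest page 1, which is backed
# by host page 3, so it resolves to 3 * 4096 + 42 = 12330.
print(translate(4096 + 42))
```

The point is that the guest runs at full speed against addresses it believes are physical, and the hypervisor quietly redirects them.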
This website is hosted on a virtual machine at Media Temple and the hypervisor is provided by Parallels. VMware, Oracle, and Microsoft also offer virtual machine solutions.
The other major technology is emulation. Emulators allow one piece of hardware to act like another. For example, I have software on my Mac (Intel Core i7 CPU) to emulate a Nintendo Entertainment System (Ricoh 2A03 CPU). The emulator takes commands that were originally written for the Ricoh CPU and translates them into instructions that my Intel CPU can understand. The result is that my computer can run code that was intended for another machine. The tradeoff for this flexibility is that there is a performance hit.
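At its core, an emulator is a fetch-decode-execute loop written in software. The sketch below interprets a made-up three-opcode instruction set; the opcodes and registers are invented for illustration, whereas a real NES emulator would decode the 2A03’s actual instruction set.

```python
# Minimal interpreter loop for an imaginary two-register CPU.
# Opcodes are invented; a real emulator implements a real chip's ISA.
def run(program):
    """Interpret a list of (opcode, operand) pairs and return register state."""
    regs = {"A": 0, "X": 0}
    for opcode, operand in program:      # fetch
        if opcode == "LDA":              # decode + execute: load immediate into A
            regs["A"] = operand
        elif opcode == "ADC":            # add immediate to A
            regs["A"] += operand
        elif opcode == "TAX":            # copy A into X
            regs["X"] = regs["A"]
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return regs

print(run([("LDA", 5), ("ADC", 3), ("TAX", None)]))  # {'A': 8, 'X': 8}
```

Every guest instruction costs several host instructions to interpret, which is exactly where the performance hit comes from.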
To the best of my knowledge, no one has combined the two technologies. Emulators provide a virtual CPU. Virtual Machines allow native code execution.
What if these two were combined to create one virtual machine that could span different hardware? The guest machine would simply “see” one CPU, much as it does under an emulator, but if the underlying hardware (“bare metal” in industry speak) were the same architecture, then no major machine-code translation would be needed.
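The core decision such a system would make can be sketched as follows. The architecture names and the two placeholder functions are hypothetical stand-ins, not a real design: when guest and host architectures match, the block runs natively as in virtualization; when they differ, it goes through an emulation-style translation first.

```python
# Hypothetical dispatch logic for the combined system. HOST_ARCH and the
# placeholder functions below are invented for illustration.
HOST_ARCH = "x86_64"

def execute(instruction_block, guest_arch):
    """Run a block of guest instructions, translating only if needed."""
    if guest_arch == HOST_ARCH:
        # Virtualization path: same architecture, no translation required.
        return run_natively(instruction_block)
    # Emulation path: rewrite the block for the host, then run it.
    return run_natively(translate_block(instruction_block, HOST_ARCH))

def run_natively(block):
    # Placeholder: a real hypervisor hands the block straight to the CPU.
    return f"executed {len(block)} instructions natively"

def translate_block(block, target_arch):
    # Placeholder: a real emulator would rewrite each instruction here.
    return block

print(execute(["add", "mov"], "x86_64"))
```

In the common case – legacy x86 software on x86 hosts – the translation step disappears entirely, which is what makes the idea plausible.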
This combination would have a huge impact on legacy enterprise software. Instead of paying software engineers to rewrite huge applications, all a company would need to do is buy a few more host servers to boost capacity. It may not be a permanent fix, but it would defer the need to update old code.
Now, my solution is not without challenges. Coordinating commands across dozens of physical processors would be difficult, for one. If a host machine went down, it would likely take down the whole guest with it. And communication between hosts would require interconnect solutions of the kind built for supercomputers, though on a much smaller scale; something like InfiniBand may suffice.
This is still a work in progress but I think there is something here…