Kevin Fogarty pooh-poohs the recent moves to fill racks with hundreds of tiny servers instead of virtual machines running in ever larger brute-force servers.
Indeed, these are two ways of doing the same thing: adding cores/processors to the rack to get more throughput per dollar or per watt. Unfortunately, Fogarty does not understand scaling.
Scaling is a topic I studied in physics 40 years ago. Consider a cube: its surface area is proportional to the square of the length of an edge, while its volume/mass is proportional to the cube of the length of an edge. Whether you add cores to a CPU or make a CPU smaller, you get more computing power in a given area of chip. By making chips smaller, however, you get far more throughput, because every chip can transmit at the same time. Adding more cores to each chip does not scale as well because bandwidth to/from the chip becomes the bottleneck. It’s like having one or four gigabit/s NICs on a server: they are the limiting factor in throughput. With microservers, you can have far more throughput in a single box than with the huge multi-core chips. No one is likely to have 128 cores on a chip any time soon, but we can easily put 1024 cores in a rack full of microservers, and they will have far more throughput.
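To make the bottleneck argument concrete, here is a little sketch with hypothetical numbers (not from any benchmark): one many-core chip shares a single network link, while each microserver brings its own link, so aggregate bandwidth grows with the number of nodes, not the number of cores.

```python
# Hypothetical illustration: aggregate network bandwidth scales with the
# number of nodes (each with its own NIC), not with cores behind one NIC.

def aggregate_bandwidth_gbps(nodes, link_gbps_per_node):
    """Total network throughput when each node has its own link."""
    return nodes * link_gbps_per_node

# One "macroserver": 16 cores, but all behind a single 1 gigabit/s NIC.
macro = aggregate_bandwidth_gbps(nodes=1, link_gbps_per_node=1)

# Sixteen microservers: one core each, each with its own 1 gigabit/s NIC.
micro = aggregate_bandwidth_gbps(nodes=16, link_gbps_per_node=1)

print(macro, micro)  # 1 vs 16: same core count, 16x the aggregate bandwidth
```

The numbers are made up, but the shape of the result is the point: the same 16 cores deliver 16 times the network throughput when each core gets its own link.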
When I design a terminal server, I always look at the bottlenecks. If your motherboard can move 20 GB/s and your CPU can move 8 GB/s, the number of CPUs is the bottleneck, not the power of each CPU. Gigabit/s NICs are pretty standard these days. If you want a powerful server, we should be looking at microservers and 10 gigabit/s NICs soon. A powerful CPU is fun/useful, but if it is spinning its wheels in a box limited in throughput, that power is a total waste. Have I mentioned throughput/watt? It’s no contest. For example, my Beast uses a 95-watt CPU but is limited to a few gigabits/s of throughput to the network. Each microserver might run on 10 watts and have a gigabit/s of throughput. The microserver has about three times as much throughput per watt.
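The throughput-per-watt arithmetic checks out. Taking the Beast’s "few gigabits/s" as 3 gigabit/s (an assumption) against a 10-watt microserver with a single gigabit/s link:

```python
# Throughput-per-watt check using the figures above. The Beast's
# "few gigabits/s" is assumed to be 3 Gbit/s; the microserver is
# assumed to push 1 Gbit/s on 10 watts.

beast_gbps, beast_watts = 3, 95
micro_gbps, micro_watts = 1, 10

beast_ratio = beast_gbps / beast_watts   # ~0.032 Gbit/s per watt
micro_ratio = micro_gbps / micro_watts   # 0.1 Gbit/s per watt

print(round(micro_ratio / beast_ratio, 1))  # ~3.2, roughly 3x
```

Even granting the big box the generous end of "a few gigabits/s", the microserver comes out around three times better per watt.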
For a database, throughput to storage may be more important than throughput to the network, in which case macroservers will continue to make sense, but there are many more file servers than database servers. Microservers make a lot of sense, and ARM will be king of them because the chips are so much smaller than x86.
Fogarty has a point that expanding virtual servers is easier thanks to Moore’s Law, but Moore’s Law works for microservers too. One can double the number of microservers in a rack with one step of Moore’s Law. In the case of a macroserver, one could swap a CPU in a socket, whereas for microservers one might need to change a small motherboard. Eventually, those tasks will be nearly identical as the microservers and their local storage are made smaller.