With Sun Microsystems announcing this week a successful Solaris boot on a 16-core UltraSPARC-family processor, it’s time to take a look at the fine print of Moore’s Law.
Gordon Moore’s surprisingly durable 1965 forecast is sometimes informally stated in terms of improving microchip price/performance ratios, which distorts its actual prediction: that the lowest per-transistor cost will be found on chips of ever-rising device count.
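To make that distinction concrete, here is a rough restatement in my own notation, not Moore’s own formula: what the 1965 paper tracks is the cost-minimizing level of integration itself, not the price of a fixed amount of performance.

```latex
% Rough restatement (my notation, not Moore's).
% C_t(n): manufacturing cost per component on a chip integrating n components,
% built with the process technology available at time t (in years).
\[
  n^{*}(t) \;=\; \arg\min_{n}\, C_t(n),
  \qquad
  n^{*}(t) \;\approx\; n^{*}(0)\cdot 2^{\,t}
\]
% i.e., the complexity at which per-component cost bottoms out is what Moore
% observed doubling roughly every year in his 1965 data.
```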
When a single-user machine could usefully employ all the capability of an optimally complex processor, this was a difference that made no difference. When a single-core Pentium N+1 ran twice as fast as a single-core Pentium N, your PC pretty much did everything twice as quickly as it did before. Peachy.
When rising complexity goes into providing multiple cores on one die, rather than more speed from any single core, the single-user machine doesn’t get a linear speedup except on highly parallelizable tasks like image processing. Last year, Intel Senior VP Pat Gelsinger told a Silicon Valley audience that Microsoft’s Bill Gates “was just in disbelief” when he grasped the imminent diversion of processor design effort from clock-speed increases to core-count growth: he quoted Gates as saying, “We can’t write software to keep up with that.”
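A hypothetical sketch of the point, not anything from Intel or Microsoft: the only workloads that ride the core-count curve are the ones that can be split into independent chunks, the way per-tile image work can. The same job run as a single thread gets nothing from the extra cores.

```python
# Illustrative only: an embarrassingly parallel, CPU-bound job (image-processing
# style) run serially and then across all available cores.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def heavy_kernel(chunk_id: int) -> float:
    # Stand-in for per-tile work: pure CPU, no shared state between chunks.
    total = 0.0
    for i in range(2_000_000):
        total += (i * chunk_id) % 7
    return total

def run_serial(chunks: int) -> float:
    start = time.perf_counter()
    for c in range(chunks):
        heavy_kernel(c)
    return time.perf_counter() - start

def run_parallel(chunks: int) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        list(pool.map(heavy_kernel, range(chunks)))
    return time.perf_counter() - start

if __name__ == "__main__":
    chunks = os.cpu_count() or 4
    print(f"serial:   {run_serial(chunks):.2f}s")
    print(f"parallel: {run_parallel(chunks):.2f}s across {chunks} cores")
    # A single-threaded program, by contrast, runs at the speed of one core
    # no matter how many cores the die provides.
```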
What this means is that shared machines, rather than single-user machines, are quickly becoming the realm where Moore’s Law price/performance gains matter most. Delivering compute-intensive functions through On Demand models, doing the computing on the back end while the front end only renders the results and handles the user interaction, is already a cost-effective choice: its economic advantage will soon be compelling, and the gap will only keep widening from here.
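As a minimal sketch of that back-end/front-end split, assuming nothing beyond Python’s standard library (the port, file name, and function names here are made up for illustration): the shared, many-core server does the heavy loop, and the client merely submits the request and renders the answer.

```python
# ondemand.py -- start the back end with "python ondemand.py server",
# then run "python ondemand.py" as the thin front end.
import sys
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def compute_intensive(n: int) -> int:
    # Stand-in for the compute-heavy work that belongs on the shared back end;
    # the result is reduced to stay within XML-RPC's 32-bit int range.
    return sum(i * i for i in range(n)) % (2**31 - 1)

def run_backend() -> None:
    # The shared machine: this is where core counts and Moore's Law pay off.
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(compute_intensive, "compute_intensive")
    server.serve_forever()

def run_frontend() -> None:
    # The single-user machine: it only asks, waits, and renders.
    backend = xmlrpc.client.ServerProxy("http://localhost:8000")
    result = backend.compute_intensive(10_000_000)
    print(f"front end renders: {result}")  # all the real work happened remotely

if __name__ == "__main__":
    run_backend() if sys.argv[1:] == ["server"] else run_frontend()
```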