It is well known that much of our data storage and computation will move into datacenters over the coming decade. Energy efficiency in datacenters is a national priority, and the memory system is one of the significant contributors to system energy: most reports place its share of overall energy in the 20-30% range. I expect memory energy to be a hot topic in the coming years, and I'm hoping that this post can serve as a guideline for those who want to exploit the energy-cost trade-off in memory system design.
For several years, the DRAM industry focused almost exclusively on the design of high-density DRAM chips. Seemingly, customers only cared to optimize cost per bit... or more precisely, purchase cost per bit. In recent times, the operating cost per bit has also become significant. In fact, DRAM industry optimizations that reduce purchase cost per bit often end up increasing the operating cost per bit. The time may finally be right to start building mainstream DRAM chips that are optimized not for density, but for energy. At least that's an argument we make in our ISCA 2010 paper, and a similar sentiment has been echoed in an IEEE Micro 2010 paper by Cooper-Balis and Jacob. If DRAM vendors were to take this route, customers might have to pay a higher purchase price for their memory chips, but they'd end up paying less for energy and cooling. The result should be a lower total cost of ownership.
However, it may take a fair bit of effort to convince memory producers and consumers that this is a worthwhile approach. Some memory vendors have apparently seen the light and begun to tout the energy efficiency of their products: Samsung's Green Memory and Samsung's new DDR4 product. While such an idea has often been scoffed at by memory designers who have spent years optimizing their designs for density, the hope is that economic realities will eventually set in.
A perfect analogy is the light bulb industry. Customers are willing to pay a higher purchase price for energy-efficient light bulbs that will hopefully save them operating costs, compared to the well-known and commoditized incandescent bulb. If the Average Joe can do the math, so can memory vendors. Surely Micron must have noticed the connection. After all, they make memory chips and lighting systems! :-)
So what is the math? Some of my collaborators have talked to DRAM insiders, and while our ideas have received some traction, we were warned: do not let your innovations add more than a 2% area overhead! That seems like an awfully small margin to play with. Here's our own math.
First, if we check out various configurations for customizable Dell servers, we can quickly compute that DRAM memory sells for roughly $50/GB today.
Second, how much energy does memory consume in a year? This varies based on several assumptions. Consider the following data points. A 2003 IBM paper describes two servers, one with 16 GB and another with 128 GB of DRAM main memory. Memory power contributes roughly 20% (318 W) of total power in the former and 40% (1,223 W) of total power in the latter, which works out to roughly 20 W/GB and 10 W/GB, respectively. If we assume that this value halves every two years (I don't have a good reference for this guesstimate, but it seems to agree with some other data points), we get an estimate of 1.25-2.5 W/GB today. In a 2006 talk, Laudon describes a Sun server that dissipates approximately 64 W of power for 16 GB of memory, i.e., 4 W/GB; with the same scaling, that amounts to 1 W/GB today. HP's server power calculator estimates that the power per GB for various configurations of a high-end 2009 785G5 server system ranges between 4.75 W/GB and 0.68 W/GB. Based on these data points, let us use 1 W/GB as a representative estimate for DRAM operating power.
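The scaling arithmetic above is easy to reproduce as a quick back-of-the-envelope script. To be clear, the halving-every-two-years rate is my guesstimate from the text, not a measured trend, and the function name is just for illustration:

```python
def scale_power_per_gb(watts, gb, data_year, target_year, halving_years=2):
    """Scale a historical W/GB figure to a target year, assuming power
    per GB halves every `halving_years` years (a guesstimate)."""
    w_per_gb = watts / gb
    halvings = (target_year - data_year) / halving_years
    return w_per_gb / (2 ** halvings)

# 2003 IBM servers: 318 W for 16 GB, 1,223 W for 128 GB
print(round(scale_power_per_gb(318, 16, 2003, 2009), 2))    # 2.48
print(round(scale_power_per_gb(1223, 128, 2003, 2009), 2))  # 1.19

# 2006 Sun server (Laudon): 64 W for 16 GB
print(round(scale_power_per_gb(64, 16, 2006, 2010), 2))     # 1.0
```

The three scaled estimates bracket the 1 W/GB representative figure used below.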
It is common to assume that power delivery inefficiencies and cooling overheads double the energy needs. If we further assume that energy costs $0.80 per Watt-year, we estimate that it costs $6.40 to keep one GB operational over a DIMM's four-year lifetime. Combining this with the $50/GB purchase price, if we were to halve memory energy, total cost of ownership would still drop even if the purchase price rose to $53.20/GB, a 6.4% increase. Given that cost increases more than linearly with chip area, this roughly translates to an allowed chip area overhead of 5%.
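The breakeven math can be written out explicitly. All the constants are the assumptions stated above (1 W/GB, a 2x multiplier for power delivery and cooling, $0.80/Watt-year, a four-year lifetime), and the function name is hypothetical:

```python
PURCHASE_COST = 50.0    # $/GB, from Dell server configurations
POWER_PER_GB = 1.0      # W/GB, representative estimate from above
OVERHEAD = 2.0          # power delivery + cooling roughly double energy
ENERGY_COST = 0.80      # $/Watt-year, assumed
LIFETIME = 4.0          # years of DIMM service, assumed

def operating_cost_per_gb(power_per_gb=POWER_PER_GB, lifetime=LIFETIME):
    """Lifetime energy cost to keep one GB operational, in dollars."""
    return power_per_gb * OVERHEAD * ENERGY_COST * lifetime

base = operating_cost_per_gb()               # $6.40 per GB over 4 years
savings = base / 2                           # halving memory energy saves $3.20
breakeven_price = PURCHASE_COST + savings    # $53.20/GB
premium_pct = 100 * savings / PURCHASE_COST  # 6.4% allowed price increase
print(base, breakeven_price, premium_pct)
```

This is where the $53.20/GB breakeven purchase price comes from.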
This brings us to the 2X-5% challenge: if memory designers set out to reduce memory energy by 2X, they must ensure that the incurred area overhead per DRAM chip is less than 5%. Alternatively stated, for a 2X memory energy reduction, the increase in purchase cost must be under 6.4%. Not a big margin, but certainly more generous than the one we were warned about. The margin may be even higher for some systems, or if some of the above assumptions were altered: for example, if the average DIMM lasted longer than 4 years (my Dell desktop is still going strong after 7.5 years). This is an approximate model, and we welcome refinements.
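The lifetime assumption is easy to probe with the same model. A sketch, reusing the assumed constants from above (1 W/GB, 2x delivery/cooling overhead, $0.80/Watt-year, $50/GB) and comparing the four-year lifetime against the 7.5-year anecdote; the function name is again just for illustration:

```python
ENERGY_COST = 0.80   # $/Watt-year, assumed
OVERHEAD = 2.0       # power delivery and cooling double the energy draw
PURCHASE = 50.0      # $/GB purchase price

def allowed_premium_pct(lifetime_years, power_per_gb=1.0):
    """Allowed % purchase-price increase if memory energy is halved."""
    lifetime_cost = power_per_gb * OVERHEAD * ENERGY_COST * lifetime_years
    return 100 * (lifetime_cost / 2) / PURCHASE

for years in (4.0, 7.5):
    print(f"{years} yr lifetime: {allowed_premium_pct(years):.1f}% premium")
```

At a 7.5-year lifetime the allowed purchase premium roughly doubles, to 12%, which is why longer-lived DIMMs make the energy-for-area trade even more attractive.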