After several months of informal meetings and ad-hoc contact, we finally visited memory manufacturer Micron in Boise, ID earlier this month. Our large group of 7 students and 2 faculty members made for a fun road trip through stretches of torrential rain and awesome homemade pie at Connor’s Cafe in Burley (I would definitely recommend you stop by if you’re ever on I-84 in Idaho!). In a 3+ hour presentation and discussion session, we covered all of the memory research happening here at our group in Utah, including both published work and current work-in-progress. We wanted feedback on the memory industry’s constraints and on the feasibility of implementing some of our ideas. Here are some of the things we heard, in no particular order:
1. Density, density, density: That density and cost/bit are everything in the memory industry is well known in memory research circles, but we really had no idea how much! We were told that on commodity parts, a 0.01% area penalty “might be considered”, 0.1% would require “substantial” performance gains, and anything over 0.5% was simply not happening, no matter what (these numbers are meant to be illustrative, of course, not precise guidelines). For more specialized parts aimed at niche markets, on the other hand, slightly larger area penalties could be considered as part of the overall value package. Takeaway: Focus on the high-end specialized segment, say the next big supercomputer, where cost is a smaller concern. The commodity segment has been squeezed to death, and you’re probably not going to get anywhere trying to change anything.
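To get a feel for why even tiny area penalties matter, here is a rough back-of-envelope sketch of how die area translates into cost/bit. All numbers (wafer cost, die size, yield) are made-up illustrative assumptions of mine, not anything Micron told us, and the model ignores edge loss and defect-density effects on yield:

```python
import math

# Hedged sketch: rough effect of a die-area penalty on DRAM cost/bit.
# Every number here is an illustrative assumption, not real industry data.

WAFER_AREA_MM2 = math.pi * 150.0**2  # 300 mm wafer, ignoring edge loss

def cost_per_gib(die_area_mm2, wafer_cost_usd, gib_per_die=1.0,
                 yield_fraction=0.9):
    """Approximate cost per GiB: wafer cost spread over good dies."""
    dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost_usd / (good_dies * gib_per_die)

base = cost_per_gib(die_area_mm2=50.0, wafer_cost_usd=1600.0)
bloated = cost_per_gib(die_area_mm2=50.0 * 1.005, wafer_cost_usd=1600.0)
print(f"a 0.5% area penalty raises cost/GiB by {100 * (bloated / base - 1):.2f}%")
```

In this simple model the cost increase tracks the area penalty one-for-one, and in a business where margins are razor thin, a guaranteed 0.5% cost hit needs a very convincing benefit on the other side of the ledger.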
2. Be afraid of the memory chip: Very afraid! This is related to the previous point. The actual array layout has been super-ultra-optimized by people who dream about this stuff, and even the slightest change you propose is likely going to mess with this beyond repair. Stay away! Also, lots of smart people care very deeply about this narrow field, and have been working on this particular aspect for a long time. Anything you think of, they’ve probably considered before (but not published!). They would be “astounded” to hear something really novel.
3. Work at the system-level: There are infinite possibilities in terms of workload-based studies, altering data placement, data migration, row-buffer management, activity throttling, etc. These are less likely to provide dramatic improvements, but are low-effort, and are more likely to have an impact in terms of actually being implemented, since they are less invasive. It is unlikely that the industry has studied all the possibilities exhaustively, and you’re much more likely to come up with something novel.
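As one concrete example of the kind of system-level study mentioned above, here is a toy sketch of measuring row-buffer hits under an open-page policy on a synthetic access trace. This is my own minimal model, not anything from the Micron discussion, and a real study would use an actual DRAM simulator and real workload traces:

```python
import random

# Hedged sketch: row-buffer hit counting under an open-page policy,
# on a synthetic trace. A toy model, not a real DRAM simulator.

def row_hits_open_page(rows):
    """Count accesses that hit the currently open row.

    Under an open-page policy, a row stays open in the row buffer
    until an access to a different row forces a precharge + activate.
    """
    hits, open_row = 0, None
    for r in rows:
        if r == open_row:
            hits += 1  # row-buffer hit: no activate needed
        open_row = r
    return hits

random.seed(0)
# Synthetic trace with spatial locality: short bursts to each row.
trace = [r for r in range(100) for _ in range(random.randint(1, 4))]
hits = row_hits_open_page(trace)
print(f"open-page hits: {hits} / {len(trace)} accesses")
```

Even a toy like this makes the point: data placement and migration schemes that lengthen those per-row bursts raise the hit rate without touching the array itself, which is exactly the kind of non-invasive change the industry might actually adopt.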
4. Build it and they will come. NOT. If you really feel there is a case to be made to radically change DRAM architecture, approach the guys that will actually buy and use the parts - the system manufacturers. If they care enough, they can ask for it, and if they’re willing to pay for it, Micron will build it. It cannot be driven from the other end, no matter what: the margins are simply too small, and the whole system is set up to incentivize maximum capacity at least cost.
5. The Universal Memory Myth: Micron has been researching PCM for over a decade now, and is only now kinda sorta ready-ish to release a product into the market. Their excitement about the maturity of the technology, and its ability to summarily replace all memory in the system, is far, far lower than that of the academic community. PCM is nowhere close to reaching the capacity and cost point of DRAM, which is, therefore, not likely to die any time soon.