Server Buying Decisions: Memory
by Johan De Gelas on December 19, 2013 10:00 AM EST
Posted in: Enterprise, Memory, IT Computing, Cloud Computing, server
Generally speaking, LRDIMMs are a lot more attractive than quad-ranked RDIMMs of the same capacity. Because the capacitive load of the memory chips degrades the signal integrity of a memory channel, both the clock speed and the number of chips per channel are limited. To make this clearer, the following table describes the relationship between DPC (DIMMs Per Channel), CPU generation (Sandy Bridge and Ivy Bridge), DIMM type, and DIMM clock speed. We based this table on the technical server manuals and recommendations of HP, Dell, and Cisco. Low-voltage DDR3 works at 1.35V; "normal" DDR3 DIMMs work at 1.5V.
| Memory type (speeds in MHz) | 2DPC (SB) | 2DPC (IVB) | 3DPC (SB) | 3DPC (IVB) |
|---|---|---|---|---|
| Dual Rank RDIMM - 1600 | 1600 | 1600 | 1066 | 1066/1333 (*) |
| Dual Rank RDIMM - 1866 | 1600 | 1866 | 1066 | 1066/1333 (*) |
| Quad Rank RDIMM - 1333 | 1333 | 1333 | N/A | N/A |
| LRDIMM - 1866 | 1600 | 1866 | 1333 | 1333 |
| LV 16GB RDIMM - 1333 (1.35V) | 1333 | 1333 | N/A | N/A |
| LV 16GB LRDIMM - 1600 (1.35V) | 1600 | 1600 | 1333 | 1333 |
(*) Some servers support 1333 MHz, others limit speed to 1066 MHz
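For readers who script their capacity planning, the table above maps naturally onto a lookup. Below is a minimal Python sketch of that idea; the data is taken from the table (the low-voltage rows are omitted for brevity), but the structure and function names are purely illustrative, not part of any vendor tool.

```python
# Max DIMM clock (MHz) per (DIMM type, CPU generation, DIMMs per channel).
# None means the configuration is not supported; a tuple means the speed
# varies by server vendor (see footnote (*) above).
MAX_DIMM_SPEED = {
    ("DR RDIMM-1600", "SB", 2): 1600, ("DR RDIMM-1600", "IVB", 2): 1600,
    ("DR RDIMM-1600", "SB", 3): 1066, ("DR RDIMM-1600", "IVB", 3): (1066, 1333),
    ("DR RDIMM-1866", "SB", 2): 1600, ("DR RDIMM-1866", "IVB", 2): 1866,
    ("DR RDIMM-1866", "SB", 3): 1066, ("DR RDIMM-1866", "IVB", 3): (1066, 1333),
    ("QR RDIMM-1333", "SB", 2): 1333, ("QR RDIMM-1333", "IVB", 2): 1333,
    ("QR RDIMM-1333", "SB", 3): None, ("QR RDIMM-1333", "IVB", 3): None,
    ("LRDIMM-1866",   "SB", 2): 1600, ("LRDIMM-1866",   "IVB", 2): 1866,
    ("LRDIMM-1866",   "SB", 3): 1333, ("LRDIMM-1866",   "IVB", 3): 1333,
}

def max_speed(dimm: str, cpu: str, dpc: int):
    """Return the max supported clock in MHz, a (low, high) tuple when it
    depends on the server vendor, or None if the config is unsupported."""
    return MAX_DIMM_SPEED.get((dimm, cpu, dpc))

print(max_speed("LRDIMM-1866", "IVB", 3))    # 1333
print(max_speed("QR RDIMM-1333", "IVB", 3))  # None: quad rank tops out at 2DPC
```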
The new Ivy Bridge CPU supports 1866 MHz DIMMs, both LRDIMMs and RDIMMs, at up to 2DPC. The load-reduced DIMMs support up to 3DPC at 1333 MHz, whereas in most servers RDIMMs are limited to 1066 MHz at 3DPC. However, the main advantage of LRDIMMs is still capacity: you get twice as much capacity at 1866 MHz. Dual-ranked RDIMMs are limited to 16GB, while LRDIMMs support 32GB with the same load. 64GB LRDIMMs are now available, but currently (Q4 2013) few servers seem to support them. Notice also that only LRDIMMs support low-voltage (1.35V) operation at 3DPC.
The quad-ranked 32GB RDIMMs support only 2DPC and are limited to 1333 MHz. With 40% higher clocks at 2DPC for the same capacity, and 50% more capacity at 3DPC in your server, the LRDIMMs are simply a vastly superior offering at the same cost. So we can safely forget about quad-ranked RDIMMs.
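To put numbers on that claim, here is a back-of-the-envelope sketch. It assumes a hypothetical dual-socket Ivy Bridge server with four memory channels per CPU; the helper function is our illustration, not a sizing tool.

```python
# Compare 32GB quad-rank RDIMMs vs 32GB LRDIMMs on a dual-socket
# Ivy Bridge box: 2 sockets x 4 channels = 8 memory channels total.
CHANNELS = 2 * 4

def config(dimm_gb: int, dpc: int, mhz: int) -> dict:
    """Total capacity and clock for one uniform DIMM population."""
    return {"capacity_GB": CHANNELS * dpc * dimm_gb, "speed_MHz": mhz}

qr_rdimm = config(32, dpc=2, mhz=1333)  # quad rank: 2DPC max, 1333 MHz
lrdimm_2 = config(32, dpc=2, mhz=1866)  # LRDIMM at 2DPC (Ivy Bridge)
lrdimm_3 = config(32, dpc=3, mhz=1333)  # LRDIMM at 3DPC

print(qr_rdimm)  # {'capacity_GB': 512, 'speed_MHz': 1333}
print(lrdimm_2)  # {'capacity_GB': 512, 'speed_MHz': 1866} -> same RAM, faster
print(lrdimm_3)  # {'capacity_GB': 768, 'speed_MHz': 1333} -> 50% more RAM
print(f"LRDIMM 2DPC clock vs quad rank: {1866 / 1333:.0%}")  # ~140%
```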
Comments
slideruler - Thursday, December 19, 2013 - link
Am I the only one who's concerned about DDR4 in our future? Given that it's one-to-one, we'll lose the ability to stuff our motherboards with cheap sticks to get to a "reasonable" (>=128GB) amount of RAM... :(
just4U - Thursday, December 19, 2013 - link
You really shouldn't need more than 640kb.... :D

just4U - Thursday, December 19, 2013 - link
Seriously though... DDR3 prices have been going up. As near as I can tell, they're approximately 2.3X the cost of what they once were. Memory makers are doing the semi-happy dance these days and likely looking forward to the 5x pricing schemes of yesteryear.

MrSpadge - Friday, December 20, 2013 - link
They have to come up with something better than "1 DIMM per channel using the same amount of memory controllers" for servers.

theUsualBlah - Thursday, December 19, 2013 - link
The -Ofast flag for Open64 will relax ANSI and IEEE rules for calculations, whereas the GCC flags won't do that. Maybe that's the reason Open64 is faster.
JohanAnandtech - Friday, December 20, 2013 - link
Interesting comment. I ran with gcc, then opencc with -O2, -O3, and -Ofast. If the gcc binary is 100%, I get 110% with opencc (-O2), 130% (-O3), and the same with -Ofast.

theUsualBlah - Friday, December 20, 2013 - link
Hmm, that's very interesting. I am guessing Open64 might be producing better code (at least) when it comes to memory operations. I gave up on Open64 a while back; maybe I should try it out again.

Thanks!
GarethMojo - Friday, December 20, 2013 - link
The article is interesting, but alone it doesn't justify the expense of high-capacity LRDIMMs in a server. As server professionals, our goal is usually to maximise performance / cost for a specific role. In this example, I can't imagine that better performance (at a dramatically lower cost) would not be obtained by upgrading the storage pool instead. I'd love to see a comparison of increasing memory sizes vs. adding more SSD caching, or combinations thereof.

JlHADJOE - Friday, December 20, 2013 - link
Depends on the size of your data set as well, I'd guess, and whether or not you can fit the entire thing in memory. If you can, and considering RAM is still orders of magnitude faster than SSDs, I imagine memory still wins out in terms of overall performance. Too large to fit in a reasonable amount of RAM and, yes, SSD caching would possibly be more cost effective.
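As a rough illustration of the latency gap behind this point, a quick sketch; the nanosecond figures are ballpark assumptions for typical 2013-era hardware, not measurements from the article.

```python
# Ballpark random-read latencies (assumed, order-of-magnitude only).
DRAM_READ_NS = 100       # ~100 ns for a DRAM access
SSD_READ_NS = 100_000    # ~100 us for an SSD random read

print(f"RAM is roughly {SSD_READ_NS / DRAM_READ_NS:,.0f}x lower latency "
      "than an SSD random read")  # ~1,000x
```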
MrSpadge - Friday, December 20, 2013 - link
One could argue that the storage optimization would be done for both memory configurations.