
Engineering white paper, 2nd Edition

Configuring and using DDR3 memory with HP ProLiant Gen8 Servers

Best Practice Guidelines for ProLiant servers with Intel® Xeon® processors

Table of contents

Introduction
Overview of DDR3 memory technology
  Basics of DDR3 memory technology
  Basics of DIMMs
  DDR3 DIMM types
  HP SmartMemory
  HP Advanced Memory Error Detection
ProLiant Gen8 memory architecture for servers with Intel® Xeon® E5-2600 series processors
  Overview
  ProLiant Gen8 servers using the Intel® Xeon® E5-2600 series processors
  ProLiant Gen8 Intel® Xeon® E5-2600 series processors
ProLiant Gen8 memory architecture for servers using Intel® Xeon® E5-2400 series processors
  Overview
  ProLiant Gen8 servers using Intel® Xeon® E5-2400 series processors
  ProLiant Gen8 Intel® Xeon® E5-2400 series processors
DDR3 DIMMs for ProLiant Gen8 servers
Populating memory in ProLiant Gen8 servers
  ProLiant Gen8 memory slot configurations
  Population rules for ProLiant Gen8 servers
  DIMM Population Order
  Memory system operating speeds
  General population guidelines
Optimizing memory configurations
  Optimizing for capacity
  Optimizing for performance
  Optimizing for lowest power consumption
  Optimizing for Resiliency
Understanding unbalanced memory configurations
  Memory configurations that are unbalanced across channels
  Memory configurations that are unbalanced across processors
BIOS Settings for memory
  Controlling Memory Speed
  Setting Memory Interleave
For more information
Appendix A - Sample Configurations for 2P ProLiant Gen8 servers
  24 DIMM slot servers using Intel® Xeon® E5-2600 series processors
  16 DIMM slot servers using Intel® Xeon® E5-2600 series processors
  12 DIMM slot servers using Intel® Xeon® E5-2400 series processors


Introduction

This paper provides an overview of the new DDR3 memory and its use in the 2-socket HP ProLiant Gen8 servers using the latest Intel® Xeon® E5-2600 series processor family. With the introduction of HP ProLiant Gen8 servers, DDR3 maximum operating speed is increasing and a new type of Load Reduced DIMM (LRDIMM) is being introduced. We are also introducing HP SmartMemory, which provides superior performance over third-party memory in certain configurations.

The new 2-socket HP ProLiant Gen8 servers also feature advances in memory support. HP ProLiant Gen8 servers based on the Intel® Xeon® E5-2600 series processor family support 4 separate memory channels per CPU and up to 24 DIMM slots, allowing larger memory configurations and improved memory performance. They also incorporate HP Advanced Memory Protection technology, which improves the prediction of critical memory error conditions.

In addition to describing these improvements, this paper reviews the rules, best practices, and optimization strategies that should be used when installing DDR3 memory on HP ProLiant Gen8 servers.

Overview of DDR3 memory technology

Basics of DDR3 memory technology

DDR3, the third generation of DDR SDRAM technology, improves on DDR2 in both bandwidth and power consumption. Additional improvements in DDR3 yield up to 70% power savings versus DDR2 at the same speed, and 100% higher bandwidth than DDR2.

DDR3 Memory Technology

DDR3 DIMMs use the same 240-pin connector as DDR2 DIMMs, but the notch key is in a different position.

To increase performance and reduce power consumption, DDR3 incorporates several key enhancements:

• Standard DDR3 DIMMs operate at 1.5V, compared to 1.8V for DDR2 DIMMs. DDR3 Low Voltage DIMMs operate at 1.35V. For HP ProLiant Gen8 servers, the majority of new DDR3 DIMMs are Low Voltage. These HP SmartMemory DIMMs deliver the same performance as standard 1.5V DIMMs while using up to 20% less power.
• An 8-bit prefetch buffer stores more data before it is needed than the 4-bit buffer of DDR2.
• Fly-by topology (for the commands, addresses, control signals, and clocks) improves signal integrity by reducing the number of stubs and their length. The improved signal integrity, combined with "write leveling" technology, enables DDR3 to operate at significantly faster transfer rates than previous memory generations.
• A thermal sensor integrated on the DIMM module signals the chipset to reduce memory traffic to the DIMM if its temperature exceeds a programmable critical trip point.

DDR3 Speeds

The DDR3 specification originally defined data rates of up to 1600 megatransfers per second (MT/s), more than twice the rate of the fastest DDR2 memory speed (Table 1). ProLiant G6 and G7 servers support a maximum DDR3 DIMM speed of 1333 MT/s. ProLiant Gen8 servers use the new 1600 MT/s DDR3 DIMMs as well as 1333 MT/s DIMMs.

The DDR3 specification has been extended to define additional memory speeds of 1866 MT/s and 2133 MT/s. ProLiant Gen8 servers currently support memory speeds up to 1600 MT/s. HP engineers have designed the ProLiant Gen8 platform architecture to run at memory speeds up to 1866 MT/s once processor chipsets and DIMMs that support this speed are available.



Table 1. DDR3 memory speeds

DIMM Label | JEDEC Name | Data Transfer Rate | Maximum DIMM Throughput
PC3-14900 | DDR3-1866 | 1866 MT/s | 14.9 GB/s
PC3-12800 | DDR3-1600 | 1600 MT/s | 12.8 GB/s
PC3-10600 | DDR3-1333 | 1333 MT/s | 10.6 GB/s
PC3-8500 | DDR3-1066 | 1066 MT/s | 8.5 GB/s
PC3-6400 | DDR3-800 | 800 MT/s | 6.4 GB/s
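The throughput column follows directly from the transfer rate: each transfer moves 64 bits (8 bytes) of data, so peak throughput is simply the transfer rate times 8 bytes. A minimal sketch of that arithmetic (the function name is ours, for illustration):

```python
# Peak DIMM throughput: a DDR3 channel transfers 8 bytes (64 data bits)
# per memory transfer, so peak bandwidth is MT/s x 8 bytes.
def ddr3_peak_throughput_gbs(transfer_rate_mts: int) -> float:
    return transfer_rate_mts * 8 / 1000  # MB/s -> GB/s

for mts in (1866, 1600, 1333, 1066, 800):
    print(f"DDR3-{mts}: {ddr3_peak_throughput_gbs(mts):.1f} GB/s")
# Prints 14.9, 12.8, 10.7, 8.5, and 6.4 GB/s, matching Table 1 to rounding.
```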

Basics of DIMMs

Before exploring the new technologies in DDR3 DIMMs for ProLiant Gen8 servers, let's quickly review some of the basics of DIMM technology.

DRAM technology

DIMMs are made up of DRAM chips that are grouped together. Each DRAM chip contains arrays of individual bit storage locations. A DRAM chip with one billion storage locations is called 1 gigabit (1Gb) technology; note the lower case b in Gb. Eight 1Gb chips ganged together provide 1 gigabyte (1GB) of memory; note the upper case B in GB.

DDR3 DIMMs are currently made up of 1Gb, 2Gb, or 4Gb DRAM chips. It is not possible to mix DRAM technologies on the same DIMM. DDR3 does not support DIMMs made up of 512Mb DRAM chips.

A DRAM chip may have 4 data I/O signals or 8 data I/O signals. These are called x4 or x8, pronounced "by four" or "by eight" respectively.

Ranks

A rank is a group of DRAM chips that operate together to provide 64 bits (8 bytes) of data on the memory bus. All chips in a rank are controlled simultaneously by the same Chip Select, Address, and Command signals. DDR3 DIMMs are available in single-, dual-, and quad-rank versions (1, 2, and 4 ranks respectively).

Eight x8 DRAM chips or sixteen x4 chips form a rank. DIMMs with 8 bits of Error Correction Code (ECC) use nine x8 chips or eighteen x4 chips for each rank.
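A rank's chip count and the DRAM chip density together determine DIMM capacity. A minimal sketch of that arithmetic, with an illustrative helper name:

```python
# A rank supplies 64 data bits, so it needs eight x8 or sixteen x4 DRAMs
# (the ECC chips add check bits but no data capacity).
def dimm_capacity_gb(ranks: int, dram_width: int, chip_density_gbit: int) -> float:
    data_chips_per_rank = 64 // dram_width    # 8 for x8, 16 for x4
    total_gbit = ranks * data_chips_per_rank * chip_density_gbit
    return total_gbit / 8                     # 8 gigabits per gigabyte

print(dimm_capacity_gb(2, 4, 4))  # dual-rank x4 with 4Gb DRAMs -> 16.0 GB
print(dimm_capacity_gb(1, 8, 2))  # single-rank x8 with 2Gb DRAMs -> 2.0 GB
```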

Speed

Speed refers to the frequency of the memory clock. The memory subsystem uses a different clock than the processor cores, and the memory controllers use this clock to coordinate data transfers between the memory controller and the DIMMs. The actual speed at which this clock operates in a particular server depends on five factors, combined as sketched after this list:

• Rated memory speed of the processor. Each Intel® Xeon® processor model supports a specific maximum memory speed.
• Rated memory speed of the DIMM. DDR3 DIMMs can run at different speeds, often called frequencies. For ProLiant Gen8 servers, we offer two native speeds of DDR3 memory: DDR3-1600 and DDR3-1333.
• Number of ranks on the DIMM. Each rank on a memory channel adds one electrical load. As the electrical loads increase, the signal integrity degrades. To maintain signal integrity, the memory channel may run at a lower speed.
• Number of DIMMs populated. The number of DIMMs attached to a memory controller also affects the loading and signal integrity of the controller's circuits. To maintain signal integrity, the memory controller may operate DIMMs at lower than their rated speed. In general, the more DIMMs that are populated, the lower the operational speed of the DIMMs.
• BIOS settings. Enabling certain BIOS features can affect memory speed. For example, the ROM Based Setup Utility (RBSU) in HP ProLiant servers includes a user-selectable setting to force memory to run at a slower speed than the normally configured speed in order to reduce power consumption. See the section on BIOS settings for details.
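As a rough model, the operating clock settles at the lowest speed that all of these constraints allow. A hedged sketch (the names and arguments are ours; Table 9 later in this paper gives the exact supported combinations):

```python
# Simplified model: the memory clock is the minimum of all constraints.
def effective_memory_speed_mts(cpu_max: int, dimm_rated: int,
                               loading_limit: int, bios_cap: int = None) -> int:
    limits = [cpu_max, dimm_rated, loading_limit]
    if bios_cap is not None:          # RBSU can force a slower clock
        limits.append(bios_cap)
    return min(limits)

# An E5-2640 (1333 MT/s max) with DDR3-1600 RDIMMs at 2 DIMMs per channel:
print(effective_memory_speed_mts(1333, 1600, 1600))  # -> 1333
```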


DDR3 DIMM types

ProLiant Gen8 servers support four different DIMM types: Unbuffered with ECC Memory (UDIMMs), Registered Memory (RDIMMs), Load Reduced Memory (LRDIMMs), and HyperCloud Memory (HDIMMs). UDIMMs and RDIMMs are familiar from their use in both ProLiant G6 and G7 servers. However, LRDIMMs are a new class of DIMMs that work solely with the ProLiant Gen8 server architecture. HyperCloud DIMMs are special purpose memory available only as a Factory Installed option. Each type of memory has its unique characteristics, and the type of memory you use may depend on the application requirements for your server.

Unbuffered DIMMs

UDIMMs represent the most basic type of memory module. With UDIMMs, all address and control signals, as well as the data lines, connect directly to the memory controller across the DIMM connector. UDIMMs offer the fastest memory speeds, lowest latencies, and (relatively) low power consumption, but they are limited in capacity. Unbuffered DIMMs with ECC are identified with an E suffix in the manufacturer's module name (for example, PC3L-10600E). UDIMMs are applicable for systems needing the lowest memory latency and lowest power at relatively low memory capacities.

Registered DIMMs

Registered DIMMs (RDIMMs) lessen direct electrical loading by having a register on the DIMM to buffer the Address and Command signals between the DRAMs and the memory controller. This allows each memory channel to support up to three dual-rank DIMMs in ProLiant Gen8 servers, increasing the amount of memory that a server can support. With RDIMMs, the partial buffering slightly increases both power consumption and memory latency.

Load Reduced DIMMs

Load Reduced DIMMs are available for the first time with the ProLiant Gen8 servers. LRDIMMs use a memory buffer for all memory signals and to perform rank multiplication. The use of rank multiplication allows ProLiant Gen8 servers to support three quad-ranked DIMMs on a memory channel for the first time. You can use LRDIMMs to configure systems with the largest possible memory footprint. However, LRDIMMs use the most power and have the highest latencies at the same memory clock speeds.

HyperCloud DIMMs

HyperCloud DIMMs (HDIMMs) are a new DIMM type designed to support 3 DIMMs per channel running at 1333 MT/s. Because they use a different buffer architecture, HDIMMs will only operate at 3 DIMMs per channel and with all memory channels populated. For HP ProLiant Gen8 servers, HDIMMs are only available as a Factory Installed Option, and solely for the ProLiant DL380p and DL360p servers.

Comparing DIMM Types

Table 2 provides a quick comparison of UDIMMs, RDIMMs, LRDIMMs, and HDIMMs for ProLiant Gen8 servers using the 2P Intel architecture.

Table 2. Comparison of UDIMMs, RDIMMs, LRDIMMs and HDIMMs for ProLiant Gen8 servers

Feature | UDIMM | RDIMM | LRDIMM | HDIMM
DIMM sizes available | 2 GB, 4 GB, 8 GB | 4 GB, 8 GB, 16 GB | 32 GB | 16 GB
Low power version of DIMMs available | Yes | Yes | Yes | Yes
Advanced ECC support | Yes | Yes | Yes | Yes
Address parity | No | Yes | Yes | Yes
Rank Sparing | Yes | Yes | Yes | Yes
Lock-Step Mode | Yes | Yes | Yes | Yes
Relative cost | Lower | Higher | Highest | Higher
Maximum capacity on a server with 16 DIMM slots | 128 GB | 256 GB | 512 GB | N/A
Maximum capacity on a server with 24 DIMM slots | 128 GB | 384 GB | 768 GB | 384 GB only


HP SmartMemory

ProLiant Gen8 servers introduce HP SmartMemory technology for DDR3 memory. HP SmartMemory enables authentication of installed memory. This verifies whether DIMMs have passed our qualification and testing processes and determines if the memory has been optimized to run on HP ProLiant Gen8 servers. Use of HP SmartMemory DIMMs enables extended performance and manageability features for the 2P ProLiant Gen8 servers. HP SmartMemory supports extended performance compared to third-party memory for several DIMM types and configurations. Table 3 summarizes these performance extensions.

Table 3. Extended performance for HP SmartMemory DDR3 DIMMs in 2P ProLiant Gen8 servers

DIMM Type | 1 or 2 DIMMs per channel | 3 DIMMs per channel
1600 MT/s RDIMMs | 1600 @ 1.5V (SmartMemory); 1600 @ 1.5V (3rd party) | 1333 @ 1.5V (SmartMemory); 1066 @ 1.5V (3rd party)
1333 MT/s RDIMMs | 1333 @ 1.35V (SmartMemory); 1333 @ 1.5V (3rd party) | 1066 @ 1.35V (SmartMemory); 1066 @ 1.5V (3rd party)
1333 MT/s LRDIMMs | 1333 @ 1.35V (SmartMemory); 1333 @ 1.5V (3rd party) | 1066 @ 1.35V (SmartMemory); 1066 @ 1.5V (3rd party)
1333 MT/s UDIMMs | 1333 MT/s (SmartMemory); 1066 MT/s (3rd party) | Not supported

HP Advanced Memory Error Detection

Over the past five years, the average size of server memory configurations has increased by more than 500%. With these increased memory capacities, increases in memory errors are unavoidable. Fortunately, most memory errors are both transient and correctable. Current memory subsystems can correct up to a 4-bit memory error in the 64 bits of data that are transferred in each memory cycle.

HP Advanced Memory Error Detection technology introduces refinements to error detection technology. Instead of simply counting each correctable memory error, this new technology analyzes all correctable errors to determine which ones have a higher probability of leading to uncorrectable errors in the future. Using this advanced approach, HP Advanced Memory Error Detection is able to better monitor the memory subsystem and increase the effectiveness of the Pre-Failure Alert notification.

All ProLiant Gen8 servers feature HP Advanced Memory Error Detection. For more information on this technology, see the HP Advanced Memory Error Detection Technology brief at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02878598/c02878598.pdf

ProLiant Gen8 memory architecture for servers with Intel® Xeon® E5-2600 series processors

Overview

The DDR3 memory architecture for ProLiant Gen8 servers with E5-2600 series processors features several advancements over ProLiant G6 and G7 servers, including the following:

• An increase to 4 memory channels per processor
• A maximum memory speed of 1600 MT/s, with the capability to support up to 1866 MT/s on future processor models
• Support for HP SmartMemory, with extended performance features over 3rd party memory
• Support for LRDIMM technology, which allows three quad-ranked DIMMs per channel

Figure 1 shows a block diagram of this new memory architecture.


Figure 1. ProLiant Gen8 memory architecture for servers using the E5-2600 series processors

ProLiant Gen8 servers using the Intel® Xeon® E5-2600 series processors

As shown in Table 4, there are several models of 2P ProLiant Gen8 servers that use the Intel Xeon E5-2600 family of processors.

Table 4. 2P ProLiant Gen8 servers using E5-2600 series processors

HP ProLiant server model | Number of DIMM slots | Maximum memory
DL380p Gen8 | 24 | 768 GB
DL360p Gen8 | 24 | 768 GB
BL460c Gen8 | 16 | 512 GB
ML350p Gen8 | 24 | 768 GB
SL230s Gen8 | 16 | 512 GB
SL250s Gen8 | 16 | 512 GB
SL270s Gen8 | 16 | 512 GB

ProLiant Gen8 Intel® Xeon® E5-2600 series processors

There are a number of processor models in the Intel Xeon E5-2600 series processor family. Processor models differ in their number of cores, maximum processor frequency, amount of cache memory, and features supported (such as Intel Hyper-Threading Technology). In addition, different processor models support different maximum memory speeds. This affects the maximum performance and the power consumption of the memory subsystem and of the server in general.


Table 5. HP ProLiant Gen8 Intel Xeon E5-2600 series processor family

Processor Model Number | CPU Frequency | Level 3 Cache Size | Maximum Memory Speed | Maximum Memory Throughput (per channel)
E5-2690 | 2.90 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2680 | 2.70 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2670 | 2.60 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2667 | 2.90 GHz | 15 MB | 1600 MT/s | 12.8 GB/s
E5-2665 | 2.40 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2660 | 2.20 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2650 | 2.00 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2650L | 1.80 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2643 | 3.30 GHz | 10 MB | 1600 MT/s | 12.8 GB/s
E5-2637 | 3.00 GHz | 15 MB | 1600 MT/s | 12.8 GB/s
E5-2640 | 2.50 GHz | 15 MB | 1333 MT/s | 10.6 GB/s
E5-2630 | 2.30 GHz | 15 MB | 1333 MT/s | 10.6 GB/s
E5-2630L | 2.00 GHz | 15 MB | 1333 MT/s | 10.6 GB/s
E5-2620 | 2.00 GHz | 15 MB | 1333 MT/s | 10.6 GB/s
E5-2609 | 2.40 GHz | 10 MB | 1066 MT/s | 8.5 GB/s
E5-2603 | 1.80 GHz | 10 MB | 1066 MT/s | 8.5 GB/s

ProLiant Gen8 memory architecture for servers using Intel® Xeon® E5-2400 series processors

Overview

ProLiant Gen8 servers using the Intel E5-2400 series processors feature a memory architecture that is similar to other ProLiant Gen8 servers. This architecture supports the same DDR3 speeds as other ProLiant Gen8 servers. It also supports HP SmartMemory. However, the memory architecture for these ProLiant Gen8 servers has three memory channels per processor and supports a maximum of two DIMMs in each channel. An overview of this architecture is shown in Figure 2.


Figure 2. ProLiant Gen8 memory architecture for servers using the E5-2400 series processors

ProLiant Gen8 servers using Intel® Xeon® E5-2400 series processors

There are several models of 2P ProLiant Gen8 servers that use the Intel Xeon E5-2400 family of processors. These are shown in Table 6.

Table 6. 2P HP ProLiant Gen8 servers using Intel Xeon E5-2400 series processors

HP ProLiant server model | Number of DIMM slots | Maximum memory
DL380e Gen8 | 12 | 384 GB
DL360e Gen8 | 12 | 384 GB
BL420c Gen8 | 12 | 384 GB
ML350e Gen8 | 12 | 384 GB

ProLiant Gen8 Intel® Xeon® E5-2400 series processors

There are a number of processor models in the Intel Xeon E5-2400 series processor family. As with the E5-2600 series, these processor models differ in their number of cores, maximum processor frequency, amount of cache memory, and features supported (such as Intel Hyper-Threading Technology). Different models also support different maximum memory speeds.



Table 7. ProLiant Gen8 E5-2400 series processors

Processor Model Number | CPU Frequency | Level 3 Cache Size | Maximum Memory Speed | Maximum Memory Throughput (per channel)
E5-2450 | 2.10 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2450L | 1.80 GHz | 20 MB | 1600 MT/s | 12.8 GB/s
E5-2430 | 2.20 GHz | 15 MB | 1600 MT/s | 12.8 GB/s
E5-2420 | 1.90 GHz | 15 MB | 1600 MT/s | 12.8 GB/s
E5-2407 | 2.20 GHz | 10 MB | 1600 MT/s | 12.8 GB/s
E5-2403 | 1.80 GHz | 10 MB | 1600 MT/s | 12.8 GB/s

DDR3 DIMMs for ProLiant Gen8 servers

HP ProLiant Gen8 servers using the Intel Xeon processors support DDR3 DIMMs specified at speeds of both 1600 MT/s (PC3-12800) and 1333 MT/s (PC3-10600). Table 8 lists the DDR3 DIMMs that are qualified for Intel-based HP ProLiant Gen8 servers. HP part descriptions use codes from the JEDEC standard for specifying DIMM type and speed. Figure 3 explains how to decode the part number descriptions.

Table 8. HP DDR3 DIMMs for ProLiant Gen8 servers

Registered DIMMs (RDIMM) | HP Part Number
HP 4GB (1x4GB) Single Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage Memory Kit | 647893-B21
HP 4GB (1x4GB) Single Rank x4 PC3-12800R (DDR3-1600) Registered CAS-11 Memory Kit | 647895-B21
HP 8GB (1x8GB) Dual Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage Memory Kit | 647897-B21
HP 8GB (1x8GB) Single Rank x4 PC3-12800R (DDR3-1600) Registered CAS-11 Memory Kit | 647899-B21
HP 16GB (1x16GB) Dual Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage Memory Kit | 647901-B21
HP 16GB (1x16GB) Dual Rank x4 PC3-12800R (DDR3-1600) Registered CAS-11 Memory Kit | 672631-B21

Unbuffered with ECC DIMMs (UDIMM) | HP Part Number
HP 2GB (1x2GB) Single Rank x8 PC3L-10600E (DDR3-1333) Unbuffered CAS-9 Low Voltage Memory Kit | 647905-B21
HP 4GB (1x4GB) Dual Rank x8 PC3L-10600E (DDR3-1333) Unbuffered CAS-9 Low Voltage Memory Kit | 647907-B21
HP 8GB (1x8GB) Dual Rank x8 PC3L-10600E (DDR3-1333) Unbuffered CAS-9 Low Voltage Memory Kit | 647909-B21

Load Reduced DIMMs (LRDIMM) | HP Part Number
HP 32GB (1x32GB) Quad Rank x4 PC3L-10600L (DDR3-1333) Load Reduced CAS-9 Low Voltage Memory Kit | 647903-B21

HyperCloud DIMMs (HDIMM) | HP Part Number
HP 16GB (1x16GB) Dual Rank x4 PC3-10600H (DDR3-1333) HyperCloud CAS-9 FIO Memory Kit (Factory Install Only) | 678279-B21


Figure 3. HP DDR3 Memory Part Number Decoder
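As a sketch of the decoding that Figure 3 illustrates, the part descriptions can be parsed mechanically: "PC3" marks DDR3, an optional "L" marks a Low Voltage (1.35V) module, the number is the peak channel throughput in MB/s, and the suffix letter gives the DIMM type. The code below is illustrative only; the field names are ours:

```python
import re

DIMM_TYPES = {"E": "Unbuffered with ECC (UDIMM)",
              "R": "Registered (RDIMM)",
              "L": "Load Reduced (LRDIMM)",
              "H": "HyperCloud (HDIMM)"}

# Throughput-to-transfer-rate pairs from Table 1.
TRANSFER_RATES = {"14900": 1866, "12800": 1600, "10600": 1333,
                  "8500": 1066, "6400": 800}

def decode_module_label(label: str) -> dict:
    m = re.fullmatch(r"PC3(L?)-(\d+)([ERLH])", label)
    if not m:
        raise ValueError(f"unrecognized DDR3 module label: {label}")
    low_voltage, throughput, suffix = m.groups()
    return {"low_voltage": bool(low_voltage),
            "transfer_rate_mts": TRANSFER_RATES[throughput],
            "dimm_type": DIMM_TYPES[suffix]}

print(decode_module_label("PC3L-10600R"))
# {'low_voltage': True, 'transfer_rate_mts': 1333, 'dimm_type': 'Registered (RDIMM)'}
```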

Populating memory in ProLiant Gen8 servers

ProLiant Gen8 memory slot configurations

The ProLiant Gen8 servers feature three different memory slot configurations:

• Either 24 or 16 memory slots total for servers using E5-2600 series processors
• 12 memory slots total for servers using E5-2400 series processors

For ProLiant Gen8 servers, we recommend populating all memory channels whenever possible. This ensures the best memory performance.

Population rules for ProLiant Gen8 servers

For optimal performance and functionality, adhere to the following population rules. Violations may result in reduced memory capacity or error messages during boot.

Rules for populating processors and DIMM slots

• Install DIMMs only if the corresponding processor is installed.
• If only one processor is installed in a two-processor system, only half of the DIMM slots are available.
• To maximize performance, we recommend balancing the total memory capacity between all installed processors and loading the channels similarly whenever possible.
• When two processors are installed, balance the DIMMs across the two processors.
• White DIMM slots denote the first slot to be populated in a channel.
• When mixing DIMMs of different ranks on the same channel, place the DIMMs with the highest number of ranks in the white slot.



Rules for DIMM types

• Do not mix UDIMMs, RDIMMs, or LRDIMMs.
• Quad-rank RDIMMs are not supported in ProLiant Gen8 servers.
• LRDIMMs support up to three DIMMs per channel.
• RDIMMs rated at 1.35V and 1.5V may be mixed in any order, but the system will operate at the higher voltage.
• DIMMs of different speeds may be mixed in any order; the server will select the lowest common speed.

General rules

• The maximum memory speed is a function of the memory type, memory configuration, processor model, and settings in the ROM BIOS.
• The maximum memory capacity is a function of the memory type and the number of installed processors.
• To realize the memory performance capabilities listed in this document, HP SmartMemory is required.

DIMM Population Order

Figure 4 shows the memory slot configuration for the 24-slot 2P ProLiant DL380p Gen8 server. In this drawing, the first memory slots for each channel on each processor are the white memory slots (A, B, C, and D).

Figure 4. DIMM slots and population order for 24-slot 2P ProLiant Gen8 servers.

In general, memory population order follows the same logic for all ProLiant servers, although the processors may have a different physical arrangement relative to each other in some servers. To populate the server memory in the correct order, use the following rules (sketched in code after this list):

• When a single processor is installed in the system, install DIMMs in sequential alphabetical order: A, B, C, D, and so on.
• When 2 processors are installed in the server, install DIMMs in sequential alphabetical order: P1-A, P2-A, P1-B, P2-B, and so on.
• Within a given channel, populate DIMMs from the heaviest electrical load (dual-rank) to the lightest load (single-rank).
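A minimal sketch of the alphabetical ordering above, assuming the P1-A/P2-A slot naming shown in Figure 4:

```python
from string import ascii_uppercase

def population_order(n_cpus: int, slots_per_cpu: int) -> list:
    """Return DIMM slots in the order they should be populated."""
    letters = ascii_uppercase[:slots_per_cpu]     # A, B, C, ...
    if n_cpus == 1:
        return list(letters)
    # With two CPUs, alternate processors so capacity stays balanced.
    return [f"P{cpu}-{letter}"
            for letter in letters
            for cpu in range(1, n_cpus + 1)]

print(population_order(2, 4))
# ['P1-A', 'P2-A', 'P1-B', 'P2-B', 'P1-C', 'P2-C', 'P1-D', 'P2-D']
```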



For more information, consult the User Guide: hp.com > support & drivers > product support & troubleshooting > enter your product

Figure 5 shows the memory slot configuration for 16-slot 2P ProLiant Gen8 servers. The configuration is similar to the 24-slot servers, but 16-slot servers have only 2 DIMM slots per channel. Once again, the first memory slots for each channel on each processor are the white memory slots (A, B, C, and D). Populate the memory for 16-slot servers using the same rules as those for 24-slot servers.

Figure 5. DIMM slots and population order for 16-slot 2P ProLiant Gen8 servers.

Figure 6 shows the memory slot configuration for the 12-slot 2P ProLiant Gen8 servers that use the E5-2400 series processors. These servers have only three memory channels per processor. The first memory slots for each channel on each processor are the white memory slots (A, B, and C). Populate the memory for 12-slot servers using the same general rules as those for 24- and 16-slot servers.


Figure 6. DIMM slots and population order for 12-slot 2P ProLiant Gen8 servers.

Memory system operating speeds

All DDR3 DIMMs for ProLiant Gen8 servers operate natively at either 1600 MT/s or 1333 MT/s. However, the final operating speed of the memory system depends on the type of DIMMs you install as well as the number of DIMMs you install per channel. Larger configurations using 2 or 3 DIMMs per memory channel may operate at a slower speed than the native speed of the DIMMs. Table 9 shows the memory system operating speed based on the type and number of DIMMs you install.

Table 9. ProLiant Gen8 server memory operating speeds for different DIMM configurations

Configuration | RDIMMs Standard Voltage (1.5V) | RDIMMs Low Voltage (1.35V) | LRDIMMs Low Voltage (1.35V) | UDIMMs Low Voltage (1.35V)
1 DIMM per channel | 1600 MT/s | 1333 MT/s | 1333 MT/s | 1333 MT/s
2 DIMMs per channel | 1600 MT/s | 1333 MT/s | 1333 MT/s | 1333 MT/s
3 DIMMs per channel | 1333 MT/s (with RBSU option) | 1066 MT/s | 1066 MT/s | Not supported

Mixing DIMM speeds in a configuration is allowed. However, the following rules apply (see the sketch below):

• The system processor speed rules always override the DIMM capabilities.
• When mixing DIMMs of different speeds, the memory system will use the clock rate of the slowest DIMM in the server.
• Both processors will operate at the same memory clock rate.
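A hedged sketch that encodes Table 9 and these mixing rules (the key names are ours, and the table assumes HP SmartMemory as noted above):

```python
# (DIMM type, DIMMs per channel) -> operating speed in MT/s, per Table 9.
TABLE_9_MTS = {
    ("RDIMM 1.5V", 1): 1600, ("RDIMM 1.5V", 2): 1600,
    ("RDIMM 1.5V", 3): 1333,                     # requires the RBSU option
    ("RDIMM 1.35V", 1): 1333, ("RDIMM 1.35V", 2): 1333, ("RDIMM 1.35V", 3): 1066,
    ("LRDIMM 1.35V", 1): 1333, ("LRDIMM 1.35V", 2): 1333, ("LRDIMM 1.35V", 3): 1066,
    ("UDIMM 1.35V", 1): 1333, ("UDIMM 1.35V", 2): 1333,  # 3 DPC not supported
}

def system_memory_speed(channel_configs, cpu_max_mts: int) -> int:
    """channel_configs: (dimm_type, dimms_per_channel) for each populated
    channel. The whole system runs at the slowest resulting clock, and the
    processor's rated memory speed always caps the result."""
    slowest = min(TABLE_9_MTS[cfg] for cfg in channel_configs)
    return min(slowest, cpu_max_mts)

# Mixing 1600-capable and 1333-capable RDIMMs on a 1600 MT/s processor:
print(system_memory_speed([("RDIMM 1.5V", 2), ("RDIMM 1.35V", 2)], 1600))  # 1333
```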

General population guidelines

When configuring a 2P ProLiant Gen8 server, you can achieve a good balance between performance, power usage, and cost by following these general guidelines:

• Populate all channels of each processor. (For 2-processor systems, this means populating in groups of 6 or 8 identical DIMMs.)
• Use the same HP SmartMemory part number in each memory channel.

Optimizing memory configurations

By taking advantage of the different DIMM types, sizes, and speeds available for HP ProLiant Gen8 servers, you can optimize the server memory configuration to meet different application or datacenter requirements.

Optimizing for capacity

You can maximize memory capacity on ProLiant Gen8 servers using the new 32 GB LRDIMMs. With LRDIMMs you can install up to three quad-ranked DIMMs in a memory channel, which was not possible with earlier ProLiant G6 or G7 servers. On 24-slot servers, you can configure the system with up to 768 GB of total memory.

Table 10 shows the maximum memory capacities for ProLiant Gen8 servers using each of the three DIMM types.

Table 10. Maximum memory capacities for 2P ProLiant Gen8 servers using different DIMM types

Number of DIMM Slots | DIMM Type | Maximum Capacity | Configuration
24 | UDIMM | 128 GB | 16 x 8GB 2R
24 | RDIMM | 384 GB | 24 x 16GB 2R
24 | LRDIMM | 768 GB | 24 x 32GB 4R
16 | UDIMM | 128 GB | 16 x 8GB 2R
16 | RDIMM | 256 GB | 16 x 16GB 2R
16 | LRDIMM | 512 GB | 16 x 32GB 4R
12 | UDIMM | 96 GB | 12 x 8GB 2R
12 | RDIMM | 192 GB | 12 x 16GB 2R
12 | LRDIMM | 384 GB | 12 x 32GB 4R

Optimizing for performance

The two primary measurements of memory subsystem performance are throughput and latency. Latency is a measure of the time it takes for the memory subsystem to begin delivering data to the processor core after the processor makes a request. Throughput measures the total amount of data that the memory subsystem can transfer to the system processor(s) during a given period.

Factors influencing latency

Unloaded and loaded latencies are measures of the efficiency of the memory subsystem in a server. Memory latency in servers is usually measured from the time of a read request in the core of a processor until the data is supplied to that core. This is also called load-to-use latency. Unloaded latency measures the latency when the system is idle and represents the lowest latency that the system can achieve for memory requests for a given processor/memory combination. Loaded latency is the latency when the memory subsystem is saturated with memory requests. Loaded latency will always be greater than unloaded latency.

A number of factors influence memory latency in a system:

• DIMM speed. Faster DIMM speeds deliver lower latency, particularly loaded latency. Under loaded conditions, the primary contributor to latency is the time memory requests spend in a queue waiting to be executed. The faster the DIMM speed, the more quickly the memory controller can process the queued commands. For example, memory running at 1600 MT/s has about 20% lower loaded latency than memory running at 1333 MT/s.
• Ranks. For the same memory speed and DIMM type, more ranks will result in lower loaded latency. More ranks give the memory controller a greater capability to parallelize the processing of memory requests. This results in shorter request queues and therefore lower latency.



• CAS latency. CAS (Column Address Strobe) latency represents the basic DRAM response time. It is specified as the number of clock cycles (e.g., 6, 7, 11) that the controller must wait after asserting the Column Address signal before data is available on the bus. CAS latency plays a larger role in determining the unloaded latency than the loaded latency (see the worked example below).
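Because the DDR3 I/O clock runs at half the transfer rate, CAS latency converts to nanoseconds as CL × 2000 / (MT/s). A quick worked example:

```python
# One DDR3 clock period is 2000 / (transfer rate in MT/s) nanoseconds,
# since two transfers occur per clock (double data rate).
def cas_latency_ns(cas_cycles: int, transfer_rate_mts: int) -> float:
    return cas_cycles * 2000 / transfer_rate_mts

print(cas_latency_ns(9, 1333))    # CAS-9  at 1333 MT/s -> ~13.5 ns
print(cas_latency_ns(11, 1600))   # CAS-11 at 1600 MT/s -> 13.75 ns
```

Note that CAS-11 at 1600 MT/s and CAS-9 at 1333 MT/s come out nearly identical in absolute time, which is consistent with the near-constant idle latencies shown in Figure 7.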

Figure 7 shows both unloaded and loaded latency numbers for various DDR3 DIMMs when used in one DIMM per channel configurations. As this chart illustrates, the idle latency is almost the same for every DIMM type and capacity. This is because the primary component of idle latency is the memory system overhead of performing a basic memory read or write operation, which is the same for all DIMM types regardless of their speed.

Figure 7. Idle and loaded latencies (in ns) for various DDR3 DIMMs on 2P HP ProLiant Gen8 servers, by DIMM type and capacity at 1 DIMM per channel.

Factors influencing memory throughput

Factors affecting memory throughput include the number of memory channels populated, the number of ranks on the channel, channel interleaving, and the speed at which the memory runs.

Number of memory channels and throughput

The largest impact on throughput is the number of memory channels populated. By interleaving memory access across multiple memory channels, the integrated memory controllers are able to increase memory throughput significantly. Optimal throughput and latency are achieved when all channels of each installed CPU are populated identically.

As Figure 8 shows, adding a second DIMM to the system (and thus populating the second memory channel) essentially doubles system read throughput. Gains in throughput for each additional DIMM installed are almost linear until all eight memory channels are populated.



Figure 8. Maximum system read throughput (GB/s) with 1, 2, 4, 6, and 8 channels populated in a 2-processor system (16 GB 2R DDR3-1600 at 1 DIMM per channel).

Memory speed and throughput

Higher memory speeds increase throughput. Using a one DIMM per channel configuration, Figure 9 shows that system memory throughput at 1333 MT/s is 20% higher than at 1066 MT/s. Throughput at 1600 MT/s is 17% higher than at 1333 MT/s.



Figure 9. Memory throughput (GB/s) at different DIMM speeds, using 8 x 16GB 2R 1600 RDIMMs at 1 DIMM per channel.

Number of DIMMs per channel and throughput

Figure 10 shows the measured memory throughput for several one and two DIMM per channel configurations. Throughput actually decreases when a second DIMM is added to each channel. With 2 DIMMs per channel installed, the memory controllers use more of the command bus bandwidth issuing refresh cycles to the additional ranks on each channel. This reduces the command bus bandwidth that is available to issue read and write requests and thus causes a reduction in overall throughput. Application workloads that are more sensitive to memory capacity than throughput will still benefit from the second DIMM on each channel.



Figure 10. Throughput (GB/s) by DIMM type at 1 and 2 DIMMs per channel (2R 1333 UDIMM, 2R 1333 RDIMM, 2R 1600 RDIMM).


Throughput benefits of two DIMMs per channel at 1333 MT/s

Although Intel's design supports two UDIMMs per channel running at 1066 MT/s, HP has engineered ProLiant Gen8 servers to reliably operate at 1333 MT/s with 2 DIMMs per channel when using HP SmartMemory. Enabling 1333 MT/s operation for two UDIMMs per channel increases throughput by about 22% and decreases loaded latency by 34% compared to two DIMMs per channel operating at 1066 MT/s.

Table 11. Increased throughput with two 4GB 2R LV UDIMMs per channel at 1333 MT/s versus 1066 MT/s

Metric | 2 DPC @ 1066 MT/s | 2 DPC @ 1333 MT/s | Delta
Throughput (GB/s) | 59.55 | 72.30 | 22% higher
Idle Latency (ns) | 65.34 | 64.97 | 0.5% lower
Loaded Latency (ns) | 196.40 | 129.20 | 34% lower
Idle Power (16 DIMMs) (W) | 2.40 | 2.40 | 0%
Loaded Power (16 DIMMs) (W) | 30.93 | 39.57 | 28% higher

Power benefits of multiple DIMMs per channel at 1.35V

HP has also engineered its DDR3 memory to operate at lower voltage than standard industry memory. HP SmartMemory RDIMMs can operate at 1.35V at three DIMMs per channel (3 DPC) at 1066 MT/s. Standard RDIMMs must operate at 1.5V at three DIMMs per channel. As shown in Table 12, 3 DPC operation at 1.35V saves almost 20 watts of power in a fully configured 24-slot server.

Table 12. Lower system power consumption with three 8 GB LV RDIMMs per channel at 1.35V vs. 1.5V operation in a 24-slot HP ProLiant Gen8 server

Metric | 3 DPC @ 1.5V | 3 DPC @ 1.35V | Delta
Throughput (GB/s) | 60.5 | 60.5 | 0%
Idle Power (W) | 16.6 | 15.5 | 7% lower
Loaded Power (W) | 98.6 | 78.8 | 20% lower



HP 1333 MT/s SmartMemory LRDIMMs are capable of operating at 1.35V at one and two DIMMs per channel, and at 1066 MT/s at three DIMMs per channel. Standard LRDIMMs require 1.5V operation to maintain 1333 MT/s speed at one and two DIMMs per channel. As Table 13 shows, using HP SmartMemory saves about 20% on power consumption while providing the same performance as standard DIMMs in 2 DIMM per channel configurations.

Table 13. Lower power consumption with two 32 GB LRDIMMs per channel at 1.35V vs. 1.5V operation

Metric | 2 DPC @ 1.5V | 2 DPC @ 1.35V | Delta
Throughput (GB/s) | 68.07 | 68.11 | 0%
Idle Power (W) | 38.4 | 35.32 | 9% lower
Loaded Power (W) | 139.40 | 110.8 | 20% lower

Mixing DIMM sizes

There are no performance implications for mixing sets of different-capacity DIMMs at the same operating speed. For example, latency and throughput will not be negatively impacted by installing 8 x 4GB single-rank DDR3-1333 DIMMs (one per channel) plus 8 x 8GB dual-rank DDR3-1333 DIMMs (one per channel).

General guidelines

For optimal throughput and latency, populate all four channels of each installed CPU identically.

Optimizing for lowest power consumption

Several factors determine the power that a DIMM consumes in a system. These include the DIMM technology used as well as the DIMM's capacity, its number of ranks, and its operating speed. Let's take a quick look at each of these to see how they affect power consumption.

DIMM types and power consumption

Because they do not use any buffering, UDIMMs are the lowest power consuming DIMM type. As Figure 11 shows, 4GB UDIMMs consume about 35% less power than the comparable RDIMM. In general, larger capacity DIMMs, which power multiple ranks of DRAMs, consume more power. However, on a per-gigabyte basis they are more efficient. A 32GB LRDIMM consumes 9 watts under load, but this is about one-half the power per GB of an 8 GB RDIMM.


Figure 11. Power (W) by DIMM capacity: loaded and idle power for each DIMM type and capacity supported on HP ProLiant Gen8 servers.

Memory speed and power consumption

As you would expect, DIMMs running at higher speeds consume more power than the same DIMMs running at a lower speed. Memory operating at 1600 MT/s consumes about 30% more power under loaded conditions than the same memory running at 1333 MT/s.


Figure 12. Total memory power consumption (idle and loaded) by memory speed, using 8 x 8GB DIMMs installed at 1 DIMM per channel.

General guidelines when optimizing for power consumption

When optimizing for lowest power consumption, use the following general rules:

• If you can meet your memory size requirements with them, use UDIMMs instead of RDIMMs. With their additional memory channels, you can configure 2P ProLiant 24-slot Gen8 servers with as much as 128 GB using UDIMMs.
• Use the smallest number of DIMMs possible by using the highest capacity DIMM available.
• For additional power savings with any memory configuration, you can run memory at the slowest speed possible. With HP ProLiant Gen8 servers, this is 800 MT/s.

Optimizing for Resiliency

DDR3 DIMMs may be constructed using either 4-bit wide (x4) or 8-bit wide (x8) DRAM chips. Current ECC algorithms used in the memory controllers are capable of detecting and correcting memory errors up to 4 bits wide. For DIMMs constructed using x4 DRAMs, this means that an entire DRAM chip on the memory module can fail without causing a failure of the module itself. DIMMs constructed using x8 DRAMs cannot tolerate the failure of a DRAM chip: the ECC algorithm can detect the failure, but it cannot correct it. As a result, systems configured with DIMMs using x4 DRAMs are safer from potential memory failures than those using memory consisting of x8 DRAMs. While all UDIMMs are made with x8 DRAMs, RDIMMs and LRDIMMs may be constructed with x4 or x8 DRAMs. To provide the highest levels of availability and resiliency for ProLiant Gen8 servers, all HP SmartMemory RDIMMs and LRDIMMs use only x4 DRAMs.

You can increase the resiliency of servers with UDIMMs by selecting Lock-Step Mode through the ROM-based Setup Utility (RBSU). This allows the system to survive an entire x8 DRAM failure, but reduces the memory bandwidth of the server by 50%.



Understanding unbalanced memory configurations

Unbalanced memory configurations are those in which the installed memory is not distributed evenly across the memory channels and/or the processors. HP discourages unbalanced configurations because they will always have lower performance than similar balanced configurations. There are two types of unbalanced configurations, each with its own performance implications.

• Unbalanced across channels. A memory configuration is unbalanced across channels if the memory capacities installed on each of the 4 channels of each installed processor are not identical.
• Unbalanced across processors. A memory configuration is unbalanced across processors if a different amount of memory is installed on each of the processors.

Memory configurations that are unbalanced across channels

In memory configurations that are unbalanced across channels, the memory controller splits the memory into regions, as shown in Figure 13. Each region of memory has different performance characteristics. The memory controller groups memory across channels as much as possible when creating the regions: it first creates as many regions as possible with DIMMs that span all four memory channels, since these have the highest performance, then regions that span two memory channels, and finally regions that span just one (a sketch of this ordering appears below).

Figure 13. A memory configuration that is unbalanced across memory channels

The primary effect of memory configurations that are unbalanced across channels is a decrease in memory throughput in the regions that span fewer memory channels. In the example above, measured memory throughput in Region 2 may be as little as 25% of the throughput in Region 1.
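
A minimal sketch of that region-building order, assuming a simplified controller that repeatedly carves the widest possible interleave set out of the remaining per-channel capacities (our illustration; the real controller's rules are more involved):

```python
# Toy model of interleave-region creation across memory channels.
# Given per-channel capacities (GB), carve off regions spanning
# 4 channels first, then 2, then 1, as described above.

def build_regions(channel_gb):
    chans = list(channel_gb)
    regions = []
    for width in (4, 2, 1):
        while True:
            # Channels that still have capacity, largest first.
            live = sorted((c for c in range(len(chans)) if chans[c] > 0),
                          key=lambda c: -chans[c])[:width]
            if len(live) < width:
                break
            take = min(chans[c] for c in live)  # capacity common to all
            for c in live:
                chans[c] -= take
            regions.append((width, take * width, sorted(live)))
    return regions  # (channels spanned, region size GB, channel list)

# Example: 8 GB on channels 0-2, 16 GB on channel 3.
for width, size, chans in build_regions([8, 8, 8, 16]):
    print(f"{size:3d} GB region interleaved across {width} channel(s) {chans}")
```

For the example capacities, the sketch produces one 32 GB region spanning all four channels and one 8 GB region confined to a single channel, matching the two-region pattern Figure 13 describes.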

Memory configurations that are unbalanced across Processors

Figure 14 shows a memory configuration that is unbalanced across processors. Threads running on CPU1, which has the larger memory capacity, may have adequate local memory with relatively low latencies. Threads running on CPU2, which has the smaller memory capacity, may consume all available memory on CPU2 and then request remote memory from CPU1. The longer latencies associated with remote memory reduce the performance of those threads. In practice, this can produce non-uniform performance characteristics for program threads depending on which processor executes them; the blended-latency sketch after Figure 14 illustrates the effect.

Figure 14. A memory configuration that is unbalanced across processors.
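
The effect can be approximated with a weighted average of local and remote access latency. The latency numbers below are placeholders for illustration, not measured Gen8 values.

```python
# Back-of-envelope effective latency for threads on the memory-poor CPU.
# Latency values are placeholders, not HP measurements.

LOCAL_NS, REMOTE_NS = 70.0, 120.0   # assumed local vs. remote latency

def effective_latency(remote_fraction):
    """Average latency when `remote_fraction` of accesses go remote."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

for frac in (0.0, 0.25, 0.5):
    print(f"{frac:4.0%} remote accesses -> ~{effective_latency(frac):.0f} ns")
```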

BIOS Settings for memory

The HP server BIOS provides control over several memory configuration settings for ProLiant Gen8 servers. You can access and change these settings using the ROM-Based Setup Utility (RBSU), which is part of all HP ProLiant servers. To launch RBSU, press the F9 key during the server boot sequence.

Controlling Memory Speed

Setting Maximum Memory Bus Frequency

Using RBSU, you can set the speed at which the system memory runs to a specific value. This function is available from the Power Management Options menu inside RBSU. With Gen8 servers, the memory bus speed can be set to any of the following:

• Automatic (speed determined according to normal population rules)
• 1333 MT/s
• 1066 MT/s
• 800 MT/s

Setting the memory speed to a lower value (1066 MT/s or 800 MT/s, for example) lowers power consumption, but it also lowers the performance of the memory system. A sketch for checking the effective speed from the operating system follows.
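
One way to confirm the speed actually in effect after changing this setting is to read the SMBIOS memory-device records from the operating system. The sketch below assumes a Linux system with dmidecode installed and root privileges; the parsing is our own, not an HP utility.

```python
# Read DIMM speeds from SMBIOS on Linux via dmidecode.
# Assumes dmidecode is installed and the script runs as root.
import subprocess

out = subprocess.run(["dmidecode", "--type", "memory"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    line = line.strip()
    # "Configured Clock Speed" reflects the speed the BIOS actually set,
    # which may be lower than the DIMM's rated "Speed".
    if line.startswith(("Speed:", "Configured Clock Speed:")):
        print(line)
```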



Setting Memory Interleave

Disabling Memory Interleaving

This option is available from the Advanced Power Management menu in RBSU. Disabling memory interleaving saves some power per DIMM, but it also decreases memory system performance.

Setting Node Interleaving

This option is available from the RBSU Advanced Options menu and controls how the server maps system memory across the processors. When node interleaving is disabled (the default), the BIOS maps system memory so that the memory addresses for the DIMMs attached to a given processor are together, or contiguous. In typical applications this arrangement is more efficient, allowing each processor to directly access the memory addresses containing the code and data for the programs it executes. When node interleaving is enabled, system memory addresses are alternated, or interleaved, across the DIMMs installed on both processors; each successive page in the system memory map is then physically located on a DIMM attached to a different processor. Some workloads, in particular those using shared data sets, may see improved performance with node interleaving enabled. The toy mapping sketch after Figure 15 contrasts the two settings.

Figure 15. Node Interleaving setting in the ROM-Based Setup Utility (RBSU)
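
A toy model of the two address mappings for a 2-processor system with equal memory on each CPU (our simplification; the real BIOS memory map is more involved than page-granular round-robin):

```python
# Toy model: which processor's memory holds a given page under the
# two Node Interleaving settings, for a 2P system with equal memory.
# Simplified illustration only; real BIOS address maps are more complex.

TOTAL_PAGES = 8  # pretend the whole memory map is 8 pages

def home_cpu(page, interleaved):
    if interleaved:
        return page % 2                          # pages alternate CPUs
    return 0 if page < TOTAL_PAGES // 2 else 1   # contiguous halves

for setting in (False, True):
    mapping = [home_cpu(p, setting) for p in range(TOTAL_PAGES)]
    label = "enabled " if setting else "disabled"
    print(f"Node interleaving {label}: page -> CPU {mapping}")
```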



For more information

Visit the URLs listed below if you need additional information.

Resource description | Web address
Online DDR3 Memory Configuration Tool | www.hp.com/go/ddr3memory-configurator
DDR3 memory technology (Technology brief, 2nd edition) | http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02126499/c02126499.pdf
HP Advanced Memory Error Detection Technology (Technology brief) | http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02878598/c02878598.pdf
Get connected: current HP driver, support, and security alerts delivered directly to your desktop | hp.com/go/getconnected

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. AMD is a trademark of Advanced Micro Devices, Inc. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates.

Created in August 2012


Appendix A - Sample Configurations for 2P ProLiant Gen8 servers

24 DIMM slot servers using Intel® Xeon® E5-2600 processor series

Sample Memory Configurations (2 CPU) | Total Memory (GB) | Number of DIMMs | DIMM Size | DIMM Ranks | DIMM Type | Data Rate (MT/s) | DIMMs per Channel | Unloaded Latency (ns) | Loaded Latency (ns) | Throughput (GB/s) | Idle Power (W) | Loaded Power (W)
8 x 2GB 1R 1333 U | 16 | 8 | 2GB | 1R | UDIMM | 1333 | 1 | 65.0 | 136.3 | 74.3 | 0.6 | 11.8
8 x 4GB 2R 1333 U | 32 | 8 | 4GB | 2R | UDIMM | 1333 | 1 | 64.6 | 140.3 | 76.1 | 1.1 | 18.7
8 x 4GB 1R 1333 R | 32 | 8 | 4GB | 1R | RDIMM | 1333 | 1 | 65.3 | 136.0 | 74.0 | 2.1 | 22.2
8 x 4GB 1R 1600 R | 32 | 8 | 4GB | 1R | RDIMM | 1600 | 1 | 65.0 | 114.0 | 81.0 | 3.6 | 38.6
16 x 2GB 1R 1333 U | 32 | 16 | 2GB | 1R | UDIMM | 1333 | 2 | 65.0 | 125.2 | 73.5 | 1.1 | 29.7
8 x 8GB 2R 1333 U | 64 | 8 | 8GB | 2R | UDIMM | 1333 | 1 | 65.7 | 138.8 | 76.9 | 1.4 | 15.0
8 x 8GB 2R 1333 R | 64 | 8 | 8GB | 2R | RDIMM | 1333 | 1 | 65.0 | 140.0 | 77.7 | 3.6 | 35.5
8 x 8GB 1R 1600 R | 64 | 8 | 8GB | 1R | RDIMM | 1600 | 1 | 65.3 | 115.5 | 79.8 | 4.5 | 46.1
16 x 4GB 2R 1333 U | 64 | 16 | 4GB | 2R | UDIMM | 1333 | 2 | 65.0 | 129.2 | 72.3 | 2.3 | 39.6
16 x 4GB 1R 1333 R | 64 | 16 | 4GB | 1R | RDIMM | 1333 | 2 | 65.0 | 145.5 | 74.8 | 5.6 | 47.3
16 x 4GB 1R 1600 R | 64 | 16 | 4GB | 1R | RDIMM | 1600 | 2 | 64.6 | 115.2 | 84.3 | 9.7 | 78.9
24 x 4GB 1R 1333 R | 96 | 24 | 4GB | 1R | RDIMM | 1066 | 3 | 65.7 | 196.0 | 61.4 | 8.2 | 45.6
24 x 4GB 1R 1600 R | 96 | 24 | 4GB | 1R | RDIMM | 1333 | 3 | 65.3 | 134.8 | 72.9 | 14.0 | 75.6
8 x 16GB 2R 1333 R | 128 | 8 | 16GB | 2R | RDIMM | 1333 | 1 | 65.7 | 138.9 | 75.3 | 5.1 | 42.6
8 x 16GB 2R 1600 R | 128 | 8 | 16GB | 2R | RDIMM | 1600 | 1 | 65.3 | 111.0 | 87.7 | 6.0 | 48.6
16 x 8GB 2R 1333 U | 128 | 16 | 8GB | 2R | UDIMM | 1333 | 2 | 65.3 | 153.7 | 72.0 | 2.9 | 35.1
16 x 8GB 2R 1333 R | 128 | 16 | 8GB | 2R | RDIMM | 1333 | 2 | 65.0 | 152.1 | 74.8 | 8.8 | 74.0
16 x 8GB 1R 1600 R | 128 | 16 | 8GB | 1R | RDIMM | 1600 | 2 | 64.6 | 115.8 | 85.3 | 11.6 | 90.0
8 x 32GB 4R 1333 L | 256 | 8 | 32GB | 4R | LRDIMM | 1333 | 1 | 66.1 | 122.1 | 72.4 | 18.3 | 77.7
16 x 16GB 2R 1333 R | 256 | 16 | 16GB | 2R | RDIMM | 1333 | 2 | 65.7 | 150.7 | 72.6 | 11.7 | 81.2
16 x 16GB 2R 1600 R | 256 | 16 | 16GB | 2R | RDIMM | 1600 | 2 | 65.0 | 121.4 | 83.7 | 13.8 | 94.2
24 x 16GB 2R 1333 R | 384 | 24 | 16GB | 2R | RDIMM | 1066 | 3 | 66.1 | 161.4 | 60.0 | 17.5 | 79.2
24 x 16GB 2R 1600 R | 384 | 24 | 16GB | 2R | RDIMM | 1066 | 3 | 65.4 | 161.9 | 59.5 | 20.3 | 87.9
24 x 16GB 2R 1333 H | 384 | 24 | 16GB | 2R | HDIMM | 1333 | 2 | 65.7 | 114.0 | 72.4 | 144.5 | 286.1
16 x 32GB 4R 1333 L | 512 | 16 | 32GB | 4R | LRDIMM | 1333 | 2 | 66.8 | 138.9 | 68.1 | 35.3 | 110.8
24 x 32GB 4R 1333 L | 768 | 24 | 32GB | 4R | LRDIMM | 1066 | 3 | 70.9 | 235.0 | 40.4 | 55.9 | 121.6
24 x 8GB 2R 1333 R | 192 | 24 | 8GB | 2R | RDIMM | 1066 | 3 | 65.3 | 189.0 | 60.5 | 15.5 | 78.8

Note: configurations populated at 3 DIMMs per channel run below the DIMMs' rated speed, which is why some 1600- and 1333-rated configurations show a Data Rate of 1333 or 1066 MT/s.


16 DIMM Slot Servers using Intel® Xeon® E5-2600 series processors

Sample Memory Configurations | Total Memory (GB) | Number of DIMMs | DIMM Size | DIMM Ranks | DIMM Type | Data Rate (MT/s) | DIMMs per Channel | Unloaded Latency (ns) | Loaded Latency (ns) | Throughput (GB/s) | Idle Power (W) | Loaded Power (W)
8 x 2GB 1R 1333 U | 16 | 8 | 2GB | 1R | UDIMM | 1333 | 1 | 65.0 | 136.3 | 74.3 | 0.56 | 11.8
8 x 4GB 2R 1333 U | 32 | 8 | 4GB | 2R | UDIMM | 1333 | 1 | 64.6 | 140.3 | 76.1 | 1.13 | 18.7
8 x 4GB 1R 1333 R | 32 | 8 | 4GB | 1R | RDIMM | 1333 | 1 | 65.3 | 136.0 | 74.0 | 2.14 | 22.2
8 x 4GB 1R 1600 R | 32 | 8 | 4GB | 1R | RDIMM | 1600 | 1 | 65.0 | 114.0 | 81.0 | 3.61 | 38.6
16 x 2GB 1R 1333 U | 32 | 16 | 2GB | 1R | UDIMM | 1333 | 2 | 65.0 | 125.2 | 73.5 | 1.12 | 29.7
8 x 8GB 2R 1333 U | 64 | 8 | 8GB | 2R | UDIMM | 1333 | 1 | 65.7 | 138.8 | 76.9 | 1.41 | 15.0
8 x 8GB 2R 1333 R | 64 | 8 | 8GB | 2R | RDIMM | 1333 | 1 | 65.0 | 140.0 | 77.7 | 3.64 | 35.5
8 x 8GB 1R 1600 R | 64 | 8 | 8GB | 1R | RDIMM | 1600 | 1 | 65.3 | 115.5 | 79.8 | 4.45 | 46.1
16 x 4GB 2R 1333 U | 64 | 16 | 4GB | 2R | UDIMM | 1333 | 2 | 65.0 | 129.2 | 72.3 | 2.25 | 39.5
16 x 4GB 1R 1333 R | 64 | 16 | 4GB | 1R | RDIMM | 1333 | 2 | 65.0 | 145.5 | 74.8 | 5.55 | 47.3
16 x 4GB 1R 1600 R | 64 | 16 | 4GB | 1R | RDIMM | 1600 | 2 | 64.6 | 115.2 | 84.3 | 9.67 | 78.9
8 x 16GB 2R 1333 R | 128 | 8 | 16GB | 2R | RDIMM | 1333 | 1 | 65.7 | 138.9 | 75.3 | 5.10 | 42.6
8 x 16GB 2R 1600 R | 128 | 8 | 16GB | 2R | RDIMM | 1600 | 1 | 65.3 | 111.0 | 87.7 | 5.98 | 48.6
16 x 8GB 2R 1333 U | 128 | 16 | 8GB | 2R | UDIMM | 1333 | 2 | 65.3 | 153.7 | 72.0 | 2.87 | 35.1
16 x 8GB 2R 1333 R | 128 | 16 | 8GB | 2R | RDIMM | 1333 | 2 | 65.0 | 152.1 | 74.8 | 8.79 | 74.0
16 x 8GB 1R 1600 R | 128 | 16 | 8GB | 1R | RDIMM | 1600 | 2 | 64.6 | 115.8 | 85.3 | 11.57 | 90.0
8 x 32GB 4R 1333 L | 256 | 8 | 32GB | 4R | LRDIMM | 1333 | 1 | 66.1 | 122.1 | 72.4 | 18.32 | 77.7
16 x 16GB 2R 1333 R | 256 | 16 | 16GB | 2R | RDIMM | 1333 | 2 | 65.7 | 150.7 | 72.6 | 11.73 | 81.2
16 x 16GB 2R 1600 R | 256 | 16 | 16GB | 2R | RDIMM | 1600 | 2 | 65.0 | 121.4 | 83.7 | 13.75 | 94.2
16 x 32GB 4R 1333 L | 512 | 16 | 32GB | 4R | LRDIMM | 1333 | 2 | 66.8 | 138.9 | 68.1 | 35.32 | 110.8


12 DIMM Slot Servers using Intel® Xeon® E5-2400 series processors

Sample Memory Configurations | Total Memory (GB) | Number of DIMMs | DIMM Size | DIMM Ranks | DIMM Type | Data Rate (MT/s) | DIMMs per Channel | Unloaded Latency (ns) | Loaded Latency (ns) | Throughput (GB/s) | Idle Power (W) | Loaded Power (W)
6 x 2GB 1R 1333 U | 12 | 6 | 2GB | 1R | UDIMM | 1333 | 1 | 71.4 | 104.4 | 52.1 | 0.5 | 8.0
6 x 4GB 2R 1333 U | 24 | 6 | 4GB | 2R | UDIMM | 1333 | 1 | 71.0 | 104.4 | 48.2 | 0.8 | 12.9
6 x 4GB 1R 1333 R | 24 | 6 | 4GB | 1R | RDIMM | 1333 | 1 | 71.4 | 104.8 | 47.8 | 1.4 | 14.8
6 x 4GB 1R 1600 R | 24 | 6 | 4GB | 1R | RDIMM | 1600 | 1 | 66.7 | 94.1 | 57.5 | 1.6 | 25.2
12 x 2GB 1R 1333 U | 24 | 12 | 2GB | 1R | UDIMM | 1333 | 2 | 71.0 | 108.2 | 52.7 | 0.9 | 21.9
6 x 8GB 2R 1333 U | 48 | 6 | 8GB | 2R | UDIMM | 1333 | 1 | 71.8 | 106.1 | 49.9 | 1.1 | 10.4
6 x 8GB 2R 1333 R | 48 | 6 | 8GB | 2R | RDIMM | 1333 | 1 | 71.0 | 101.9 | 50.1 | 2.1 | 25.2
6 x 8GB 1R 1600 R | 48 | 6 | 8GB | 1R | RDIMM | 1600 | 1 | 68.0 | 95.5 | 59.7 | 1.7 | 21.7
12 x 4GB 2R 1333 U | 48 | 12 | 4GB | 2R | UDIMM | 1333 | 2 | 71.4 | 109.9 | 48.3 | 1.7 | 29.2
12 x 4GB 1R 1333 R | 48 | 12 | 4GB | 1R | RDIMM | 1333 | 2 | 71.0 | 106.9 | 52.7 | 4.6 | 33.7
12 x 4GB 1R 1600 R | 48 | 12 | 4GB | 1R | RDIMM | 1600 | 2 | 66.7 | 95.0 | 58.7 | 3.8 | 55.9
6 x 16GB 2R 1333 R | 96 | 6 | 16GB | 2R | RDIMM | 1333 | 1 | 71.8 | 106.9 | 50.5 | 3.2 | 29.4
6 x 16GB 2R 1600 R | 96 | 6 | 16GB | 2R | RDIMM | 1600 | 1 | 68.0 | 95.0 | 55.0 | 3.4 | 33.4
12 x 8GB 2R 1333 U | 96 | 12 | 8GB | 2R | UDIMM | 1333 | 2 | 72.3 | 111.4 | 49.9 | 2.1 | 25.7
12 x 8GB 2R 1333 R | 96 | 12 | 8GB | 2R | RDIMM | 1333 | 2 | 71.0 | 108.6 | 48.5 | 4.9 | 53.6
12 x 8GB 1R 1600 R | 96 | 12 | 8GB | 1R | RDIMM | 1600 | 2 | 68.0 | 96.8 | 55.9 | 4.0 | 51.1
6 x 32GB 4R 1333 L | 192 | 6 | 32GB | 4R | LRDIMM | 1333 | 1 | 72.3 | 109.5 | 50.5 | 7.6 | 43.0
12 x 16GB 2R 1333 R | 192 | 12 | 16GB | 2R | RDIMM | 1333 | 2 | 72.3 | 110.7 | 49.3 | 7.0 | 58.8
12 x 16GB 2R 1600 R | 192 | 12 | 16GB | 2R | RDIMM | 1600 | 2 | 68.0 | 96.8 | 57.7 | 4.1 | 51.1
12 x 32GB 4R 1333 L | 384 | 12 | 32GB | 4R | LRDIMM | 1333 | 2 | 72.3 | 121.5 | 50.2 | 15.5 | 68.5
