HPE Reference Architecture for Microsoft Exchange Server 2016 on HPE ProLiant BL460c Gen9 and D6000 Storage Deploy 7,500 mailboxes with Exchange Server 2016 on RAID-less JBOD storage

Technical white paper


Contents

Executive summary
Solution overview
Design principles
   High availability (HA) and disaster recovery (DR)
   Scaling the solution beyond 7,500 mailboxes
Solution components
   Enclosures and I/O modules
   Servers
   Storage
   Operating system and application software
Best practices and configuration guidance for the solution
   Exchange 2013 Performance Health Checker Script
Capacity and sizing
   Performance
   Capacity
   Analysis and recommendations
Summary
   Implementing a proof-of-concept
Appendix A: Bill of materials
Resources and additional links


Executive summary

This white paper is intended for customers who want to migrate to Microsoft® Exchange Server 2016 and who have standardized on HPE BladeSystem servers. Exchange Server 2016 is the latest in a long line of Exchange Server releases. Over the years, Exchange has changed in a number of ways. It moved from a 32-bit to a 64-bit architecture, and its storage input/output (I/O) requirements changed such that slower, lower cost disks can be utilized. The high availability model was modified such that Exchange native data protection can be the default high availability solution. Many other architectural advances have changed system requirements, shifting the focus from very high performance storage to higher performance servers with the RAM and CPU capabilities to address the new application requirements.

Deploying Exchange Server 2016 on older servers can require more servers, depending on the user profile and other solution parameters. In one example, using earlier generations of servers could require two to seven more servers to support the same Exchange Server 2016 workload that runs on five HPE ProLiant BL460c Gen9 servers. Beyond server costs, this would also increase administration, networking and other infrastructure costs. Deploying Exchange Server 2016 on the most recent HPE ProLiant generation can help reduce the number of servers required, while also reducing networking, power and administrative costs. An in-place upgrade on hardware hosting earlier versions of Exchange Server is not possible, so new server hardware and storage capacity is required to deploy Exchange Server 2016.

This white paper describes deploying Microsoft Exchange Server 2016 on HPE ProLiant BL460c Gen9 server blades in an HPE BladeSystem c7000 enclosure with direct attached storage (DAS), utilizing HPE D6000 disk enclosures. This implementation ensures a highly available solution while minimizing costs. This tested solution supports a customer use case with 7,500 10GB mailboxes at a profile of 150 total messages sent and received per day per mailbox. Rather than using RAID to provide high availability (HA) at the storage layer, this solution utilizes the native data protection features of Exchange Server 2016 through implementation of a database availability group (DAG) with three copies of each database spread among five servers. The databases and transaction logs are stored on individual RAID-less disks in a “just a bunch of disks” (JBOD) configuration.

Deploying the solution on HPE BladeSystem capitalizes on the power and cooling savings of the BladeSystem infrastructure while simplifying data center operations and offering greater density, shared resources, scalability and greater overall efficiency. For more information about the benefits of deploying HPE BladeSystem, please refer to the IDC presentation Top 5 Reasons to Move to BladeSystem: http://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=4AA5-7828ENW

The Microsoft Preferred Architecture (PA) for Exchange is utilized as a guide in deploying this solution. Where possible, the PA is followed. Encryption of data at rest utilizing BitLocker is also discussed, along with key system components such as Trusted Platform Module (TPM) 1.2 and CPUs that support the Intel® AES-NI instruction set to accelerate encryption and reduce the storage performance impact of encryption.

In summarizing testing with Microsoft Exchange Jetstress and LoadGen, this white paper shows that this solution can support the target 7,500 mailboxes with a 150 messages per day per mailbox profile in several different scenarios. It also supports a two-server outage scenario: even with two servers unavailable, the remaining servers can sustain a peak load of 300 messages per mailbox per day.

Exchange Server 2016 was released in September of 2015. It advances on Exchange Server 2013 with more efficient cataloging and search capabilities, simplified server architecture and deployment models, faster and more reliable eDiscovery, and expanded data loss prevention features. Sizing for Exchange 2016 is very similar to the process for Exchange 2013, as demonstrated by comparing the test results later in this document with the earlier reference architecture at: http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA6-2902ENW


Solution overview

This solution is built around an HPE BladeSystem c7000 enclosure, a D6000 disk enclosure, five BL460c Gen9 servers, and internal components discussed later in this document. This solution also utilizes the native data protection features of Exchange 2016 with multiple copies of each database. Storage costs are reduced by using direct attached storage (DAS) in a just a bunch of disks (JBOD) configuration for Exchange databases and logs, which means that no RAID protection is provided by the storage subsystem. Each physical 6TB SAS 7.2K 3.5in disk hosts copies of three different databases, and each database has three copies spread among the five servers.

Figure 1. Logical Exchange Server 2016 diagram with DAG, servers and some database copies.

In the above figure, each Exchange server is hosted on a ProLiant BL460c Gen9 server in a BladeSystem c7000 enclosure, which is attached to a D6000 disk enclosure to provide storage capacity for the Exchange databases.

In this solution, each server has a pair of boot/system disks which are protected in a RAID1 pair, and each server is provisioned fourteen disks from the D6000, which are utilized in a JBOD configuration where each physical disk is its own array and logical drive. With twelve disks for databases, the extra two disks per server are for recovery space and repair in the event of a disk failure.

A general view of the solution in a single BladeSystem c7000 enclosure and D6000 disk enclosure is outlined below in Figure 2. The details of the solution components are discussed in the next section.


Figure 2. Front and rear views of BladeSystem c7000 enclosure with five server blades and D6000 disk enclosure

Each of the five BL460c Gen9 servers contains 96GB of RAM, two E5-2630 v3 CPUs, one HPE FlexFabric 20Gb 2-port 650FLB FlexibleLOM, one Smart Array P244br controller and one Smart Array P741m controller. The Smart Array P244br controller is used for the internal boot/system drives in each server, while the P741m controller is installed in mezzanine slot 2 in order to interface with HPE 6Gb SAS Switches in I/O bays 5 and 6 of the c7000 enclosure. For Ethernet connectivity, the c7000 enclosure contains HPE Virtual Connect FlexFabric 10Gb/24-port Modules in I/O bays 1 and 2.

Design principles

Microsoft Exchange Server is one of the most widely used business productivity applications, and it is a mission-critical service for many organizations: work quickly comes to a halt if the service is not available. In order to meet business requirements, the Exchange service should be designed around Service Level Agreements (SLAs), which can include performance levels, uptime requirements, capacity requirements, and recovery time.

High availability (HA) and disaster recovery (DR)

This solution utilizes the native data protection features of Exchange as one layer of high availability (HA). In this solution, up to two servers can fail or otherwise be offline and each database will still have at least one active copy to serve users. Each server uses a RAID1 logical drive for the Windows boot/system volume and for Exchange transport databases, but each Exchange mailbox database and its logs are held on a single physical disk presented as a RAID0 logical drive. The servers are sized to provide the Exchange service even when two servers are unavailable. The load increases on the three remaining servers, but they are sized to handle that increased load in the failure scenario.

Another aspect of high availability is managing single points of failure. The HPE BladeSystem c7000 enclosure in this solution has redundant power supplies, fans, Onboard Administrator (OA) modules, and redundant Virtual Connect FlexFabric 10Gb/24-port and SAS modules. Each of these redundant components needs to be managed accordingly, such as the power supplies being configured in a suitable HA mode and being supplied by separate and redundant power sources, and the OA modules being on separate networks.

The storage subsystem also has some level of HA built-in. The D6000 disk enclosure has two drawers, each with 35 LFF disk bays. Each drawer has redundant power supplies, redundant fans, and redundant data paths to the c7000 enclosure. The 6TB SAS 7.2K 3.5in disk drives each have two I/O ports, so a loss of a single SAS path will not cause loss of access to the disk drives.

The Microsoft PA recommends not using NIC teaming in order to simplify the failover model. However, the PA reflects Microsoft’s recommendations based on its own deployment methods, and not all installations operate at that scale. A smaller Exchange deployment may not span data centers, or even racks within the same data center, so other deployment options should be considered.

Although the Microsoft PA does not recommend NIC teaming, the most efficient way to provide network redundancy for each of the BL460c Gen9 servers in this configuration is Windows NIC teaming. With NIC teaming, the failure of any single NIC on a server will not impact the solution, nor will the failure of either of the Virtual Connect FlexFabric 10Gb/24-port Modules.
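As a minimal sketch of this approach on Windows Server 2012 R2 (the team and adapter names are illustrative; substitute the names reported by Get-NetAdapter):

# Create a switch-independent NIC team from the two FlexibleLOM ports.
New-NetLbfoTeam -Name "ExchTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic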

For further HA, the solution can be deployed across multiple geographic sites. Because such a deployment involves additional considerations, such as namespace design, activation scenarios, and quorum placement, it is outside the scope of this document.

Scaling the solution beyond 7,500 mailboxes

The solution can be scaled beyond 7,500 mailboxes by using this solution as a 7,500 mailbox building block: for each 7,500 mailbox increment, another five servers and another D6000 are deployed. As the solution grows, multiple c7000 enclosures would be utilized, and database copies should be distributed among the c7000 enclosures such that not all copies of a database reside on a single D6000 disk enclosure or c7000 enclosure. Since this solution was not tested beyond 7,500 mailboxes, further discussion is outside the scope of this document.

Solution components

This Exchange 2016 solution for 7,500 10GB mailboxes utilizes the following HPE BladeSystem components: the c7000 enclosure, Virtual Connect FlexFabric 10Gb/24-port Modules and BL460c Gen9 server blades. The D6000 disk enclosure resides outside of the c7000 enclosure. Each of these is covered in more detail below.

Enclosures and I/O modules

This solution is designed around the HPE BladeSystem c7000 enclosure with six platinum power supplies and two Onboard Administrator (OA) modules. For network I/O, two Virtual Connect FlexFabric 10Gb/24-port Modules are utilized for redundant Ethernet connectivity within the enclosure. While each server utilizes one NIC as per the Microsoft Preferred Architecture, connecting all of the NICs to a single non-redundant switch or I/O module would create a single point of failure for all of the servers. In this solution, two Ethernet I/O modules are used to provide network HA within the enclosure and to external top of rack switches.

For connectivity to the D6000 disk enclosure, the HPE 6Gb SAS Switch Dual Pack for HPE BladeSystem c-Class is installed in I/O bays 5 and 6 of the c7000 enclosure. These I/O modules are connected to the D6000 as shown in Figure 2 above. Four HPE Ext Mini SAS 1m Cables are used to cable the D6000 to the SAS I/O modules.

Servers

HPE ProLiant BL460c Gen9 server blades are utilized in this solution. Each server is configured with 96GB of RAM and two Intel Xeon® E5-2630 v3 CPUs, for 16 cores total. As noted in the Exchange Team Blog sizing guidance (http://blogs.technet.com/b/exchange/archive/2013/05/06/ask-the-perf-guy-sizing-exchange-2013-deployments.aspx), these servers are deployed with Hyper-Threading turned off. Also, as noted in these sizing recommendations: https://technet.microsoft.com/en-us/library/dn879075(v=exchg.150).aspx, the servers should be configured in the BIOS to allow the OS to manage power, and Windows® should be set to the high performance power plan.
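For example, the high performance power plan can be activated from an elevated prompt using the well-known GUID of the built-in plan:

# Activate the built-in "High performance" power plan.
powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c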

Each server is configured with a pair of internal 2TB SAS 7.2K 2.5in HDDs for boot and system. The disks in the D6000 for Exchange databases, logs, recovery and automatic reseeding are discussed in the Storage section below.

An internal view of the BL460c Gen9 server is shown in Figure 3. More information about the BL460c Gen9 servers is available at: hpe.com/servers/bl460c


Figure 3. Internal view of BL460c Gen9

Storage

The D6000 disk enclosure is utilized for database and transaction log files, plus recovery, maintenance and automatic reseeding capacity. The D6000 can hold a total of seventy large form factor (LFF) 3.5 inch drives, divided between two drawers of thirty-five drive bays each. This solution uses all seventy drive bays in the D6000 with seventy HPE 6TB 6G SAS 7.2K rpm LFF (3.5-inch) drives.

To utilize the storage in the D6000, each of the Exchange servers is configured with a Smart Array P741m controller. The RAID features of the Smart Array controller are not used on these database and log disks as they are configured as JBOD, where each physical disk is its own array and RAID0 logical drive. Each server uses twelve disks for active and passive database copies, with another two disks per server for recovery, auto-reseed or maintenance space.

The D6000 is shown below in Figure 4. More information about the D6000 is available at: http://www8.hp.com/us/en/products/disk-enclosures/product-detail.html?oid=7390970

Information about deployment and cabling of the D6000 is available at: http://h20565.www2.hpe.com/portal/site/hpsc/template.PAGE/public/psi/manualsResults/?sp4ts.oid=5307027 and http://h20564.www2.hpe.com/portal/site/hpsc/public/kb/docDisplay/?docId=c01956983

The Microsoft Preferred Architecture outlines using Windows BitLocker for protection of data at rest. For effective use of BitLocker, each server should be configured with the Trusted Platform Module (TPM) 1.2. This eases the use of BitLocker by storing and securing the encryption keys locally on the server without requiring a BitLocker password each time the server boots. To further ease the performance impact of BitLocker, the CPUs used in this solution include the Intel AES-NI instruction set, which is used by BitLocker to reduce CPU and performance impact. More information about Intel AES-NI is available at: intel.com/content/dam/doc/white-paper/enterprise-security-aes-ni-white-paper.pdf

This solution was tested with unencrypted storage and with storage encrypted by Windows BitLocker and the performance difference was negligible. Information on deploying BitLocker is available at: https://technet.microsoft.com/en-us/library/hh831713.aspx and https://technet.microsoft.com/en-us/library/jj612864.aspx
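As a minimal sketch, assuming the BitLocker feature is installed (the drive letter is illustrative, since the Exchange volumes in this solution are mounted to folders rather than drive letters):

# Encrypt a data volume with AES-256 and a recovery password protector.
Enable-BitLocker -MountPoint "D:" -EncryptionMethod Aes256 -RecoveryPasswordProtector
# Allow the data volume to unlock automatically once the OS volume is unlocked.
Enable-BitLockerAutoUnlock -MountPoint "D:"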


Figure 4. D6000 front external view.

Operating system and application software

This solution is built on Windows Server 2012 R2 with all applicable updates installed via Windows Update as of the date of testing (December 2015): Version 6.3, Build 9600. Exchange Server 2016 (build 15.01.0225.042) was used for this solution. HPE Service Pack for ProLiant version 2015.06.0 was installed on each of the BL460c Gen9 servers for driver and firmware updates.
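The installed versions can be confirmed from the Exchange Management Shell, for example:

# Confirm the Exchange build on each server.
Get-ExchangeServer | Format-List Name, AdminDisplayVersion
# Confirm the Windows build (expected: 6.3.9600).
[System.Environment]::OSVersion.Version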

Microsoft Exchange LoadGen and Jetstress were used as the test tools for this solution. LoadGen was version 15.00.0847.030 and Jetstress was 15.00.0995.000. For Jetstress the required .dll and other support files were used from Exchange Server 2016 (build 15.01.0225.042).

Best practices and configuration guidance for the solution

Microsoft recommends disabling Hyper-Threading when deploying Exchange on physical servers. This is not necessary in a virtualized environment as long as the additional CPUs are not used in capacity planning. In this solution, each server has the BIOS configured to disable Hyper-Threading, with the power profile set to OS control and the Energy/Performance Bias set to “Maximum Performance”. Figures 5 through 7 below show those settings.

Figure 5. Disabling Hyper-Threading in the server BIOS.

Figure 6. Setting Power Profile to “Custom” and Power Regulator to “OS Control Mode”.


Figure 7. Setting Energy/Performance Bias to “Maximum Performance”.

Key point
These Exchange servers should be configured with Hyper-Threading turned off and with the power profile set to OS controlled. Misconfiguring these settings can have a negative impact on performance.
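A quick way to verify both settings from Windows (a sketch; the expected values follow from the configuration above):

# Should report the "High performance" plan as active.
powercfg.exe /getactivescheme
# With Hyper-Threading disabled, logical processors equal physical cores (8 per socket).
Get-WmiObject Win32_Processor | Select-Object NumberOfCores, NumberOfLogicalProcessors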

The BladeSystem SAS switches must be configured to provision 14 drives for each of the five Exchange servers. The HPE 6Gb Virtual SAS Manager application is launched from the HPE BladeSystem Onboard Administrator web console. Zone groups are created, populated with drive bays, and then assigned to server bays. A zone group is created for each Exchange server and is populated with drive bays according to the following table.

Table 1. SAS Zone group configuration parameters.

ZONE NAME   ASSIGNED DRIVE BAYS                          ASSIGNED SERVER BAY
Exch01      Drawer 1: Bays 1-14                          Server bay 6
Exch02      Drawer 1: Bays 15-28                         Server bay 7
Exch03      Drawer 1: Bays 29-35; Drawer 2: Bays 29-35   Server bay 8
Exch04      Drawer 2: Bays 1-14                          Server bay 14
Exch05      Drawer 2: Bays 15-28                         Server bay 15

The Virtual SAS Manager provides a summary view of the SAS topology as shown below in Figure 8.


Figure 8. Configured SAS Topology

After the zone groups are configured and assigned to a server bay, each server will see 14 new disk drives on the Smart Array P741m controller.

In this solution each one of those disk drives is configured as a separate array, with a single RAID0 logical drive. The view from one server is shown below in Figure 9 for arrays A through C. This configuration continues through array N.

Figure 9. HPE Smart Storage Administrator configuration.


The above configuration can be accomplished through the GUI shown above, or at the command line. The default path for the command line utility is C:\Program Files\HP\hpssacli\bin. Once in that directory, a command such as the one below can be utilized to configure all 14 drives on the Smart Array P741m as single-drive RAID0 logical drives:

# Create a single-drive RAID0 logical drive (256KB strip) for each of the 14 disks.
1..14 | ForEach-Object {
    $driveString = "51:1:$_"   # SAS drive location for bay $_
    .\hpssacli.exe ctrl slot=2 create type=ld drives=$driveString raid=0 stripsize=256
}
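The resulting configuration can then be verified from the same utility:

# Confirm that 14 single-drive RAID0 logical drives now exist on the P741m.
.\hpssacli.exe ctrl slot=2 ld all show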

Once the drives are configured and exposed to Windows as new disk drives, they must be initialized, mounted to mount points, and formatted. This solution utilized the mount point of C:\ExchangeVolumes with each disk mounted to a folder under that as shown in Figure 10 below.

Figure 10. Mount point root, mount points, volumes and multiple databases per volume

Either the Windows GUI or PowerShell can be used to initialize, mount and format the volumes. The details of these PowerShell commands are outside the scope of this document, and extreme care must be taken when automating disk provisioning in order to protect disks already in use that may contain operating system or other critical information. One difference between an Exchange 2016 deployment and an Exchange 2013 deployment is that the Windows volumes should be formatted with the Resilient File System (ReFS) for Exchange 2016.
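For illustration only, the following sketch provisions a single new disk; the disk number and mount path are examples, and disk numbers must be confirmed with Get-Disk before running anything, because initializing the wrong disk is destructive:

# Example for one data disk; repeat per disk with the correct numbers and paths.
$mountPath = "C:\ExchangeVolumes\Volume1"
New-Item -ItemType Directory -Path $mountPath -Force | Out-Null
Initialize-Disk -Number 1 -PartitionStyle GPT
$part = New-Partition -DiskNumber 1 -UseMaximumSize
$part | Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -Confirm:$false
$part | Add-PartitionAccessPath -AccessPath $mountPath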

Key points
On the Smart Array controller, the logical drives should have a strip size of at least 256KB. When formatting the disks in Windows, the allocation unit size should be set to 64KB. The Smart Array controller cache was left at the default of 10% read and 90% write cache.

This solution utilizes 60 Exchange databases with three copies of those databases among the five servers. Figure 11 shows the distribution of those databases among the servers. The numbers in the cells represent the preferred, secondary and tertiary server designated for each database. Once the storage is configured, mount points created and database and log folders created within those paths, then the database distribution among the servers should be configured as shown below.


Figure 11. Database copy distribution among the servers.
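This layout is implemented with the standard Exchange Management Shell cmdlets. A minimal sketch for the DAG and one database (the DAG, witness, server, database and path names are illustrative):

# Create the DAG and add the five mailbox servers.
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS01 -WitnessDirectory C:\DAG1
"Exch01","Exch02","Exch03","Exch04","Exch05" | ForEach-Object { Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer $_ }
# Create one database on its preferred server and mount it.
New-MailboxDatabase -Name DB01 -Server Exch01 -EdbFilePath C:\ExchangeVolumes\Volume1\DB01\DB01.edb -LogFolderPath C:\ExchangeVolumes\Volume1\DB01
Mount-Database DB01
# Add the second and third copies per the Figure 11 layout.
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer Exch02 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer Exch03 -ActivationPreference 3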

Exchange 2013 Performance Health Checker Script

Microsoft provides an Exchange 2013 Performance Health Checker Script that checks common configuration settings such as product versions, pagefile settings, power plan settings, NIC settings, and processor/memory information. Example output for the Exchange servers is shown below in Figure 12. It is recommended to run the script periodically to ensure that your environment is operating at peak health and that configuration settings have not been inadvertently changed. Microsoft provides the script at https://gallery.technet.microsoft.com/Exchange-2013-Performance-23bcca58. While the script was initially released for Exchange 2013, it still provides valuable information for Exchange 2016.
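Assuming the script is saved under its published name and supports a server parameter, a typical invocation against one server would be:

# Run the Performance Health Checker against a single Exchange server.
.\HealthChecker.ps1 -Server Exch01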

Figure 12. Example Health Checker script output.


Capacity and sizing

Performance

Two tools are used to plan the performance and capacity requirements for this solution: the HPE Sizer for Microsoft Exchange Server 2013 (hpe.com/solutions/microsoft/exchange2013/sizer) and the Microsoft Exchange Server Role Requirement Calculator v7.8, which includes calculations for Exchange 2016 (https://gallery.technet.microsoft.com/office/Exchange-2013-Server-Role-f8a61780). The Exchange 2010 Processor Query tool (https://gallery.technet.microsoft.com/office/Exchange-Processor-Query-b06748a5) is also used with the Microsoft calculator to look up the SPECint values for prospective CPUs. The HPE Sizer for Microsoft Exchange Server 2013 was used because sizing for Exchange 2016 differs little from sizing for Exchange 2013.

Both the HPE Sizer and the Microsoft calculator take workload characteristics as inputs for performance sizing. The number of messages sent and received per day per mailbox is a primary workload characteristic. This solution is sized around a profile of 150 messages sent and received per mailbox per day. Using that profile, each of the five servers is sized with two Intel Xeon E5-2630 v3 processors and 96GB of RAM.

Table 2 shows the profile, the CPU and RAM configuration, and the expected CPU utilization for different server failure scenarios. Actual test results are covered later in this paper.

Table 2. CPU and RAM configuration and expected CPU utilization.

MESSAGE PROFILE    CPU (2X PER SERVER)   RAM (PER SERVER)   %CPU, FIVE SERVERS ONLINE   %CPU, FOUR SERVERS ONLINE   %CPU, THREE SERVERS ONLINE
150 msg/day/mbx    E5-2630 v3            96GB               48%                         54%                         64%

Capacity

This solution is designed such that each of the 7,500 mailboxes can be up to 10GB in capacity. With three active/passive copies of each database, roughly 225TB of database capacity is required (7,500 mailboxes × 10GB × three copies), which is efficiently provided by fourteen 6TB LFF 7.2K HDDs for each of the five servers. This capacity is optimized by using a RAID-less JBOD configuration, which avoids the capacity overhead of RAID10 or RAID5 logical drives. While RAID5 minimizes the capacity overhead of RAID protection, performance is not typically adequate with RAID5 and 7.2K RPM disk drives, and given the lower mean time between failures (MTBF) of 7.2K RPM drives, there is greater risk of data loss during a RAID5 array rebuild operation.

Analysis and recommendations

Jetstress test results

Multiple tests were performed with Microsoft Exchange Jetstress to test various aspects of the storage subsystem, as outlined below. In these tests, the target IOPS per mailbox is 0.13, or 975 IOPS in total for 7,500 mailboxes; that equates to 195 IOPS per server with five servers online, or 325 IOPS per server with three servers online.

• Jetstress Test 1 – Normal Load, all five servers online. Goal = Target IOPS under latency thresholds.

• Jetstress Test 2 – Normal load, three servers online. Goal = Target IOPS under latency thresholds.

• Jetstress Test 3 – Very high load. Goal = Demonstrate near upper IOPS limit per server.


The results of these three tests are shown in the table below.

Table 3. Summary of test results for Jetstress tests.

                                       JETSTRESS TEST 1       JETSTRESS TEST 2       JETSTRESS TEST 3
I/O profile                            Normal – 150 msg/day   Normal – 150 msg/day   Very high
Target IOPS per server                 195                    325                    Near maximum
Number of servers online               5                      3                      5
Achieved IOPS per server               327                    454                    1247
Database read IOPS per server          226                    314                    861
Database write IOPS per server         101                    140                    386
Average database read latency (ms)     5.80                   6.6                    17.6
Average database write latency (ms)    0.17                   0.29                   1.2
Transaction log writes/sec             2.1                    1.7                    8.4
Transaction log write latency (ms)     0.07                   0.08                   0.14

Table 3 shows that the storage subsystem easily meets the required IOPS for the target profile, with substantial I/O headroom: even at the very high load of Jetstress Test 3, the average database read latency of 17.6ms remains below the 20ms limit.

These Jetstress tests have shown that the storage subsystem can satisfy the storage I/O needs of this solution and that there is I/O headroom in the solution.

The following Microsoft Exchange LoadGen test results will address the CPU and memory requirements and performance of this solution.

LoadGen test results

Microsoft Exchange LoadGen was used to more fully test this Exchange Server 2016 solution. While Jetstress simulates Exchange Server storage I/O on a server that is not running Exchange, LoadGen simulates client load on servers actually running Exchange 2016. This type of testing allows analysis of CPU and RAM utilization, latency and response of various Exchange subsystems, and measurement of actual messages sent and received by each mailbox to ensure the target profile is being simulated as accurately as possible.

The load of 150 messages/day represents the average load per mailbox, but peak times can frequently exceed that average, so a generally accepted practice is to test at twice the average level to simulate peak usage. Tests were thus run at an effective rate of 300 messages/day to simulate this peak impact, as shown below.

For this solution, LoadGen was used to simulate three scenarios.

• LoadGen Test 1 – Normal load, all five servers online. Goal = meet target profile and analyze CPU, RAM and Exchange subsystems.

• LoadGen Test 2 – High stress load, five servers online. Goal = meet target profile and analyze CPU, RAM and Exchange subsystems.

• LoadGen Test 3 – High stress load, three servers online. Goal = meet target profile and analyze CPU, RAM and Exchange subsystems.


Table 4. Summary of test results for LoadGen tests 1 – 3.

                                                     LOADGEN TEST 1         LOADGEN TEST 2       LOADGEN TEST 3
I/O profile                                          Normal – 150 msg/day   Peak – 300 msg/day   Peak – 300 msg/day
Measured messages/day                                145                    295                  307
Number of servers online                             5                      5                    3
Average CPU utilization %                            53                     73                   79
MS Exchange RPC Client Access Connection Count       2944                   2943                 5058
MS Exchange RPC Client Access Active User Count      1070                   1358                 2311
MS Exchange RPC Client Access RPC Operations/sec     468                    879                  1490
MS Exchange RPC Client Access RPC Averaged Latency   4.6                    6.5                  6.1
Network interface MBytes sent/sec                    3.0                    5.7                  7.8
Network interface MBytes received/sec                2.81                   5.5                  5.0
Average database read latency (ms)                   8.2                    10.6                 9.5
Average database write latency (ms)                  0.32                   0.25                 0.38

In the peak load scenarios where the CPU is slightly higher than the desired threshold of 70%, a deeper look at the CPU utilization shows that content indexing is utilizing a significant amount of CPU. In the scenarios with lower CPU utilization, the noderunner processes, which are the content indexing processes, were about 18% of the total CPU utilization. In scenarios with higher CPU utilization, the noderunner processes represented about 34% of the CPU utilization.

The LoadGen testing showed that the solution can support the target number of mailboxes at the target profile, even in peak load scenarios, and in failure scenarios where two servers are offline for maintenance or because of an unplanned outage.

Summary

Properly sizing an Exchange Server 2016 solution can be challenging, and supporting a solution that was not sized properly is more challenging still. This white paper outlined an Exchange solution designed and sized for 7,500 mailboxes of up to 10GB each. Testing showed that the storage, CPU, RAM and networking subsystems can support this solution at the 150 messages per day per mailbox profile for which it was designed, both in normal operations with all five servers online and in scenarios where up to two servers are offline. The solution was also designed and tested for peak load scenarios of 300 messages per day per mailbox in order to handle peak loads at the beginning of the work day, or during other high load times.

With the architectural changes of Exchange Server over the last several generations, Exchange requires more CPU and RAM resources. This need can best be met by the latest CPU and architecture of the HPE ProLiant BL460c Gen9 servers. Based on CPU comparisons in multiple sizing scenarios, deploying with the latest generation of ProLiant servers can reduce the number of servers required by two to seven, depending on the specifics of the workload.

Implementing a proof-of-concept

As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.


Appendix A: Bill of materials

Note
Part numbers are at time of testing and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details. hpe.com/us/en/services/consulting.html

Table 5. Bill of materials

QTY PART NUMBER DESCRIPTION

1 681844-B21 HPE BLc7000 Platinum CTO with ROHS Trial IC License Single Phase

6 733459-B21 HPE 2650W Plat Ht Plg Pwr Supply Kit

2 571956-B21 HPE Virtual Connect FlexFabric 10Gb/24-port Module for c-Class BladeSystem

1 456204-B21 HPE BLc7000 Onboard Administrator with KVM Option

1 BK764A HPE 6Gb SAS Switch Dual Pack for HPE BladeSystem c-Class

4 407337-B21 HPE Ext Mini SAS 1m Cable

1 517520-B21 HPE BLc 6X Active Cool 200 FIO Fan Opt

5 488069-B21 HPE TPM Module Kit

5 727021-B21 HPE BL460c Gen9 10Gb/20Gb FLB CTO Blade

5 700764-B21 HPE FlexFabric 20Gb 2-port 650FLB FIO Adapter

5 726782-B21 HPE Smart Array P741m/4GB FBWC 12Gb 4-ports Ext Mezzanine SAS Controller

5 726994-L21 HPE BL460c Gen9 E5-2630v3 FIO Kit

5 726994-B21 HPE BL460c Gen9 E5-2630v3 Kit

30 726719-B21 HPE 16GB 2Rx4 PC4-2133P-R Kit

5 761871-B21 HPE Smart Array P244br/1G FIO Controller

10 765466-B21 2TB 12G SAS 7.2K 2.5in 512e SC HDD

1 K2Q12A HPE D6000 w/70 6TB 6G SAS 7.2K LFF (3.5in) Dual Port MDL HDD 420TB Bundle



Resources and additional links

To read more about HPE solutions for Exchange, please refer to: http://h17007.www1.hpe.com/us/en/enterprise/converged-infrastructure/info-library/index.aspx?app=microsoft_exchange

HPE Sizer for Microsoft Exchange Server 2013 hpe.com/solutions/microsoft/exchange2013/sizer

HPE BladeSystem hpe.com/info/bladesystem

HPE Servers hpe.com/servers

HPE Storage hpe.com/storage

HPE Networking hpe.com/us/en/networking.html

HPE Technology Consulting Services hpe.com/us/en/services/consulting.html

HPE Converged Infrastructure Library hpe.com/info/convergedinfrastructure

Contact Hewlett Packard Enterprise http://www8.hp.com/us/en/hpe/contact/contact.html

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

© Copyright 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for HPE products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HPE shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows Server, and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

4AA6-3740ENW, January 2016