Windows Server 2012 Performance-Sensitive Configuration Parameters
When you run a server system, the default settings might not meet the performance requirements of your business needs. For example, some workloads call for the lowest energy consumption, while others call for low latency and high throughput. This article provides pointers to some of the common configuration parameters associated with the processor and network subsystems of Windows Server 2012.
Processor Power Management
Performance Boost Mode
» The default value for Boost Mode is 3, i.e., Efficient Enabled.
» Turbo is enabled for High Performance power plans on all Intel and AMD processors, and it is disabled for Power Saver plans.
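As a sketch, the boost mode can be inspected and set with `powercfg`, using the documented `PERFBOOSTMODE` setting alias; run from an elevated prompt, and note that this changes the AC value of the currently active power plan:

```shell
# Show the current value of the processor performance boost mode setting
powercfg -query scheme_current sub_processor PERFBOOSTMODE

# 0 = Disabled, 3 = Efficient Enabled (the default); apply to the active plan
powercfg -setacvalueindex scheme_current sub_processor PERFBOOSTMODE 3
powercfg -setactive scheme_current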
Minimum and Maximum Performance State
» Processors switch between performance states ("P-states") very quickly to match supply to demand, delivering performance where necessary and saving energy when possible.
» Alter this setting if your server has specific high-performance or minimum power-consumption requirements.
» If your server requires ultra-low latency, invariant CPU frequency, or the highest performance levels, you might not want the processors switching to lower-performance states; for such a server, you can cap the minimum processor performance state at 100 percent.
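For the ultra-low-latency case described above, a minimal sketch using the `PROCTHROTTLEMIN` and `PROCTHROTTLEMAX` aliases (the documented `powercfg` names for the minimum and maximum processor state) might look like this:

```shell
# Pin the processor performance state range to 100% so the CPU does not
# drop to lower-performance P-states (trades energy savings for latency)
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 100
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMAX 100
powercfg -setactive scheme_current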
Performance Core Parking Maximum and Minimum Cores
» Cores that are chosen to "park" generally do not have any threads scheduled, and they drop into very low power states when they are not processing interrupts, DPCs, or other strictly affinitized work.
» Core parking can potentially increase energy efficiency during lower-usage periods on the server because parked cores can drop into deep low-power states.
» The Processor Performance Core Parking Maximum Cores parameter controls the maximum percentage of cores that can be unparked (available to run threads) at any time, while the Processor Performance Core Parking Minimum Cores parameter controls the minimum percentage of cores that can be unparked.
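These two percentages map to the `CPMINCORES` and `CPMAXCORES` setting aliases in `powercfg`. As an illustrative sketch, setting the minimum to 100 percent effectively keeps all cores unparked:

```shell
# Keep at least 100% of cores unparked (effectively disables core parking);
# CPMAXCORES caps how many cores may be unparked at once
powercfg -setacvalueindex scheme_current sub_processor CPMINCORES 100
powercfg -setacvalueindex scheme_current sub_processor CPMAXCORES 100
powercfg -setactive scheme_current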
Performance Core Parking Utility Distribution
» Utility Distribution is an algorithmic optimization in Windows Server 2012 that is designed to improve power efficiency for some workloads.
» Utility Distribution is enabled by default for the balanced power plan on some processors. It can reduce processor power consumption by lowering the requested CPU frequencies of workloads that are in a reasonably steady state.
» Utility Distribution is not necessarily a good algorithmic choice for workloads that are subject to high activity bursts or for programs where the workload quickly and randomly shifts across processors. For such workloads, it is recommended to disable Utility Distribution.
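Assuming the `DISTRIBUTEUTIL` alias (the `powercfg` name this setting is exposed under on supported systems), disabling Utility Distribution for bursty workloads could be sketched as:

```shell
# 0 = disable Utility Distribution, 1 = enable it
powercfg -setacvalueindex scheme_current sub_processor DISTRIBUTEUTIL 0
powercfg -setactive scheme_current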
Networking Subsystem
Enabling Offload Features
» Turning on network adapter offload features is usually beneficial.
» Enable offload capabilities if any resulting reduction in throughput is not expected to be a limitation.
» Some network adapters require offload features to be enabled independently for the send and receive paths.
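Offload features can be toggled with the built-in NetAdapter cmdlets. A minimal sketch, assuming an adapter named "Ethernet" (the name will differ on your server):

```shell
# Enable checksum offload and large send offload (LSO) on one adapter
Enable-NetAdapterChecksumOffload -Name "Ethernet"
Enable-NetAdapterLso -Name "Ethernet"

# Review the driver's advanced properties, where send and receive offloads
# may appear as separate entries that must be enabled independently
Get-NetAdapterAdvancedProperty -Name "Ethernet"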
Enabling RSS for Web Scenarios
» RSS can improve web scalability and performance when there are fewer network adapters than logical processors on the server.
» When all web traffic goes through RSS-capable network adapters, incoming web requests from different connections can be processed simultaneously across different CPUs.
» Performance can be severely degraded if a non-RSS-capable network adapter accepts web traffic on a server that has one or more RSS-capable network adapters.
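To confirm which adapters are RSS-capable and enable RSS where it is off, the NetAdapter cmdlets can be used as follows (again assuming an example adapter name "Ethernet"):

```shell
# Show RSS capability and current RSS state for all adapters
Get-NetAdapterRss

# Turn RSS on for a specific adapter carrying web traffic
Enable-NetAdapterRss -Name "Ethernet"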
RSS Profiles and RSS Queues
» If logical processors seem to be underutilized for receive traffic, try increasing the number of RSS queues from the default of 2 to the maximum supported by your network adapter.
» The default profile is NUMA Static.
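Both knobs are exposed through `Set-NetAdapterRss`. A sketch with an example queue count of 8 (use whatever maximum your adapter actually supports) and the default NUMAStatic profile stated explicitly:

```shell
# Raise the RSS receive queue count and keep the NUMA Static profile;
# the adapter name and queue count here are illustrative
Set-NetAdapterRss -Name "Ethernet" -Profile NUMAStatic -NumberOfReceiveQueues 8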
Increasing Network Adapter Resources
» Some network adapters set their receive buffers low to conserve memory allocated from the host. The low value results in dropped packets and decreased performance. For receive-intensive scenarios, it is therefore recommended to increase the receive buffer value to the maximum.
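Receive buffers are a driver-specific advanced property, so both the display name ("Receive Buffers") and the maximum value below are assumptions to verify against your driver's output from `Get-NetAdapterAdvancedProperty`:

```shell
# Check the property name and valid range exposed by the driver first
Get-NetAdapterAdvancedProperty -Name "Ethernet" | Format-Table DisplayName, DisplayValue

# Raise the receive buffer count (4096 is an example value, not a universal maximum)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue 4096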
Enabling Interrupt Moderation
» Consider interrupt moderation for CPU-bound workloads, weighing the host CPU savings and added latency of moderation against the increased host CPU cost of more interrupts and lower latency without it.
» If the network adapter does not perform interrupt moderation but does expose buffer coalescing, increasing the number of coalesced buffers allows more buffers per send or receive, which improves performance.
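Interrupt moderation is likewise an advanced driver property; the display name "Interrupt Moderation" is common but not guaranteed, so treat this as a sketch and confirm the exact name on your adapter first:

```shell
# Enable interrupt moderation on an example adapter named "Ethernet"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Enabled"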