MCSA Guide to Networking with
Windows Server 2016, Exam 70-741
First Edition
Chapter 9
Implementing Advanced
Network Solutions
© 2018 Cengage. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part, except for use as permitted in a license distributed with a certain product or service or otherwise on a password-protected website
for classroom use.
Objectives
9.1 Implement high-performance network solutions
9.2 Determine scenarios and requirements for
implementing software-defined networking
Configuring NIC Teaming (1 of 3)
• NIC teaming - allows multiple network interfaces to work in
tandem to increase available bandwidth
– Provides load balancing and fault tolerance
• Load balancing and failover (LBFO) - another term for NIC
teaming
• Load balancing - distributes traffic across two or more
interfaces, increasing the overall network throughput a server
can maintain
• Failover - a server's capacity to recover from network
hardware failure by having redundant hardware that can take
over immediately for failed hardware
Configuring NIC Teaming (2 of 3)
• Configure NIC teaming with Server Manager or PowerShell
Configuring NIC Teaming (3 of 3)
• Use the following PowerShell cmdlets:
– Get-NetLbfoTeam - shows a list of NIC teams on server
– New-NetLbfoTeam - creates a NIC team and adds
network adapters to it
– Remove-NetLbfoTeam - deletes a team
– Rename-NetLbfoTeam - renames a team
– Set-NetLbfoTeam - sets properties of an existing team
• To get help on using any of these PowerShell commands from
a PowerShell prompt, type get-help followed by the command
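As a sketch, these cmdlets might be used together as follows; the team name Team1 and the adapter names NIC1 and NIC2 are placeholders for names on your server:

```powershell
# Create a team named Team1 from two physical adapters (hypothetical names)
New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2

# List all NIC teams on the server and their status
Get-NetLbfoTeam

# Rename the team, then delete it
Rename-NetLbfoTeam -Name Team1 -NewName ServerTeam
Remove-NetLbfoTeam -Name ServerTeam
```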
NIC Teaming Modes (1 of 3)
• NIC teams have two main mode settings:
– The teaming mode and the load-balancing mode
• There are three teaming modes:
– Switch Independent - used to connect the NICs in a team
to separate switches for fault tolerance
– Static Teaming - used primarily for load balancing
– LACP (Link Aggregation Control Protocol) - allows a switch
to automatically identify ports a team member is connected
to and create a team dynamically
NIC Teaming Modes (2 of 3)
NIC Teaming Modes (3 of 3)
• The load balancing mode determines how the server load-
balances outgoing data packets across NICs in the team:
– Address Hash - uses an algorithm based on the outgoing
packet’s properties to create a hash value
– Hyper-V Port - used when team members are connected to
a Hyper-V switch
– Dynamic - Default mode where traffic is distributed evenly
among all team members, including virtual NICs
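Both modes can be specified when a team is created or changed afterward, as in the following sketch (the team and adapter names are placeholders):

```powershell
# Create a Switch Independent team using the default Dynamic load-balancing mode
New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2 `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Change an existing team to LACP with Hyper-V Port load balancing
Set-NetLbfoTeam -Name Team1 -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort
```

Note that the Address Hash option in Server Manager corresponds to the TransportPorts, IPAddresses, and MacAddresses values of -LoadBalancingAlgorithm.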
NIC Teaming on Virtual Machines (1 of 2)
• NIC teaming allows multiple network interfaces to work in
tandem to increase available bandwidth
– Also provides load balancing with fault tolerance
• On VMs, use the same procedure as on physical computers
– For best reliability, enable the feature first in the network
adapter's Advanced Features settings in Hyper-V Manager
• NIC teaming can be configured only on VMs connected to
external virtual switches
• If NIC teaming is configured on the Hyper-V host server,
configuring it on VMs running on the host isn’t necessary
NIC Teaming on Virtual Machines (2 of 2)
Implementing Switch Embedded
Teaming (1 of 2)
• Switch Embedded Teaming (SET) – a new feature in Windows Server
2016
– Allows up to eight identical physical adapters on the host system
to be configured as a team and mapped to a virtual switch
• To configure SET, create a new virtual switch and specify the network
adapters that should be members as in the following cmdlet:
– New-VMSwitch -Name SETSwitch1 -NetAdapter Ethernet1,
Ethernet2 -EnableEmbeddedTeaming $true
• Next, add the virtual network adapters that will communicate through
the SET-enabled switch:
– Add-VMNetworkAdapter -SwitchName SETSwitch1 -Name
Adapter1
Implementing Switch Embedded
Teaming (2 of 2)
• SET supports remote direct memory access (RDMA) on virtual
network adapters
• You must enable RDMA on the virtual network adapter created
on the host system
• The following cmdlet provides an example of enabling RDMA on a
virtual network adapter named vEthernet (Adapter1):
– Enable-NetAdapterRdma “vEthernet (Adapter1)”
NIC Teaming versus SET
• Differences to help you determine when to use each technology:
– SET is a feature that is bound to Hyper-V virtual switches, so you can’t use it on
physical servers connected to the physical network
– Traditional NIC teaming is not compatible with RDMA or software defined
networking (SDN) version 2
– If you want a solution that provides active/standby operation, use NIC teaming;
with SET, all NICs are always active, providing both load balancing and fault
tolerance
– SET requires all NICs to be identical and operate at the same speed; with NIC
teaming, you can have NICs of different speeds as members of the same team
– NIC teaming supports receive side scaling (RSS, discussed later in this chapter);
SET does not
– Switch independent mode is the only teaming mode available for SET,
whereas NIC teaming supports switch-independent, static, and LACP modes
– SET works best with 10 Gb adapters, which are expensive; NIC teaming works
well with slower NICs
Configuring Data Center
Bridging (1 of 2)
• Data center bridging (DCB) – provides additional features for use
in enterprise datacenters
– In particular when server clustering and SANs are in use
• DCB was designed to prevent delays in delivery of data in iSCSI
applications and create a “lossless” environment
• DCB improves performance in iSCSI deployments in the
following ways:
– Quality of Service (QoS)
– Deterministic performance
– DCB exchange
Configuring Data Center
Bridging (2 of 2)
• Windows Server 2016 supports DCB as an installable feature
using the Add Roles and Features Wizard in Server Manager
– Or the Install-WindowsFeature Data-Center-Bridging
PowerShell cmdlet
• To manage DCB configuration from Windows rather than accepting
settings pushed from the switch via DCBX:
– Set the DCBX Willing parameter to false using
Set-NetQosDcbxSetting -Willing $false
– Enable DCB on the network adapter using
Enable-NetAdapterQos Ethernet, replacing “Ethernet” with the
name of your network adapter
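Put together, a minimal DCB setup might look like the following sketch; the adapter name Ethernet is a placeholder:

```powershell
# Install the Data Center Bridging feature
Install-WindowsFeature Data-Center-Bridging

# Make Windows, not the switch, the source of DCB settings
Set-NetQosDcbxSetting -Willing $false

# Enable DCB on the adapter (replace Ethernet with your adapter name)
Enable-NetAdapterQos -Name Ethernet
```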
Configuring QoS with DCB
• Quality of Service (QoS) – allows you to configure priorities for
different types of network traffic
– So that delay-sensitive data is prioritized over regular data
• To enable QoS with DCB:
1. Install the DCB feature and enable it on your NICs as
described earlier
2. Create QoS policies. QoS policies define the traffic types
you wish to prioritize
3. Create QoS traffic classes that map traffic classes to the
QoS policies and assign a bandwidth percentage value to
each type of traffic
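The three steps above might be sketched end to end as follows; the policy and class names, the adapter name, and the priority and bandwidth values are illustrative examples:

```powershell
# 1. Install the DCB feature and enable it on the NIC
Install-WindowsFeature Data-Center-Bridging
Enable-NetAdapterQos -Name Ethernet

# 2. Create a QoS policy that tags SMB traffic with 802.1p priority 4
New-NetQosPolicy -Name SMBtraffic -SMB -PriorityValue8021Action 4

# 3. Map priority 4 to a traffic class with 25 percent of the bandwidth
New-NetQosTrafficClass -Name SMBtraffic -Priority 4 -Algorithm ETS -BandwidthPercentage 25
```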
Creating QoS Policies
• QoS policies are created using the New-NetQosPolicy cmdlet
• A QoS policy includes a descriptive name and defines the type of traffic you
wish to prioritize
– Specify a match condition (such as a destination port or the built-in -SMB filter)
and an IEEE 802.1p priority number from 0 to 7
• Example:
– This command creates a policy named SMBtraffic that assigns SMB traffic a priority value of 4:
New-NetQosPolicy SMBtraffic -SMB -PriorityValue8021Action 4
Creating QoS Traffic Classes
• Traffic classes map to QoS policies and assign bandwidth weight values to
each policy
– It is best to assign bandwidth values that add up to 100 so that each
bandwidth weight represents a percentage of total bandwidth
• This command creates the traffic class SMBtraffic, ties it to the QoS policy
with priority 4, and assigns a bandwidth weight of 25:
– New-NetQosTrafficClass SMBtraffic -Priority 4 -Algorithm ETS
-BandwidthPercentage 25
Virtual Machine Queue (1 of 2)
• Virtual machine queue (VMQ) – accelerates vNIC
performance by delivering packets from the external network
directly to the vNIC
– VMQ is enabled or disabled on each vNIC
• When enabled, a dedicated queue is created for the vNIC on
the physical NIC
– When packets arrive for the vNIC, they are delivered
directly to the VM
– Each queue can be serviced by a different CPU core on
the host computer
Virtual Machine Queue (2 of 2)
Receive Side Scaling (1 of 2)
• Receive Side Scaling (RSS) – a feature for network drivers that
efficiently distributes the processing of incoming network traffic
among multiple CPU cores
– Enabled by default on Windows, but may need to be enabled
on individual network adapters
• Enable RSS on the NIC using the Enable-NetAdapterRSS
cmdlet
– On systems with a GUI, you enable and configure RSS
through the Properties dialog box of the network connection
• Use Set-NetAdapterRSS to fine-tune RSS
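A minimal sketch of enabling and tuning RSS follows; the adapter name and processor values are placeholders to adjust for your hardware:

```powershell
# Enable RSS on a specific adapter
Enable-NetAdapterRss -Name Ethernet

# Fine-tune which CPU cores service the adapter's receive queues
Set-NetAdapterRss -Name Ethernet -BaseProcessorNumber 2 -MaxProcessors 4

# Review the resulting RSS settings
Get-NetAdapterRss -Name Ethernet
```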
Receive Side Scaling (2 of 2)
Virtual Receive Side Scaling
• Virtual receive side scaling (vRSS) – RSS for virtual adapters
– Requires that the physical NIC to which the virtual switch is
connected support VMQ
• The following cmdlet lists network adapters that support VMQ:
– Get-NetAdapterVmq
• To enable VMQ on a network adapter named Ethernet, use the
following cmdlet:
– Set-NetAdapterVmq Ethernet -Enabled $true
Virtual Machine Multi-Queue
• Virtual machine multi-queue (VMMQ) – a new feature in Windows Server 2016
that reduces the overhead in getting packets from the physical network to a VM
on the host
• With VMMQ, a VM can be assigned multiple queues on the host adapter with
each queue having a vCPU core to process its incoming data
– The VM must be assigned multiple cores to take advantage of this feature
• To enable VMMQ on a VM, run the following cmdlet on the host computer:
– Set-VMNetworkAdapter VMName -VmmqEnabled $true
• To see the status of VMMQ, use the following cmdlet and scroll down to the
VmmqEnabled row:
– Get-VMNetworkAdapter VMName | fl
SR-IOV (1 of 2)
• Single-root I/O virtualization (SR-IOV) allows a VM to bypass the
Hyper-V virtual switch and exchange data directly with a physical NIC
– Reduces latency and host CPU overhead for network-intensive VMs
• An SR-IOV-capable NIC exposes virtual functions (VFs) that can be
assigned directly to VMs
– Requires support from the NIC, the system BIOS/UEFI, and the
host chipset
• To use SR-IOV:
– Enable it when you create the virtual switch, as in
New-VMSwitch ExternalSwitch -NetAdapterName Ethernet
-EnableIov $true
– Enable it on the VM’s network adapter, as in
Set-VMNetworkAdapter VMName -IovWeight 100
SR-IOV (2 of 2)
SMB Direct and SMB
Multichannel (1 of 3)
• SMB Direct – a performance technology designed for the Server
Message Block (SMB) protocol
– Uses the capabilities of an RDMA capable network adapter
– Requires the host network adapter to be RDMA compatible, which
you can check by running the Get-NetAdapterRdma cmdlet
• To enable RDMA on an adapter that supports it, use the following PowerShell cmdlet or enable it in the network adapter’s properties:
– Set-NetAdapterRdma “AdapterName” -Enabled $true
SMB Direct and SMB
Multichannel (2 of 3)
• SMB Multichannel – provides fault tolerance and improved performance
in a connection between a client and a server providing an SMB share
– Included in SMB 3.0 and works automatically without configuration
• When a connection with a share is established, SMB probes for
additional paths to the server
– If found, the additional paths are used to increase performance and
provide fault tolerance
• View information about the connection using the Get-SmbConnection
cmdlet
– Then see if multichannel can be used by running the following
cmdlet:
Get-SmbMultichannelConnection
SMB Direct and SMB
Multichannel (3 of 3)
Software-Defined Networking (1 of 2)
• Software-defined networking (SDN) – allows administrators to centrally manage
aspects of key physical and virtual infrastructure devices such as routers,
switches, and access gateways
• Key components of SDN include:
– Hyper-V virtual switches, Hyper-V Network Virtualization (HNV), and
Network Controller
• Three network planes that define the functions of a network device:
– Data plane – the component that processes data as it travels through the
device
– Control plane – the component that determines how the device operates
and how it learns about its network environment
– Management plane – the component that allows the device to be
configured by a network administrator or by management software
through a user interface
Software-Defined Networking (2 of 2)
• SDN brings the following features and advantages to your
network:
– Manage virtual and physical devices with common tools from
a central location
– Define and deploy system-wide control and network security
policies, including traffic flows between virtual and physical
networks
– Define granular firewall policies to enhance network security
– Dynamically provision network resources to respond to
changing network conditions
– Enhance network performance
– Reduce infrastructure and network management costs
SDN Deployment Requirements
• Recommendations for your physical network environment:
– Windows Server 2016 Datacenter edition with Hyper-V must
be installed on one or more servers
– At least one RDMA-compatible 1 Gbps network interface
card; two or more 10 Gbps cards are desirable if you wish to
take advantage of advanced features such as VMQ and RSS
– DCB-compatible switches
– Administrative access to all physical network devices such
as routers and switches
Hyper-V Network Virtualization
• Hyper-V Network Virtualization (HNV) – provides a virtual network infrastructure
that decouples the virtual network topology from the physical network
– Allows you to move a VM using live migration to a server on a different
subnet without having to change the VM’s address and with no downtime
• Benefits of HNV:
– Provides network isolation and flexible placement of workloads without
using VLANs
– Supports cross-subnet live migration
– Maintains existing infrastructure during moves and migrations
– Supports policy-based configuration of virtual networks
– Decouples server and network administration because the physical
network is independent of the VM addressing scheme
Virtual Networks with HNV (1 of 5)
• HNV uses the term routing domain
– A virtual network with one or more subnets that are isolated
from other virtual networks
– Each routing domain is assigned a routing domain ID
(RDID)
• Each routing domain can contain one or more virtual subnets
– Each of which is assigned a virtual subnet ID (VSID)
– A VSID is a 24-bit number in the range 4096 to 16,777,214
that is unique throughout the datacenter
• Communication between virtual subnets within a single routing
domain is handled by the HNV distributed router
Virtual Networks with HNV (2 of 5)
Virtual Networks with HNV (3 of 5)
• Communicating outside the routing domain requires a tunneling
protocol such as Virtual Extensible LAN (VXLAN) or Network
Virtualization using Generic Routing Encapsulation (NVGRE)
• VXLAN and NVGRE are both encapsulation protocols that create
a tunnel through which packets travel
– Hides the addressing scheme of the underlying network
– VXLAN is the default protocol
• The tunneling protocols use the VSID in their protocol headers to
differentiate between virtual subnets
Virtual Networks with HNV (4 of 5)
Virtual Networks with HNV (5 of 5)
• With HNV, tunneling works by associating each virtual network adapter
with two IP addresses:
– Customer address (CA) – the address assigned to VMs by the
customer
– Provider address (PA) – the address used by the host network that
reflects the hosting provider’s physical network addressing scheme
HNV with VXLAN
• Virtual Extensible LAN (VXLAN) – a tunneling protocol that
operates over UDP port 4789
– Requires the installation of the Network Controller server role
– Should be used for the widest compatibility with networking
equipment vendors
• The Network Controller server role is only available in Windows
Server 2016
– You can’t use VXLAN if earlier versions of Windows Server
are part of the HNV deployment
HNV with NVGRE
• Network Virtualization using Generic Routing Encapsulation
(NVGRE) – a tunneling protocol that uses Generic Routing
Encapsulation (GRE) for the tunnel header
– One of the NVGRE header fields is the GRE Key
▪ Which contains the VSID
– The VSID identifies which virtual subnet the VM belongs to
• In Windows Server 2012 R2, you needed one PA per VSID
• In Windows Server 2016, you can use one PA per NIC team
member
Network Controller
• Network Controller – new server role in Windows Server 2016
– Only available in Datacenter edition
– A necessary component for implementing Software Defined
Networking version 2 (SDNv2)
• With Network Controller
– You can use PowerShell and Microsoft Azure for SDNv2 and
network virtualization management
• Network Controller is installed like any server role
– Using Server Manager or the Install-WindowsFeature
cmdlet
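A minimal sketch of the PowerShell installation follows; deploying a working Network Controller additionally requires certificates and an application or cluster configuration beyond the scope of this slide:

```powershell
# Install the Network Controller role with its management tools
Install-WindowsFeature NetworkController -IncludeManagementTools
```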
Software Load Balancing (1 of 2)
• Software load balancing (SLB) – an SDN feature that allows a hosting
provider to distribute tenant network traffic across virtual network
resources to increase virtual network performance and fault tolerance
• A datacenter has two types of network traffic:
– East-West traffic – network traffic between virtual networks and
between servers within the datacenter
– North-South traffic – network traffic traveling between the
datacenter and external clients that need access to services
provided by the datacenter
• SLB on Windows Server 2016 provides both East-West and North-
South network traffic load balancing implemented in Layer 4 (TCP and
UDP) protocols
Software Load Balancing (2 of 2)
• SLB supports virtual IP address to dynamic IP address mapping
and VLANs on virtual networks
– Virtual IP address (VIP) – an IP address exposed to the
public Internet that clients use to access resources on VMs
in the datacenter
– Dynamic IP address (DIP) – an address dynamically
assigned to a VM that is a member of an SLB pool
• SLB works by mapping VIPs to DIPs
SLB Components (1 of 2)
• SLB components:
– System Center Virtual Machine Manager (SCVMM) – a system
management product that must be purchased and installed on a Windows
Server 2016 server
– Hyper-V hosts and SDN-enabled virtual switches – Virtual networks
and VMs used for SLB run on Windows Server 2016 Hyper-V hosts
– Network Controller – Processes SLB commands and distributes SLB
policies among Hyper-V hosts and other SLB components
– SLB MUX – maps VIPs to DIPs on incoming North-South traffic and
forwards the traffic to the selected DIP
– SLB Host Agent – interfaces between Hyper-V hosts and Network
Controller to distribute SLB polices and configure Hyper-V virtual switches
for SLB
– BGP router – routes traffic to the SLB MUX
SLB Components (2 of 2)
Windows Server Gateways
• Routing traffic between virtual networks and the physical network
requires a Windows Server Gateway
• Windows Server Gateway is a software router
– Also known as RAS Gateway
• RAS Gateway implementations:
– Layer 3 forwarder
– GRE tunneling
– Site-to-site VPN
– NAT gateway
• RAS Gateway implementations are multitenant-aware
– The gateway software supports multiple isolated virtual
networks
Distributed Firewall Policies and
Security Groups (1 of 3)
• Distributed firewall policies
– A new feature in Windows Server 2016
– Enable a network administrator to manage firewall policies for all of
a datacenter’s virtual networks
– Implemented using the Datacenter Firewall service
• Datacenter firewall features:
– Scalable software-based firewall solution that a cloud provider can
offer to multitenant virtual networks
– Operating system-independent protection of VMs
– Flexible protection of VMs
– Ability of tenants to define firewall rules specific to their
requirements
Distributed Firewall Policies and
Security Groups (2 of 3)
• Distributed Firewall Manager works within Network Controller
– The Northbound interface is an application programming
interface through which commands and policies are sent to
the SDN controller
– The Southbound interface is the interface through which
policies are sent from the SDN controller to the network
device
Distributed Firewall Policies and
Security Groups (3 of 3)
Network Security Groups
• Network Security Group (NSG)
– A named security policy based on access control lists (ACL)
– ACL – a set of rules that define what traffic is allowed to pass through a
network interface
• NSGs can be applied to host NICs attached to VMs, individual VMs, or entire
virtual subnets
• ACL rules have the following properties:
– Rule name
– Protocol
– Source and Destination port
– Source and Destination address
– Direction
– Priority
– Access
Chapter Summary (1 of 3)
• NIC teaming allows multiple network interfaces to work in tandem to increase
available bandwidth and provide load balancing and fault tolerance
• The three teaming modes are Switch Independent, Static Teaming, and Link
Aggregation Control Protocol (LACP)
• You can configure NIC teaming on VMs as well as on physical computers
• Switch Embedded Teaming (SET) is a new feature that allows up to eight
identical physical adapters on the host system to be configured as a team and
mapped to a virtual switch
• Data center bridging (DCB) is an enhancement to Ethernet that provides
additional features for use in enterprise data centers
• Quality of Service (QoS) in a network allows you to configure priorities for
different types of network traffic
Chapter Summary (2 of 3)
• Virtual machine queue (VMQ) accelerates virtual NIC (vNIC) performance by
delivering packets from the external network directly to the vNIC
• Receive side scaling (RSS) is a feature for network drivers that efficiently
distributes the processing of incoming network traffic among multiple CPU
cores
• Virtual machine multi-queue (VMMQ) is a new feature that reduces the
overhead in getting packets from the physical network to a VM on the host
• Server Message Block (SMB) is the primary Windows protocol for file sharing
• Software-defined networking (SDN) is a collection of technologies designed to
make the network infrastructure flexible
• SDN is composed of three network planes: the data plane, the control plane,
and the management plane
Chapter Summary (3 of 3)
• Virtual Extensible LAN (VXLAN) is a standard and widely supported
tunneling protocol that allows communication between virtual networks
and the physical network
• Network Controller is a new server role in Windows Server 2016
• Software load balancing (SLB) is an SDN feature that allows a hosting
provider to distribute tenant network traffic across virtual network
resources
• Windows Server Gateway, also known as RAS Gateway, is a software
router that can be deployed in both single-tenant and multitenant
datacenters
• Distributed firewall policies enable a network administrator to manage
firewall policies for all of a datacenter’s virtual networks