Saturday, October 25, 2014

Guide to Creating and Configuring a Server Cluster under Windows Server 2003 White Paper

http://technet.microsoft.com/en-us/library/cc778252(v=ws.10).aspx


Introduction (Guide to Creating and Configuring a Server Cluster under Windows Server 2003 White Paper)

Updated: November 11, 2003
Applies To: Windows Server 2003 with SP1
A server cluster is a group of independent servers working collectively and running the Microsoft Cluster service (MSCS). Server clusters provide high availability, failback, scalability, and manageability for resources and applications.
Server clusters allow client access to applications and resources in the event of failures and planned outages. If one of the servers in the cluster is unavailable because of a failure or maintenance requirements, resources and applications move to other available cluster nodes.
For Windows Clustering solutions, the term “high availability” is used rather than “fault tolerant.” Fault-tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to provide near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than a Windows Clustering solution because organizations must pay for redundant hardware that waits in an idle state for a fault.
Server clusters do not guarantee non-stop operation, but they do provide sufficient availability for most mission-critical applications. The cluster service can monitor applications and resources and automatically recognize and recover from many failure conditions. This provides flexibility in managing the workload within a cluster. It also improves overall system availability.
Cluster service benefits include:
  • High Availability: With server clusters, ownership of resources such as disk drives and Internet protocol (IP) addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
  • Failback: The Cluster service will automatically re-assign the workload in a cluster to its predetermined preferred owner when a failed server comes back online. This feature can be configured, but is disabled by default.
  • Manageability: You can use the Cluster Administrator tool (CluAdmin.exe) to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to different servers within the cluster. Cluster Administrator can be used to manually balance server workloads and to free servers for planned maintenance. You can also monitor the status of the cluster, all nodes, and resources from anywhere on the network.
  • Scalability: Cluster services can grow to meet increased demand. When the overall load for a cluster-aware application exceeds the cluster’s capabilities, additional nodes can be added.
This document provides instructions for creating and configuring a server cluster with servers connected to a shared cluster storage device and running Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition. Intended to guide you through the process of installing a typical cluster, this document does not explain how to install clustered applications. Windows Clustering solutions that implement non-traditional quorum models, such as Majority Node Set (MNS) clusters and geographically dispersed clusters, also are not discussed. For additional information about server cluster concepts as well as installation and configuration procedures, see the Windows Server 2003 Online Help.

Checklists for Server Cluster Configuration

This checklist helps you prepare for installation. Step-by-step instructions begin after the checklist.

Software Requirements

  • Microsoft Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition installed on all computers in the cluster.
  • A name resolution method such as Domain Name System (DNS), DNS dynamic update protocol, Windows Internet Name Service (WINS), HOSTS, and so on.
  • An existing domain model.
  • All nodes must be members of the same domain.
  • A domain-level account that is a member of the local administrators group on each node. A dedicated account is recommended.

Hardware Requirements

  • Clustering hardware must be on the cluster service Hardware Compatibility List (HCL). To find the latest version of the cluster service HCL, go to the Windows Hardware Compatibility List at http://www.microsoft.com/whdc/hcl/default.mspx, and then search for cluster. The entire solution must be certified on the HCL, not just the individual components. For additional information, see the following article in the Microsoft Knowledge Base:

    309395 The Microsoft Support Policy for Server Clusters and the Hardware

    Note
    If you are installing this cluster on a storage area network (SAN) and plan to have multiple devices and clusters sharing the SAN with a cluster, the solution must also be on the “Cluster/Multi-Cluster Device” Hardware Compatibility List. For additional information, see the following article in the Microsoft Knowledge Base: 304415 Support for Multiple Clusters Attached to the Same SAN Device
  • Two mass storage device controllers—Small Computer System Interface (SCSI) or Fibre Channel: one controller for the local system disk on which the operating system (OS) is installed, and a separate peripheral component interconnect (PCI) storage controller for the shared disks.
  • Two PCI network adapters on each node in the cluster.
  • Storage cables to attach the shared storage device to all computers. Refer to the manufacturer's instructions for configuring storage devices. See the appendix that accompanies this article for additional information about specific configuration needs when using SCSI or Fibre Channel.
  • All hardware should be identical, slot for slot, card for card, BIOS, firmware revisions, and so on, for all nodes. This makes configuration easier and eliminates compatibility problems.

Network Requirements

  • A unique NetBIOS name.
  • Static IP addresses for all network interfaces on each node.

    Note
    Server Clustering does not support the use of IP addresses assigned from Dynamic Host Configuration Protocol (DHCP) servers.
  • Access to a domain controller. If the cluster service is unable to authenticate the user account used to start the service, it could cause the cluster to fail. It is recommended that you have a domain controller on the same local area network (LAN) as the cluster to ensure availability.
  • Each node must have at least two network adapters—one for connection to the client public network and the other for the node-to-node private cluster network. A dedicated private network adapter is required for HCL certification.
  • All nodes must have two physically independent LANs or virtual LANs for public and private communication.
  • If you are using fault-tolerant network cards or network adapter teaming, verify that you are using the most recent firmware and drivers. Check with your network adapter manufacturer for cluster compatibility.
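A quick way to confirm the static-address requirement above on each node is from a command prompt. This is a minimal check added for convenience, not part of the original checklist.

    ipconfig /all
    rem In the output for every cluster network connection, confirm that
    rem "DHCP Enabled" reads No and that the expected static address is listed.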

Shared Disk Requirements

  • An HCL-approved external disk storage unit connected to all computers. This will be used as the clustered shared disk. Some type of a hardware redundant array of independent disks (RAID) is recommended.
  • All shared disks, including the quorum disk, must be physically attached to a shared bus.

    Note
    The requirement above does not hold true for Majority Node Set (MNS) clusters, which are not covered in this guide.
  • Shared disks must be on a different controller than the one used by the system drive.
  • Creating multiple logical drives at the hardware level in the RAID configuration is recommended rather than using a single logical disk that is then divided into multiple partitions at the operating system level. This is different from the configuration commonly used for stand-alone servers. However, it enables you to have multiple disk resources and to do Active/Active configurations and manual load balancing across the nodes in the cluster.
  • A dedicated disk with a minimum size of 50 megabytes (MB) to use as the quorum device. A partition of at least 500 MB is recommended for optimal NTFS file system performance.
  • Verify that disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Refer to the manufacturer’s documentation for adapter-specific instructions.
  • SCSI devices must be assigned unique SCSI identification numbers and properly terminated according to the manufacturer’s instructions. See the appendix with this article for information about installing and terminating SCSI devices.
  • All shared disks must be configured as basic disks. For additional information, see the following article in the Microsoft Knowledge Base:

    237853 Dynamic Disk Configuration Unavailable for Server Cluster Disk Resources
  • Software fault tolerance is not natively supported on cluster shared disks.
  • All shared disks must be configured as master boot record (MBR) disks on systems running the 64-bit versions of Windows Server 2003.
  • All partitions on the clustered disks must be formatted as NTFS.
  • Hardware fault-tolerant RAID configurations are recommended for all disks.
  • A minimum of two logical shared drives is recommended.
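Before moving on, the basic-disk and MBR requirements above can be spot-checked from the first node with the diskpart utility included in Windows Server 2003. This is a hedged verification sketch; the disk number is an example, and the output columns may vary slightly between builds.

    diskpart
    DISKPART> list disk
    rem An asterisk in the Dyn column marks a dynamic disk (revert it to basic);
    rem an asterisk in the Gpt column marks a GPT disk, which is not supported for clustering.
    DISKPART> select disk 1
    DISKPART> detail disk
    DISKPART> exit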

Cluster Installation


Installation Overview

During the installation process, some nodes will be shut down while others are being installed. This step helps guarantee that data on disks attached to the shared bus is not lost or corrupted. This can happen when multiple nodes simultaneously try to write to a disk that is not protected by the cluster software. The default behavior of how new disks are mounted has been changed in Windows Server 2003 from the behavior in the Microsoft® Windows® 2000 operating system. In Windows Server 2003, logical disks that are not on the same bus as the boot partition will not be automatically mounted and assigned a drive letter. This helps ensure that the server will not mount drives that could possibly belong to another server in a complex SAN environment. Although the drives will not be mounted, it is still recommended that you follow the procedures below to be certain the shared disks will not become corrupted.
Use the table below to determine which nodes and storage devices should be turned on during each step.
The steps in this guide are for a two-node cluster. However, if you are installing a cluster with more than two nodes, the Node 2 column lists the required state of all other nodes.

 

Step | Node 1 | Node 2 | Storage | Comments
Setting up networks | On | On | Off | Verify that all storage devices on the shared bus are turned off. Turn on all nodes.
Setting up shared disks | On | Off | On | Shut down all nodes. Turn on the shared storage, then turn on the first node.
Verifying disk configuration | Off | On | On | Turn off the first node, then turn on the second node. Repeat for nodes 3 and 4 if necessary.
Configuring the first node | On | Off | On | Turn off all nodes; turn on the first node.
Configuring the second node | On | On | On | Turn on the second node after the first node is successfully configured. Repeat for nodes 3 and 4 as necessary.
Post-installation | On | On | On | All nodes should be on.
Several steps must be taken before configuring the Cluster service software. These steps are:
  • Installing Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition operating system on each node.
  • Setting up networks.
  • Setting up disks.
Perform these steps on each cluster node before proceeding with the installation of cluster service on the first node.
To configure the cluster service, you must be logged on with an account that has administrative permissions to all nodes. Each node must be a member of the same domain. If you choose to make one of the nodes a domain controller, have another domain controller available on the same subnet to eliminate a single point of failure and enable maintenance on that node.

Installing the Windows Server 2003 Operating System

Refer to the documentation you received with the Windows Server 2003 operating system package to install the system on each node in the cluster.
Before configuring the cluster service, you must be logged on locally with a domain account that is a member of the local administrators group.
Note
The installation will fail if you attempt to join a node to a cluster that has a blank password for the local administrator account. For security reasons, Windows Server 2003 prohibits blank administrator passwords.

Setting Up Networks

Each cluster node requires at least two network adapters with two or more independent networks, to avoid a single point of failure. One is to connect to a public network, and one is to connect to a private network consisting of cluster nodes only. Servers with multiple network adapters are referred to as “multi-homed.” Because multi-homed servers can be problematic, it is critical that you follow the network configuration recommendations outlined in this document.
Microsoft requires that you have two Peripheral Component Interconnect (PCI) network adapters in each node to be certified on the Hardware Compatibility List (HCL) and supported by Microsoft Product Support Services. Configure one of the network adapters on your production network with a static IP address, and configure the other network adapter on a separate network with another static IP address on a different subnet for private cluster communication.
Communication between server cluster nodes is critical for smooth cluster operations. Therefore, you must ensure that the networks used for cluster communication are configured optimally and follow all hardware compatibility list requirements.
The private network adapter is used for node-to-node communication, cluster status information, and cluster management. Each node’s public network adapter connects the cluster to the public network where clients reside and should be configured as a backup route for internal cluster communication. To do so, configure the roles of these networks as either "Internal Cluster Communications Only" or "All Communications" for the Cluster service.
Additionally, each cluster network must fail independently of all other cluster networks. This means that two cluster networks must not have a component in common that can cause both to fail simultaneously. For example, the use of a multiport network adapter to attach a node to two cluster networks would not satisfy this requirement in most cases because the ports are not independent.
To eliminate possible communication issues, remove all unnecessary network traffic from the network adapter that is set to Internal Cluster communications only (this adapter is also known as the heartbeat or private network adapter).
To verify that all network connections are correct, private network adapters must be on a separate logical network from the public adapters. This can be accomplished by using a cross-over cable in a two-node configuration or a dedicated hub (not a smart hub) in a configuration of more than two nodes. Do not use a switch, smart hub, or any other routing device for the heartbeat network.
Note
Cluster heartbeats cannot be forwarded through a routing device because their Time to Live (TTL) is set to 1. The public network adapters must be connected only to the public network. If you have a virtual LAN, then the latency between the nodes must be less than 500 milliseconds (ms). Also, in Windows Server 2003, heartbeats in Server Clustering have been changed to multicast; therefore, you may want to make a MADCAP server available to assign the multicast addresses. For additional information, see the following article in the Microsoft Knowledge Base: 307962 Multicast Support Enabled for the Cluster Heartbeat.
Figure 1 below outlines a four-node cluster configuration.
Figure 1. Connections for a four-node cluster.

General Network Configuration

Note
This guide assumes that you are running the default Start menu. The steps may be slightly different if you are running the Classic Start menu. Also, which network adapter is private and which is public depends upon your wiring. For the purposes of this white paper, the first network adapter (Local Area Connection) is connected to the public network, and the second network adapter (Local Area Connection 2) is connected to the private cluster network. Your network may be different.

To rename the local area network icons

It is recommended that you change the names of the network connections for clarity. For example, you might want to change the name of Local Area Connection 2 to something like Private. Renaming will help you identify a network and correctly assign its role.
  1. Click Start, point to Control Panel, right-click Network Connections, and then click Open.
  2. Right-click the Local Area Connection 2 icon.
  3. Click Rename.
  4. Type Private in the text box, and then press ENTER.
  5. Repeat steps 1 through 3, and then rename the public network adapter as Public.

    Figure 2. Renamed icons in the Network Connections window.
  6. The renamed icons should look like those in Figure 2 above. Close the Network Connections window. The new connection names will appear in Cluster Administrator and automatically replicate to all other cluster nodes as they are brought online.

To configure the binding order networks on all nodes

  1. Click Start, point to Control Panel, right-click Network Connections, and then click Open.
  2. On the Advanced menu, click Advanced Settings.
  3. In the Connections box, make sure that your bindings are in the following order, and then click OK:

    1. Public
    2. Private
    3. Remote Access Connections

Configuring the Private Network Adapter

  1. Right-click the network connection for your heartbeat adapter, and then click Properties.
  2. On the General tab, make sure that only the Internet Protocol (TCP/IP) check box is selected, as shown in Figure 3 below. Click to clear the check boxes for all other clients, services, and protocols.

    Figure 3. Click to select only the Internet Protocol check box in the Private Properties dialog box.
  3. If you have a network adapter that is capable of transmitting at multiple speeds, you should manually specify a speed and duplex mode. Do not use an auto-select setting for speed, because some adapters may drop packets while determining the speed. The speed for the network adapters must be hard set (manually set) to be the same on all nodes according to the card manufacturer's specification. If you are not sure of the supported speed of your card and connecting devices, Microsoft recommends you set all devices on that path to 10 megabits per second (Mbps) and Half Duplex, as shown in Figure 4 below. The amount of information that is traveling across the heartbeat network is small, but latency is critical for communication. This configuration will provide enough bandwidth for reliable communication. All network adapters in a cluster attached to the same network must be configured identically to use the same Duplex Mode, Link Speed, Flow Control, and so on. Contact your adapter's manufacturer for specific information about appropriate speed and duplex settings for your network adapters.

    Figure 4. Setting the speed and duplex for all adapters.

     

    Note
    Microsoft does not recommend that you use any type of fault-tolerant adapter or "teaming" for the heartbeat. If you require redundancy for your heartbeat connection, use multiple network adapters set to Internal Communication Only and define their network priority in the cluster configuration. Because issues have been seen with early multi-ported network adapters, verify that your firmware and driver are at the most current revision if you use this technology. Contact your network adapter manufacturer for information about compatibility on a server cluster. For more information, see the following article in the Microsoft Knowledge Base: 254101 Network Adapter Teaming and Server Clustering.
  4. Click Internet Protocol (TCP/IP), and then click Properties.
  5. On the General tab, verify that you have selected a static IP address that is not on the same subnet or network as any other public network adapter. It is recommended that you put the private network adapter in one of the following private network ranges:

    • 10.0.0.0 through 10.255.255.255 (Class A)
    • 172.16.0.0 through 172.31.255.255 (Class B)
    • 192.168.0.0 through 192.168.255.255 (Class C)
    An example of a good IP address to use for the private adapters is 10.10.10.10 on node 1 and 10.10.10.11 on node 2 with a subnet mask of 255.0.0.0, as shown in Figure 5 below. Be sure that this is a completely different IP address scheme than the one used for the public network.

     

    Note
    For more information about valid IP addressing for a private network, see the following article in the Microsoft Knowledge Base: 142863 Valid IP Addressing for a Private Network.
    Figure 5. An example of an IP address to use for private adapters.
  6. Verify that there are no values defined in the Default Gateway box or under Use the Following DNS server addresses.
  7. Click the Advanced button.
  8. On the DNS tab, verify that no values are defined. Make sure that the Register this connection's addresses in DNS and Use this connection's DNS suffix in DNS registration check boxes are cleared.
  9. On the WINS tab, verify that there are no values defined. Click Disable NetBIOS over TCP/IP as shown in Figure 6 on the next page.

    Figure 6. Verify that no values are defined on the WINS tab.
  10. When you close the dialog box, you may receive the following prompt: “This connection has an empty primary WINS address. Do you want to continue?” If you receive this prompt, click Yes.
  11. Complete steps 1 through 10 on all other nodes in the cluster with different static IP addresses.
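The private-adapter settings from the procedure above can also be applied from a command prompt with netsh. This is a sketch under the example addressing used in this guide, not the documented procedure; the connection name Private and the addresses are examples, and the exact syntax can be confirmed with netsh interface ip set address /?.

    rem Assign the example static address to the Private connection on node 1.
    netsh interface ip set address name="Private" source=static addr=10.10.10.10 mask=255.0.0.0
    rem Remove any DNS servers and suppress DNS registration on this connection.
    netsh interface ip set dns name="Private" source=static addr=none register=none
    rem Repeat on node 2 with addr=10.10.10.11, and so on for additional nodes.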

Configuring the Public Network Adapter

Note
If IP addresses are obtained via DHCP, access to cluster nodes may be unavailable if the DHCP server is inaccessible. For this reason, static IP addresses are required for all interfaces on a server cluster. Keep in mind that cluster service will only recognize one network interface per subnet. If you need assistance with TCP/IP addressing in Windows Server 2003, please see the Online Help.

Verifying Connectivity and Name Resolution

To verify that the private and public networks are communicating properly, ping all IP addresses from each node. You should be able to ping all IP addresses, locally and on the remote nodes.
To verify name resolution, ping each node from a client using the node's machine name instead of its IP address. It should only return the IP address for the public network. You may also want to try a ping -a command to do a reverse lookup on the IP addresses.
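For example, from node 1 under the example addressing used in this guide (the node name and public addresses below are illustrative), the checks might look like the following.

    rem Private address of node 2 (example from this guide).
    ping 10.10.10.11
    rem Public address of node 2 (example address).
    ping 172.26.204.11
    rem Name resolution; this should return only the public IP address.
    ping node2
    rem Reverse lookup on the public address.
    ping -a 172.26.204.11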

Verifying Domain Membership

All nodes in the cluster must be members of the same domain and be able to access a domain controller and a DNS server. They can be configured as member servers or domain controllers. You should have at least one domain controller on the same network segment as the cluster. For high availability, another domain controller should also be available to remove a single point of failure. In this guide, all nodes are configured as member servers.
There are instances where the nodes may be deployed in an environment where there are no pre-existing Microsoft® Windows NT® 4.0 domain controllers or Windows Server 2003 domain controllers. This scenario requires at least one of the cluster nodes to be configured as a domain controller. However, in a two-node server cluster, if one node is a domain controller, then the other node also must be a domain controller. In a four-node cluster implementation, it is not necessary to configure all four nodes as domain controllers. However, when following a “best practices” model and having at least one backup domain controller, at least one of the remaining three nodes should be configured as a domain controller. A cluster node must be promoted to a domain controller by using the DCPromo tool before the cluster service is configured.
The dependence in Windows Server 2003 on the DNS further requires that every node that is a domain controller also must be a DNS server if another DNS server that supports dynamic updates and/or SRV records is not available (Active directory integrated zones recommended).
The following issues should be considered when deploying cluster nodes as domain controllers:
  • If one cluster node in a two-node cluster is a domain controller, the other node must be a domain controller
  • There is overhead associated with running a domain controller. An idle domain controller can use anywhere between 130 and 140 MB of RAM, which includes having the Cluster service running. There is also increased network traffic from replication, because these domain controllers have to replicate with other domain controllers in the domain and across domains.
  • If the cluster nodes are the only domain controllers, then each must be a DNS server as well. They should point to each other for primary DNS resolution and to themselves for secondary resolution.
  • The first domain controller in the forest/domain will take on all Operations Master Roles. You can redistribute these roles to any node. However, if a node fails, the Operations Master Roles assumed by that node will be unavailable. Therefore, it is recommended that you do not run Operations Master Roles on any cluster node. This includes the Schema Master, Domain Naming Master, Relative ID Master, PDC Emulator, and Infrastructure Master. These functions cannot be clustered for high availability with failover.
  • Clustering other applications such as Microsoft® SQL Server™ or Microsoft® Exchange Server in a scenario where the nodes are also domain controllers may not be optimal due to resource constraints. This configuration should be thoroughly tested in a lab environment before deployment.
Because of the complexity and overhead involved in making cluster-nodes domain controllers, it is recommended that all nodes should be member servers.

Setting Up a Cluster User Account

The Cluster service requires a domain user account that is a member of the Local Administrators group on each node, under which the Cluster service can run. Because setup requires a user name and password, this user account must be created before configuring the Cluster service. This user account should be dedicated only to running the Cluster service, and should not belong to an individual.
Note
The cluster service account does not need to be a member of the Domain Administrators group. For security reasons, granting domain administrator rights to the cluster service account is not recommended.
The cluster service account requires the following rights to function properly on all nodes in the cluster. The Cluster Configuration Wizard grants the following rights automatically:
  • Act as part of the operating system
  • Adjust memory quotas for a process
  • Back up files and directories
  • Increase scheduling priority
  • Log on as a service
  • Restore files and directories
For additional information, see the following article in the Microsoft Knowledge Base:
269229 How to Manually Re-Create the Cluster Service Account

To set up a cluster user account

  1. Click Start, point to All Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
  2. Click the plus sign (+) to expand the domain if it is not already expanded.
  3. Right-click Users, point to New, and then click User.
  4. Type the cluster name, as shown in Figure 7 below, and then click Next.

    Figure 7. Type the cluster name.
  5. Set the password settings to User Cannot Change Password and Password Never Expires. Click Next, and then click Finish to create this user.

     

    Note
    If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the cluster service configuration on each node before password expiration. For additional information, see the following article in the Microsoft Knowledge Base: 305813 How to Change the Cluster Service Account Password.
  6. Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in, and then click Properties on the shortcut menu.
  7. Click Add Members to a Group.
  8. Click Administrators, and then click OK. This gives the new user account administrative privileges on this computer.
  9. Quit the Active Directory Users and Computers snap-in.
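If you prefer to script the account creation, the dsadd and net commands included with Windows Server 2003 can produce an equivalent result. This is a sketch only; the distinguished name, domain name EXAMPLE, and account name are placeholders for your environment.

    rem Create the dedicated Cluster account with a non-expiring password (run once in the domain).
    dsadd user "CN=Cluster,CN=Users,DC=example,DC=com" -samid Cluster -pwd * -canchpwd no -pwdneverexpires yes
    rem Add the account to the local Administrators group (run on every node).
    net localgroup Administrators EXAMPLE\Cluster /add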

Setting up Shared Disks

Important
To avoid corrupting the cluster disks, make sure that Windows Server 2003 and the Cluster service are installed, configured, and running on at least one node before you start an operating system on another node. It is critical to never have more than one node on until the Cluster service is configured.
To proceed, turn off all nodes. Turn on the shared storage devices, and then turn on node 1.

About the Quorum Disk

The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster and maintain consistency. The following quorum disk procedures are recommended:
  • Create a logical drive with a minimum size of 50 MB to be used as a quorum disk; 500 MB is optimal for NTFS.
  • Dedicate a separate disk as a quorum resource.

     

    Important
    A quorum disk failure could cause the entire cluster to fail; therefore, it is strongly recommended that you use a volume on a hardware RAID array. Do not use the quorum disk for anything other than cluster management.
    The quorum resource plays a crucial role in the operation of the cluster. In every cluster, a single resource is designated as the quorum resource. A quorum resource can be any Physical Disk resource with the following functionality:
  • It replicates the cluster registry to all other nodes in the server cluster. By default, the cluster registry is stored in the following location on each node: %SystemRoot%\Cluster\Clusdb. The cluster registry is then replicated to the MSCS\Chkxxx.tmp file on the quorum drive. These files are exact copies of each other. The MSCS\Quolog.log file is a transaction log that maintains a record of all changes to the checkpoint file. This means that nodes that were offline can have these changes appended when they rejoin the cluster.
  • If there is a loss of communication between cluster nodes, the challenge response protocol is initiated to prevent a "split brain" scenario. In this situation, the owner of the quorum disk resource becomes the only owner of the cluster and all the resources. The owner then makes the resources available for clients. When the node that owns the quorum disk functions incorrectly, the surviving nodes arbitrate to take ownership of the device. For additional information, see the following article in the Microsoft Knowledge Base: 309186 How the Cluster Service Takes Ownership of a Disk on the Shared Bus.
During the cluster service installation, you must provide the drive letter for the quorum disk. The letter Q is commonly used as a standard, and Q is used in the example.

To configure shared disks

  1. Make sure that only one node is turned on.
  2. Right-click My Computer, click Manage, and then expand Storage.
  3. Double-click Disk Management.
  4. If you connect a new drive, then it automatically starts the Write Signature and Upgrade Disk Wizard. If this happens, click Next to step through the wizard.

     

    Note
    The wizard automatically sets the disk to dynamic. To reset the disk to basic, right-click Disk n (where n specifies the disk that you are working with), and then click Revert to Basic Disk.
  5. Right-click unallocated disk space.
  6. Click New Partition.
  7. The New Partition Wizard begins. Click Next.
  8. Select the Primary Partition partition type. Click Next.
  9. The default is set to maximum size for the partition size. Click Next. (Multiple logical disks are recommended over multiple partitions on one disk.)
  10. Use the drop-down box to change the drive letter. Use a drive letter that is farther down the alphabet than the default enumerated letters. Commonly, the drive letter Q is used for the quorum disk, then R, S, and so on for the data disks. For additional information, see the following article in the Microsoft Knowledge Base:

    318534 Best Practices for Drive-Letter Assignments on a Server Cluster

     

    Note
    If you are planning on using volume mount points, do not assign a drive letter to the disk. For additional information, see the following article in the Microsoft Knowledge Base: 280297 How to Configure Volume Mount Points on a Clustered Server.
  11. Format the partition using NTFS. In the Volume Label box, type a name for the disk. For example, Drive Q, as shown in Figure 8 below. It is critical to assign drive labels for shared disks, because this can dramatically reduce troubleshooting time in the event of a disk recovery situation.

    Figure 8. It is critical to assign drive labels for shared disks.
If you are installing a 64-bit version of Windows Server 2003, verify that all disks are formatted as MBR. Global Partition Table (GPT) disks are not supported as clustered disks. For additional information, see the following article in the Microsoft Knowledge Base:
284134 Server Clusters Do Not Support GPT Shared Disks
Verify that all shared disks are formatted as NTFS and designated as MBR Basic.
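For reference, the partitioning and formatting steps above can also be performed from a command prompt. This is a hedged sketch that assumes the shared disk is disk 1 and that Q is the chosen drive letter; run it only while a single node is turned on.

    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary
    DISKPART> assign letter=Q
    DISKPART> exit
    rem Format the new partition as NTFS and give it a matching volume label.
    format Q: /FS:NTFS /V:DriveQ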

To verify disk access and functionality

  1. Start Windows Explorer.
  2. Right-click one of the shared disks (such as Drive Q:\), click New, and then click Text Document.
  3. Verify that you can successfully write to the disk and that the file was created.
  4. Select the file, and then press the Del key to delete it from the clustered disk.
  5. Repeat steps 1 through 4 for all clustered disks to verify they can be correctly accessed from the first node.
  6. Turn off the first node, turn on the second node, and repeat steps 1 through 4 to verify disk access and functionality. Assign drive letters to match the corresponding drive labels. Repeat again for any additional nodes. Verify that all nodes can read and write from the disks, turn off all nodes except the first one, and then continue with this white paper.
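The same read/write check can be run from a command prompt on each node; Q: is the example quorum drive letter used in this guide.

    rem Create, read, and then delete a test file on the clustered disk.
    echo cluster disk test > Q:\test.txt
    type Q:\test.txt
    del Q:\test.txt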

Configuring the Cluster Service

You must supply all initial cluster configuration information in the first installation phase. This is accomplished using the Cluster Configuration Wizard.
As seen in the flow chart, the Form path (Create a new cluster) and the Join path (Add nodes) differ in places, but they share several of the same pages. Namely, the Credential Login, Analyze, and Re-Analyze and Start Service pages are the same. There are minor differences in the following pages: Welcome, Select Computer, and Cluster Service Account. In the next two sections, you will step through the wizard pages presented on each of these configuration paths. In the third section, after you follow the step-through sections, this white paper describes in detail the Analyze and Re-Analyze and Start Service pages and what the information provided on these screens means.
(Flow chart showing the Create a new cluster and Add nodes configuration paths.)
Note
During Cluster service configuration on node 1, you must turn off all other nodes. All shared storage devices should be turned on.

To configure the first node

  1. Click Start, click All Programs, click Administrative Tools, and then click Cluster Administrator.
  2. When prompted by the Open Connection to Cluster Wizard, click Create new cluster in the Action drop-down list, as shown in Figure 9 below.

    Figure 9. The Action drop-down list.
  3. Verify that you have the necessary prerequisites to configure the cluster, as shown in Figure 10 below. Click Next.

    Figure 10. A list of prerequisites is part of the New Server Cluster Wizard Welcome page.
  4. Type a unique NetBIOS name for the cluster (up to 15 characters), and then click Next. (In the example shown in Figure 11 below, the cluster is named MyCluster.) Adherence to DNS naming rules is recommended. For additional information, see the following articles in the Microsoft Knowledge Base:

    163409 NetBIOS Suffixes (16th Character of the NetBIOS Name)

    254680 DNS Namespace Planning

    Figure 11. Adherence to DNS naming rules is recommended when naming the cluster.
  5. If you are logged on locally with an account that is not a Domain Account with Local Administrative privileges, the wizard will prompt you to specify an account. This is not the account the Cluster service will use to start.

     

    Note
    If you have appropriate credentials, the prompt mentioned in step 5 and shown in Figure 12 below may not appear.
    Figure 12. The New Server Cluster Wizard prompts you to specify an account.
  6. Because it is possible to configure clusters remotely, you must verify or type the name of the server that is going to be used as the first node to create the cluster, as shown in Figure 13 below. Click Next.

    Figure 13. Select the name of the computer that will be the first node in the cluster.

     

    Note
    The Install wizard verifies that all nodes can see the shared disks identically. In a complex storage area network, the target identifiers (TIDs) for the disks may sometimes be different, and the Setup program may incorrectly detect that the disk configuration is not valid for Setup. To work around this issue you can click the Advanced button, and then click Advanced (minimum) configuration. For additional information, see the following article in the Microsoft Knowledge Base: 331801 Cluster Setup May Not Work When You Add Nodes
  7. Figure 14 below illustrates that the Setup process will now analyze the node for possible hardware or software issues that may cause problems with the installation. Review any warnings or error messages. You can also click the Details button to get detailed information about each one.

    Figure 14. The Setup process analyzes the node for possible hardware or software problems.
  8. Type the unique cluster IP address (in this example 172.26.204.10), and then click Next.

    As shown in Figure 15 below, the New Server Cluster Wizard automatically associates the cluster IP address with one of the public networks by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only, and not for client connections.

    Figure 15. The New Server Cluster Wizard automatically associates the cluster IP address with one of the public networks.
  9. Type the user name and password of the cluster service account that was created during pre-installation. (In the example in Figure 16 below, the user name is “Cluster”). Select the domain name in the Domain drop-down list, and then click Next.

    At this point, the Cluster Configuration Wizard validates the user account and password.

    Figure 16. The wizard prompts you to provide the account that was created during pre-installation.
  10. Review the Summary page, shown in Figure 17 below, to verify that all the information that is about to be used to create the cluster is correct. If desired, you can use the quorum button to change the quorum disk designation from the default auto-selected disk.

    The summary information displayed on this screen can be used to reconfigure the cluster in the event of a disaster recovery situation. It is recommended that you save and print a hard copy to keep with the change management log at the server.

     

    Note
    The Quorum button can also be used to specify a Majority Node Set (MNS) quorum model. This is one of the major configuration differences when you create an MNS cluster.
    Figure 17. The Proposed Cluster Configuration page.
  11. Review any warnings or errors encountered during cluster creation. To do this, click the plus signs to see more, and then click Next. Warnings and errors appear in the Creating the Cluster page as shown in Figure 18.

    Figure 18. Warnings and errors appear on the Creating the Cluster page.
  12. Click Finish to complete the installation. Figure 19 below illustrates the final step.

    Figure 19. The final step in setting up a new server cluster.
Note
To view a detailed summary, click the View Log button or view the text file stored in the following location:
%SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log

Validating the Cluster Installation

Use the Cluster Administrator (CluAdmin.exe) to validate the cluster service installation on node 1.

To validate the cluster installation

  1. Click Start, click Programs, click Administrative Tools, and then click Cluster Administrator.
  2. Verify that all resources came online successfully, as shown in Figure 20 below.

    Figure 20. Cluster Administrator verifies that all resources came online successfully.
Note
As general rules, do not put anything in the cluster group, do not take anything out of the cluster group, and do not use anything in the cluster group for anything other than cluster administration.
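The cluster.exe command-line tool installed with the Cluster service gives another quick view of the same information. The commands below are a sketch; MyCluster is the example cluster name used earlier, and output details vary between builds.

    rem Confirm that the Cluster service is running on this node.
    sc query clussvc
    rem List the nodes, groups, and resources of the example cluster and confirm their states.
    cluster /cluster:MyCluster node
    cluster /cluster:MyCluster group
    cluster /cluster:MyCluster resource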

Configuring the Second Node

Installing the cluster service on the other nodes requires less time than on the first node. Setup configures the cluster service network settings on the second node based on the configuration of the first node. You can also add multiple nodes to the cluster at the same time, and remotely.
noteNote
For this section, leave node 1 and all shared disks turned on. Then turn on all other nodes. The cluster service will control access to the shared disks at this point to eliminate any chance of corrupting the volume.
  1. Open Cluster Administrator on Node 1.
  2. Click File, click New, and then click Node.
  3. The Add Cluster Computers Wizard will start. Click Next.
  4. If you are not logged on with appropriate credentials, you will be asked to specify a domain account that has administrative rights over all nodes in the cluster.
  5. Enter the machine name for the node you want to add to the cluster. Click Add. Repeat this step, shown in Figure 21 below, to add all other nodes that you want. When you have added all nodes, click Next.

    Figure 21. Adding nodes to the cluster.
  6. The Setup wizard will perform an analysis of all the nodes to verify that they are configured properly.
  7. Type the password for the account used to start the cluster service.
  8. Review the summary information that is displayed for accuracy. The summary information will be used to configure the other nodes when they join the cluster.
  9. Review any warnings or errors encountered during cluster creation, and then click Next.
  10. Click Finish to complete the installation.

Post-Installation Configuration


Heartbeat Configuration

Now that the networks have been configured correctly on each node and the Cluster service has been configured, you need to configure the network roles to define their functionality within the cluster. Here is a list of the network configuration options in Cluster Administrator:
  • Enable for cluster use: If this check box is selected, the cluster service uses this network. This check box is selected by default for all networks.
  • Client access only (public network): Select this option if you want the cluster service to use this network adapter only for external communication with other clients. No node-to-node communication will take place on this network adapter.
  • Internal cluster communications only (private network): Select this option if you want the cluster service to use this network only for node-to-node communication.
  • All communications (mixed network): Select this option if you want the cluster service to use the network adapter for node-to-node communication and for communication with external clients. This option is selected by default for all networks.
This white paper assumes that only two networks are in use. It explains how to configure these networks as one mixed network and one private network. This is the most common configuration. If you have available resources, two dedicated redundant networks for internal-only cluster communication are recommended.

To configure the heartbeat

  1. Start Cluster Administrator.
  2. In the left pane, click Cluster Configuration, click Networks, right-click Private, and then click Properties.
  3. Click Internal cluster communications only (private network), as shown in Figure 22 on the next page.

    Figure 22. Using Cluster Administrator to configure the heartbeat.
  4. Click OK.
  5. Right-click Public, and then click Properties (shown in Figure 23 below).
  6. Click to select the Enable this network for cluster use check box.
  7. Click the All communications (mixed network) option, and then click OK.

    Figure 23. The Public Properties dialog box.
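The same role assignments can be inspected or changed with cluster.exe. This sketch assumes the commonly documented Role property values for cluster networks (1 = internal cluster communications only, 2 = client access only, 3 = all communications); confirm them on your build with the /prop listing before changing anything.

    rem Show the current properties, including Role, for each cluster network.
    cluster network "Private" /prop
    cluster network "Public" /prop
    rem Set Private to internal-only and Public to all communications (assumed Role values).
    cluster network "Private" /prop Role=1
    cluster network "Public" /prop Role=3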

Heartbeat Adapter Prioritization

After configuring the role of how the cluster service will use the network adapters, the next step is to prioritize the order in which they will be used for intra-cluster communication. This is applicable only if two or more networks were configured for node-to-node communication. Priority arrows on the right side of the screen specify the order in which the cluster service will use the network adapters for communication between nodes. The cluster service always attempts to use the first network adapter listed for remote procedure call (RPC) communication between the nodes. Cluster service uses the next network adapter in the list only if it cannot communicate by using the first network adapter.
  1. Start Cluster Administrator.
  2. In the left pane, right-click the cluster name (in the upper left corner), and then click Properties.
  3. Click the Network Priority tab, as shown in Figure 24 below.

    Figure 24. The Network Priority tab in Cluster Administrator.
  4. Verify that the Private network is listed at the top. Use the Move Up or Move Down buttons to change the priority order.
  5. Click OK.

Configuring Cluster Disks

  • Start Cluster Administrator, right-click any disks that you want to remove from the cluster, and then click Delete.
Note
By default, all disks not residing on the same bus as the system disk will have Physical Disk Resources created for them, and will be clustered. Therefore, if the node has multiple buses, some disks may be listed that will not be used as shared storage, for example, an internal SCSI drive. Such disks should be removed from the cluster configuration. If you plan to implement Volume Mount points for some disks, you may want to delete the current disk resources for those disks, delete the drive letters, and then create a new disk resource without a drive letter assignment.

Quorum Disk Configuration

The Cluster Configuration Wizard automatically selects the drive that is to be used as the quorum device. It will use the smallest partition that is larger than 50 MB. You may want to change the automatically selected disk to a dedicated disk that you have designated for use as the quorum.

To configure the quorum disk

  1. Start Cluster Administrator (CluAdmin.exe).
  2. Right-click the cluster name in the upper-left corner, and then click Properties.
  3. Click the Quorum tab.
  4. In the Quorum resource list box, select a different disk resource. In Figure 25 below, Disk Q is selected in the Quorum resource list box.

    Figure 25. The Quorum resource list box.
  5. If the disk has more than one partition, click the partition where you want the cluster-specific data to be kept, and then click OK.
For additional information, see the following article in the Microsoft Knowledge Base:
Q280353 How to Change Quorum Disk Designation
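For reference, the quorum resource can also be changed from the command line with cluster.exe. This is a sketch; Disk Q is the example resource name, and the switch spelling should be verified with cluster /? on your system.

    rem Point the cluster at the dedicated quorum disk resource (example name).
    cluster /cluster:MyCluster /quorumresource:"Disk Q"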

Creating a Boot Delay

In a situation where all the cluster nodes boot up and attempt to attach to the quorum resource at the same time, the Cluster service may fail to start. For example, this may occur when power is restored to all nodes at the exact same time after a power failure. To avoid such a situation, increase or decrease the Time to Display list of operating systems setting. To find this setting, click Start, right-click My Computer, and then click Properties. Click the Advanced tab, and then click Settings under Startup and Recovery.
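On nodes that boot from Boot.ini, one way to stagger this delay is with the bootcfg utility. This is a sketch only; the 30-second value is an example.

    rem Display the current boot entries and timeout.
    bootcfg /query
    rem Set the boot menu timeout (in seconds) so that nodes start at different times.
    bootcfg /timeout 30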

Test Installation

There are several methods for verifying a cluster service installation after the Setup process is complete. These include:
  • Cluster Administrator: If installation was completed only on node 1, start Cluster Administrator, and then attempt to connect to the cluster. If a second node was installed, start Cluster Administrator on either node, connect to the cluster, and then verify that the second node is listed.
  • Services Applet: Use the services snap-in to verify that the cluster service is listed and started.
  • Event Log: Use the Event Viewer to check for ClusSvc entries in the system log. You should see entries confirming that the cluster service successfully formed or joined a cluster.
  • Cluster service registry entries: Verify that the cluster service installation process wrote the correct entries to the registry. You can find many of the registry settings under HKEY_LOCAL_MACHINE\Cluster
  • Click Start, click Run, and then type the Virtual Server name. Verify that you can connect and see resources.

Test Failover

To verify that resources will failover

  1. Click Start, click Programs, click Administrative Tools, and then click Cluster Administrator, as shown in Figure 26 below.

    Figure 26. The Cluster Administrator window.
  2. Right-click the Disk Group 1 group, and then click Move Group. The group and all its resources will be moved to another node. After a short period of time, the disk resources will be brought online on the second node. Watch the window to see this shift. Quit Cluster Administrator.
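The same move can be driven from a command prompt with cluster.exe, which is convenient when repeating the test; Disk Group 1 and MyCluster are the example names used in this guide.

    rem Move the group to another node (the cluster chooses the destination if none is specified).
    cluster /cluster:MyCluster group "Disk Group 1" /move
    rem Confirm which node now owns the group.
    cluster /cluster:MyCluster group "Disk Group 1" /status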
Congratulations! You have completed the configuration of the cluster service on all nodes. The server cluster is fully operational. You are now ready to install cluster resources such as file shares, printer spoolers, cluster aware services like Distributed Transaction Coordinator, DHCP, WINS, or cluster-aware programs such as Exchange Server or SQL Server.

Appendix (Guide to Creating and Configuring a Server Cluster under Windows Server 2003 White Paper)


Advanced Testing

Now that you have configured your cluster and verified basic functionality and failover, you may want to conduct a series of failure scenario tests that will demonstrate expected results and ensure the cluster will respond correctly when a failure occurs. This level of testing is not required for every implementation, but may be insightful if you are new to clustering technology and are unfamiliar with how the cluster will respond, or if you are implementing a new hardware platform in your environment. The expected results listed are for a clean configuration of the cluster with default settings; they do not take into consideration any user customization of the failover logic. This is not a complete list of all tests, nor should successfully completing these tests be considered “certified” or ready for production. This is simply a sample list of some tests that can be conducted. For additional information, see the following article in the Microsoft Knowledge Base:
197047 Failover/Failback Policies on Microsoft Cluster Server
Test: Start Cluster Administrator, right-click a resource, and then click “Initiate Failure”. The resource should go into a failed state, and then it will be restarted and brought back into an online state on that node.
Expected Result: Resources should come back online on the same node
Test: Conduct the above “Initiate Failure” test three more times on that same resource. On the fourth failure, the resources should all failover to another node in the cluster.
Expected Result: Resources should failover to another node in the cluster
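The Initiate Failure test above can also be triggered from the command line, which makes it easy to repeat; the resource and cluster names below are examples from this guide.

    rem Fail the resource once; running this four times within the failure window forces a group failover.
    cluster /cluster:MyCluster resource "Disk Q:" /fail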
Test: Move all resources to one node. Start Computer Management, and then click Services under Services and Applications. Stop the Cluster service. Start Cluster Administrator on another node and verify that all resources failover and come online on another node correctly.
Expected Result: Resources should failover to another node in the cluster
Test: Move all resources to one node. On that node, click Start, and then click Shutdown. This will turn off that node. Start Cluster Administrator on another node, and then verify that all resources failover and come online on another node correctly.
Expected Result: Resources should failover to another node in the cluster
Test: Move all resources to one node, and then press the power button on the front of that server to turn it off. If you have an ACPI compliant server, the server will perform an “Emergency Shutdown” and turn off the server. Start Cluster Administrator on another node and verify that all resources failover and come online on another node correctly. For additional information about an Emergency Shutdown, see the following articles in the Microsoft Knowledge Base:
325343 HOW TO: Perform an Emergency Shutdown in Windows Server 2003
297150 Power Button on ACPI Computer May Force an Emergency Shutdown
Expected Result: Resources should failover to another node in the cluster
Important
Performing the Emergency Shutdown test may cause data corruption and data loss. Do not conduct this test on a production server
Test: Move all resources to one node, and then pull the power cables from that server to simulate a hard failure. Start Cluster Administrator on another node, and then verify that all resources failover and come online on another node correctly
Expected Result: Resources should failover to another node in the cluster
Important
Performing the hard failure test may cause data corruption and data loss. This is an extreme test. Make sure you have a backup of all critical data, and then conduct the test at your own risk. Do not conduct this test on a production server
Test: Move all resources to one node, and then remove the public network cable from that node. The IP Address resources should fail, and the groups will all failover to another node in the cluster. For additional information, see the following article in the Microsoft Knowledge Base:
286342 Network Failure Detection and Recovery in Windows Server 2003 Clusters
Expected Result: Resources should failover to another node in the cluster
Test: Remove the network cable for the Private heartbeat network. The heartbeat traffic will failover to the public network, and no failover should occur. If failover occurs, please see the “Configuring the Private Network Adapter” section earlier in this document.
Expected Result: There should be no failures or resource failovers

SCSI Drive Installations

This appendix is provided as a generic set of instructions for SCSI drive installations. If the SCSI hard disk vendor’s instructions conflict with the instructions here, always follow the instructions supplied by the vendor.
The SCSI bus listed in the hardware requirements must be configured prior to cluster service installation. Configuration applies to:
  • The SCSI devices.
  • The SCSI controllers and the hard disks so that they work properly on a shared SCSI bus.
  • Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.
In addition to the information on the following pages, refer to documentation from the manufacturer of your SCSI device or to the SCSI specifications, which can be ordered from the American National Standards Institute (ANSI). The ANSI Web site includes a catalog that can be searched for the SCSI specifications.

Configuring the SCSI Devices

Each device on the shared SCSI bus must have a unique SCSI identification number. Because most SCSI controllers default to SCSI ID 7, configuring the shared SCSI bus includes changing the SCSI ID number on one controller to a different number, such as SCSI ID 6. If there is more than one disk that will be on the shared SCSI bus, each disk must have a unique SCSI ID number.

Terminating the Shared SCSI Bus

There are several methods for terminating the shared SCSI bus. They include:
  • SCSI controllers

    SCSI controllers have internal soft termination that can be used to terminate the bus; however, this method is not recommended for use with server clusters. If a node is turned off with this configuration, the SCSI bus will be terminated improperly and will not operate correctly.
  • Storage enclosures

    Storage enclosures also have internal termination, which can be used to terminate the SCSI bus if the enclosure is at the end of the SCSI bus. If the enclosure is not at the end of the shared bus, its internal termination must be turned off.
  • Y cables

    Y cables can be connected to devices if the device is at the end of the SCSI bus. An external active terminator can then be attached to one branch of the Y cable in order to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators that the device may have.

    Figure 27 outlines how a SCSI cluster should be physically connected.

    Figure 27. A diagram of a SCSI cluster hardware configuration.
Note
Any devices that are not at the end of the shared bus must have their internal termination disabled. Y cables and active terminator connectors are the recommended termination methods because they will provide termination even when a node is not online.

Storage Area Network Considerations

There are two supported methods of Fibre Channel-based storage in a Windows Server 2003 server cluster: arbitrated loops and switched fabric.
Important
When evaluating both types of Fibre Channel implementation, read the vendor’s documentation and be sure you understand the specific features and restrictions of each.
Although the term Fibre Channel implies the use of fiber-optic technology, copper coaxial cable is also allowed for interconnects.

Arbitrated Loops (FC-AL)

A Fibre Channel arbitrated loop (FC-AL) is a set of nodes and devices connected into a single loop. FC-AL provides a cost-effective way to connect up to 126 devices into a single network. As with SCSI, a maximum of two nodes is supported in an FC-AL server cluster configured with a hub. An FC-AL is illustrated in Figure 28.
Figure 28   FC-AL Connection
FC-ALs provide a solution for two nodes and a small number of devices in relatively static configurations. All devices on the loop share the media, and any packet traveling from one device to another must pass through all intermediate devices.
If your high-availability needs can be met with a two-node server cluster, an FC-AL deployment has several advantages:
  • The cost is relatively low.
  • Loops can be expanded to add storage (although nodes cannot be added).
  • Loops are easy for Fibre Channel vendors to develop.
The disadvantage is that loops can be difficult to deploy in an organization. Because every device on the loop shares the media, overall bandwidth in the cluster is lowered. Some organizations might also be unduly restricted by the 126-device limit.

Switched Fabric (FC-SW)

For any cluster larger than two nodes, a Fibre Channel switched fabric (FC-SW) is the only supported storage technology. In an FC-SW, devices are connected in a many-to-many topology using Fibre Channel switches (illustrated in Figure 29).
Figure 29   FC-SW Connection
When a node or device communicates with another node or device in an FC-SW, the source and target set up a point-to-point connection (similar to a virtual circuit) and communicate directly with each other. The fabric itself routes data from the source to the target. In an FC-SW, the media is not shared. Any device can communicate with any other device, and communication occurs at full bus speed. This is a fully scalable enterprise solution and, as such, is highly recommended for deployment with server clusters.
FC-SW is the primary technology employed in SANs. Other advantages of FC-SW include ease of deployment, the ability to support millions of devices, and switches that provide fault isolation and rerouting. Also, there is no shared media as there is in FC-AL, allowing for faster communication. However, be aware that FC-SWs can be difficult for vendors to develop, and the switches can be expensive. Vendors also have to account for interoperability issues between components from different vendors or manufacturers.

Using SANs with Server Clusters

For any large-scale cluster deployment, it is recommended that you use a SAN for data storage. Smaller SCSI and stand-alone Fibre Channel storage devices work with server clusters, but SANs provide superior fault tolerance.
A SAN is a set of interconnected devices (such as disks and tapes) and servers that are connected to a common communication and data transfer infrastructure (FC-SW, in the case of Windows Server 2003 clusters). A SAN allows multiple server access to a pool of storage in which any server can potentially access any storage unit.
The information in this section provides an overview of using SAN technology with your Windows Server 2003 clusters. For additional information about deploying server clusters on SANs, see the Windows Clustering: Storage Area Networks link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources.
Note
Vendors that provide SAN fabric components and software management tools have a wide range of tools for setting up, configuring, monitoring, and managing the SAN fabric. Contact your SAN vendor for details about your particular SAN solution.

SCSI Resets

Earlier versions of Windows server clusters presumed that all communications to the shared disk should be treated as an isolated SCSI bus. This behavior may be somewhat disruptive, and it does not take advantage of the more advanced features of Fibre Channel to both improve arbitration performance and reduce disruption.
One key enhancement in Windows Server 2003 is that the Cluster service issues a command to break a RESERVATION, and the StorPort driver can do a targeted or device reset for disks that are on a Fibre Channel topology. In Windows 2000 server clusters, an entire bus-wide SCSI RESET is issued. This causes all devices on the bus to be disconnected. When a SCSI RESET is issued, a lot of time is spent resetting devices that may not need to be reset, such as disks that the CHALLENGER node may already own.
Resets in Windows 2003 occur in the following order:
  1. Targeted logical unit number (LUN)
  2. Targeted SCSI ID
  3. Entire bus-wide SCSI RESET
Note
Targeted resets require functionality in the host bus adapter (HBA) drivers. The driver must be written for StorPort rather than for SCSIPort. Drivers that use SCSIPort use the same Challenge and Defense behavior as in Windows 2000. Contact the manufacturer of the HBA to determine whether it supports StorPort.
SCSI Commands
The Cluster service uses the following SCSI commands:
  • SCSI reserve: This command is issued by a host bus adapter or controller to maintain ownership of a SCSI device. A device that is reserved refuses all commands from all other host bus adapters except the one that initially reserved it, the initiator. If a bus-wide SCSI reset command is issued, loss of reservation occurs.
  • SCSI release: This command is issued by the owning host bus adapter; it frees a SCSI device for another host bus adapter to reserve.
  • SCSI reset: This command breaks the reservation on a target device. When it is issued to the entire bus rather than to a single target, it is sometimes referred to as a "bus reset."
The same control codes are used for Fibre Channel as well. These parameters are described in the following articles in the Microsoft Knowledge Base:
309186 How the Cluster Service Takes Ownership of a Disk on the Shared Bus
317162 Supported Fibre Channel Configurations
The following sections provide an overview of SAN concepts that directly affect a server cluster deployment.

HBAs

Host bus adapters (HBAs) are the interface cards that connect a cluster node to a SAN, similar to the way that a network adapter connects a server to a typical Ethernet network. HBAs, however, are more difficult to configure than network adapters (unless the HBAs are preconfigured by the SAN vendor). All HBAs in all nodes should be identical and be at the same driver and firmware revision.
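Driver versions, though not firmware revisions, can be compared across nodes with the Driverquery tool that is included in Windows Server 2003; firmware levels still have to be checked with the HBA vendor's utilities. The driver name used in the filter below is only an example.

    rem List the installed drivers on this node with version information
    driverquery /v /fo list

    rem Narrow the output to a specific HBA driver (driver name is an example)
    driverquery /v /fo table | findstr /i "ql2300"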

Zoning and LUN Masking

Zoning and LUN masking are fundamental to SAN deployments, particularly as they relate to a Windows Server 2003 cluster deployment.
Zoning
Many devices and nodes can be attached to a SAN. With data stored in a single cloud, or storage entity, it is important to control which hosts have access to specific devices. Zoning allows administrators to partition devices in logical volumes and thereby reserve the devices in a volume for a server cluster. That means that all interactions between cluster nodes and devices in the logical storage volumes are isolated within the boundaries of the zone; other noncluster members of the SAN are not affected by cluster activity.
Figure 30 is a logical depiction of two SAN zones (Zone A and Zone B), each containing a storage controller (S1 and S2, respectively).
Figure 30   Zoning
In this implementation, Node A and Node B can access data from the storage controller S1, but Node C cannot. Node C can access data from storage controller S2.
Zoning needs to be implemented at the hardware level (with the controller or switch) and not through software. The primary reason is that zoning is also a security mechanism for a SAN-based cluster, because unauthorized servers cannot access devices inside the zone (access control is implemented by the switches in the fabric, so a host adapter cannot gain access to a device for which it has not been configured). With software zoning, the cluster would be left unsecured if the software component failed.
In addition to providing cluster security, zoning also limits the traffic flow within a given SAN environment. Traffic between ports is routed only to segments of the fabric that are in the same zone.
LUN Masking
A LUN is a logical disk defined within a SAN. Server clusters see LUNs and think they are physical disks. LUN masking, performed at the controller level, allows you to define relationships between LUNs and cluster nodes. Storage controllers usually provide the means for creating LUN-level access controls that allow access to a given LUN to one or more hosts. By providing this access control at the storage controller, the controller itself can enforce access policies to the devices.
LUN masking provides more specific security than zoning, because LUNs provide a means for zoning at the port level. For example, many SAN switches allow overlapping zones, which enable a storage controller to reside in multiple zones. Multiple clusters in multiple zones can share the data on those controllers. Figure 31 illustrates such a scenario.
Figure 31   Storage Controller in Multiple Zones
LUNs used by Cluster A can be masked, or hidden, from Cluster B so that only authorized users can access data on a shared storage controller.

Requirements for Deploying SANs with Windows Server 2003 Clusters

The following list highlights the deployment requirements you need to follow when using a SAN storage solution with your server cluster. For a white paper that provides more complete information about using SANs with server clusters, see the Windows Clustering: Storage Area Networks link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources.
Each cluster on a SAN must be deployed in its own zone. The mechanism the cluster uses to protect access to the disks can have an adverse effect on other clusters that are in the same zone. Using zoning to separate the cluster traffic from other cluster or noncluster traffic eliminates the chance of interference.
All HBAs in a single cluster must be the same type and have the same firmware version. Many storage and switch vendors require that all HBAs on the same zone—and, in some cases, the same fabric—share these characteristics.
All storage device drivers and HBA device drivers in a cluster must have the same software version.
Never allow multiple nodes access to the same storage devices unless they are in the same cluster.
Never put tape devices into the same zone as cluster disk storage devices. A tape device could misinterpret a bus reset and rewind at inappropriate times, such as during a large backup.

Guidelines for Deploying SANs with Windows Server 2003 Server Clusters

In addition to the SAN requirements discussed in the previous section, the following practices are highly recommended for server cluster deployment:
In a highly available storage fabric, you need to deploy clustered servers with multiple HBAs. In these cases, always load the multipath driver software. If the I/O subsystem sees two HBAs, it assumes they are different buses and enumerates all the devices as though they were different devices on each bus. The host, meanwhile, is seeing multiple paths to the same disks. Failure to load the multipath driver will disable the second device because the operating system sees what it thinks are two independent disks with the same signature.
Do not expose a hardware snapshot of a clustered disk back to a node in the same cluster. Hardware snapshots must go to a server outside the server cluster. Many controllers provide snapshots at the controller level that can be exposed to the cluster as a completely separate LUN. Cluster performance is degraded when multiple devices have the same signature. If the snapshot is exposed back to the node with the original disk online, the I/O subsystem attempts to rewrite the signature. However, if the snapshot is exposed to another node in the cluster, the Cluster service does not recognize it as a different disk and the result could be data corruption. Although this is not specifically a SAN issue, the controllers that provide this functionality are typically deployed in a SAN environment.
For additional information, see the following articles in the Microsoft Knowledge Base:
301647 Cluster Service Improvements for Storage Area Networks
304415 Support for Multiple Clusters Attached to the Same SAN Device
280743 Windows Clustering and Geographically Separate Sites

Quick Start Guide for Server Clusters

Applies To: Windows Server 2003 R2
This guide provides system requirements, installation instructions, and other step-by-step instructions that you can use to deploy server clusters if you are using the Microsoft® Windows Server™ 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, operating systems.
The server cluster technology in Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, helps ensure that you have access to important server-based resources. You can use server cluster technology to create several cluster nodes that appear to users as one server. If one of the nodes in the cluster fails, another node begins to provide service. This is a process known as "failover." In this way, server clusters can increase the availability of critical applications and resources.

Copyright

This document is provided for informational purposes only and Microsoft makes no warranties, either express or implied, in this document. Information in this document, including URL and other Internet Web site references, is subject to change without notice. The entire risk of the use or the results from the use of this document remains with the user. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
Copyright © 2005 Microsoft Corporation. All rights reserved.
Microsoft, Windows, Windows NT, SQL Server, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

Requirements and Guidelines for Configuring Server Clusters

This section lists requirements and guidelines that will help you set up a server cluster effectively.

Software requirements and guidelines

  • You must have either Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, installed on all computers in the cluster. We strongly recommend that you also install the latest service pack for Windows Server 2003. If you install a service pack, the same service pack must be installed on all computers in the cluster.
  • All nodes in the cluster must be of the same architecture. You cannot mix x86-based, Itanium-based, and x64-based computers within the same cluster.
  • Your system must be using a name-resolution service, such as Domain Name System (DNS), DNS dynamic update protocol, Windows Internet Name Service (WINS), or the Hosts file. The Hosts file is supported as a local, static file method of mapping DNS domain names for host computers to their Internet Protocol (IP) addresses. The Hosts file is provided in the systemroot\System32\Drivers\Etc folder.
  • All nodes in the cluster must be in the same domain. As a best practice, all nodes should have the same domain role (either member server or domain controller), and the recommended role is member server. Exceptions that can be made to these domain role guidelines are described later in this document.
  • When you first create a cluster or add nodes to it, you must be logged on to the domain with an account that has administrator rights and permissions on all nodes in that cluster. The account does not need to be a Domain Admin level account, but can be a Domain User account with Local Admin rights on each node.
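As a quick check before you run the wizard, you can confirm from a command prompt on each node which account you are logged on with and whether it is a member of the local Administrators group; the output of the second command should include your installation account or a group it belongs to.

    rem Display the account you are currently logged on with
    whoami

    rem List the members of the local Administrators group on this node
    net localgroup Administrators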

Hardware requirements and guidelines

  • For Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, Microsoft supports only complete server cluster systems chosen from the Windows Catalog. To determine whether your system and hardware components are compatible, including your cluster disks, see the Microsoft Windows Catalog at the Microsoft Web site. For a geographically dispersed cluster, both the hardware and software configuration must be certified and listed in the Windows Catalog. For more information, see article 309395, "The Microsoft support policy for server clusters, the Hardware Compatibility List, and the Windows Server Catalog," in the Microsoft Knowledge Base.
  • If you are installing a server cluster on a storage area network (SAN), and you plan to have multiple devices and clusters sharing the SAN with a cluster, your hardware components must be compatible. For more information, see article 304415, "Support for Multiple Clusters Attached to the Same SAN Device," in the Microsoft Knowledge Base.
  • You must have two mass-storage device controllers in each node in the cluster: one for the local disk, one for the cluster storage. You can choose between SCSI, iSCSI, or Fibre Channel for cluster storage on server clusters that are running Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition. You must have two controllers because one controller has the local system disk for the operating system installed, and the other controller has the shared storage installed.
  • You must have two Peripheral Component Interconnect (PCI) network adapters in each node in the cluster.
  • You must have storage cables to attach the cluster storage device to all computers. Refer to the manufacturer's instructions for configuring storage devices.
  • Ensure that all hardware is identical in all cluster nodes. This means that each hardware component must be the same make, model, and firmware version. This makes configuration easier and eliminates compatibility problems.

Network requirements and guidelines

  • Your network must have a unique NetBIOS name.
  • A WINS server must be available on your network.
  • You must use static IP addresses for each network adapter on each node.
Important
Server clusters do not support the use of IP addresses assigned from Dynamic Host Configuration Protocol (DHCP) servers.
  • The nodes in the cluster must be able to access a domain controller. The Cluster service requires that the nodes be able to contact the domain controller to function correctly. The domain controller must be highly available. In addition, it should be on the same local area network (LAN) as the nodes in the cluster. To avoid a single point of failure, the domain must have at least two domain controllers.
  • Each node must have at least two network adapters. One adapter will be used exclusively for internal node-to-node communication (the private network). The other adapter will connect the node to the client public network. It should also connect the cluster nodes to provide support in case the private network fails. (A network that carries both public and private communication is called a mixed network.)
  • If you are using fault-tolerant network cards or teaming network adapters, you must ensure that you are using the most recent firmware and drivers. Check with your network adapter manufacturer to verify compatibility with the cluster technology in Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition.
Note
Using teaming network adapters on all cluster networks concurrently is not supported. At least one of the cluster private networks must not be teamed. However, you can use teaming network adapters on other cluster networks, such as public networks.

Storage requirements and guidelines

  • An external disk storage unit must be connected to all nodes in the cluster. This will be used as the cluster storage. You should also use some type of hardware redundant array of independent disks (RAID).
  • All cluster storage disks, including the quorum disk, must be physically attached to a shared bus.
Note
This requirement does not apply to Majority Node Set (MNS) clusters when they are used with some type of software replication method.
  • Cluster disks must not be on the same controller as the one that is used by the system drive, except when you are using boot from SAN technology. For more information about using boot from SAN technology, see "Boot from SAN in Windows Server 2003 and Windows 2000 Server" at the Microsoft Web site.
  • You should create multiple logical unit numbers (LUNs) at the hardware level in the RAID configuration instead of using a single logical disk that is then divided into multiple partitions at the operating system level. We recommend a minimum of two logical clustered drives. This enables you to have multiple disk resources and also allows you to perform manual load balancing across the nodes in the cluster.
  • You should set aside a dedicated LUN on your cluster storage for holding important cluster configuration information. This information makes up the cluster quorum resource. The recommended minimum size for the volume is 500 MB. You should not store user data on any volume on the quorum LUN.
  • If you are using SCSI, ensure that each device on the shared bus (both SCSI controllers and hard disks) has a unique SCSI identifier. If the SCSI controllers all have the same default identifier (the default is typically SCSI ID 7), change one controller to a different SCSI ID, such as SCSI ID 6. If more than one disk will be on the shared SCSI bus, each disk must also have a unique SCSI identifier.
  • Software fault tolerance is not natively supported for disks in the cluster storage. For cluster disks, you must use the NTFS file system and configure the disks as basic disks with all partitions formatted as NTFS. They can be either compressed or uncompressed. Cluster disks cannot be configured as dynamic disks. In addition, features of dynamic disks, such as spanned volumes (volume sets), cannot be used without additional non-Microsoft software.
  • All disks on the cluster storage device must be partitioned as master boot record (MBR) disks, not as GUID partition table (GPT) disks.


Deploying SANs with server clusters

This section lists the requirements for deploying SANs with server clusters.
  • Nodes from different clusters must not be able to access the same storage devices. Each cluster used with a SAN must be deployed in a way that isolates it from all other devices. This is because the mechanism the cluster uses to protect access to the disks can have adverse effects if other clusters are in the same zone. Using zoning to separate the cluster traffic from other cluster or non-cluster traffic prevents this type of interference. For more information, see "Zoning vs. LUN masking" later in this guide.
  • All host bus adapters in a single cluster must be the same type and have the same firmware version. Host bus adapters are the interface cards that connect a cluster node to a SAN. This is similar to the way that a network adapter connects a server to a typical Ethernet network. Many storage vendors require that all host bus adapters on the same zone—and, in some cases, the same fabric—share these characteristics.
  • In a cluster, all device drivers for storage and host bus adapters must have the same software version. We strongly recommend that you use a Storport mini-port driver with clustering. Storport (Storport.sys) is a storage port driver that is provided in Windows Server 2003. It is especially suitable for use with high-performance buses, such as Fibre Channel buses, and RAID adapters.
  • Tape devices should never be used in the same zone as cluster disk storage devices. A tape device could misinterpret a bus reset and rewind at inappropriate times, such as when backing up a large amount of data.
  • In a highly available storage fabric, you should deploy server clusters with multiple host bus adapters using multipath I/O software. This provides the highest level of redundancy and availability.

    Note
    Failover software for host bus adapters can be version sensitive. If you are implementing a multipath solution for your cluster, you should work closely with your hardware vendor to become fully aware of how the adapter interacts with Windows Server 2003.

Creating a Cluster

It is important to plan the details of your hardware and network before you create a cluster.
If you are using a shared storage device, ensure that when you turn on the computer and start the operating system, only one node has access to the cluster storage. Otherwise, the cluster disks can become corrupted.
In Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, logical disks that are not on the same shared bus as the boot partition are not automatically mounted and assigned a drive letter. This helps prevent a server in a complex SAN environment from mounting drives that might belong to another server. (This is different from how new disks are mounted in Microsoft Windows® 2000 Server operating systems.) Although the drives are not mounted by default, we still recommend that you follow the procedures provided in the table later in this section to ensure that the cluster disks will not become corrupted.
The table in this section can help you determine which nodes and storage devices should be turned on during each installation step. The steps in the table pertain to a two-node cluster. However, if you are installing a cluster with more than two nodes, the Node 2 column lists the required state of all other nodes.

 

Step | Node 1 | Node 2 | Storage | Notes
Set up networks | On | On | Off | Verify that all storage devices on the shared bus are turned off. Turn on all nodes.
Set up cluster disks | On | Off | On | Shut down all nodes. Turn on the cluster storage, and then turn on the first node.
Verify disk configuration | Off | On | On | Turn off the first node, and then turn on the second node. Repeat for nodes three and four if necessary.
Configure the first node | On | Off | On | Turn off all nodes, and then turn on the first node.
Configure the second node | On | On | On | After the first node is successfully configured, turn on the second node. Repeat for nodes three and four as necessary.
Post-installation | On | On | On | All nodes should be turned on.

Preparing to create a cluster

Complete the following three steps on each cluster node before you install a cluster on the first node.
  • Install Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, on each node of the cluster. We strongly recommend that you also install the latest service pack for Windows Server 2003. If you install a service pack, the same service pack must be installed on all computers in the cluster.
  • Set up networks.
  • Set up cluster disks.
All nodes must be members of the same domain. When you create a cluster or join nodes to a cluster, you specify the domain user account under which the Cluster service runs. This account is called the Cluster service account (CSA).

Installing the Windows Server 2003 operating system

Install Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, on each node of the cluster. For information about how to perform this installation, see the documentation you received with the operating system.
Before configuring the Cluster service, you must be logged on locally with a domain account that is a member of the local administrators group.
Important
If you attempt to join a node to a cluster that has a blank password for the local administrator account, the installation will fail. For security reasons, Windows Server 2003 operating systems prohibit blank administrator passwords.

Setting up networks

Each cluster node requires at least two network adapters and must be connected by two or more independent networks. At least two LAN networks (or virtual LANs) are required to prevent a single point of failure. A server cluster whose nodes are connected by only one network is not a supported configuration. The adapters, cables, hubs, and switches for each network must fail independently. This usually means that the components of any two networks must be physically independent.
Two networks must be configured to handle either All communications (mixed network) or Internal cluster communications only (private network). The recommended configuration for two adapters is to use one adapter for the private (node-to-node only) communication and the other adapter for mixed communication (node-to-node plus client-to-cluster communication).
You must have two PCI network adapters in each node. They must be certified in the Microsoft Windows Catalog and supported by Microsoft Product Support Services. Assign one network adapter on each node a static IP address, and assign the other network adapter a static IP address on a separate network on a different subnet for private network communication.
Because communication between cluster nodes is essential for smooth cluster operations, the networks that you use for cluster communication must be configured optimally and follow all hardware compatibility-list requirements. For additional information about recommended configuration settings, see article 258750, "Recommended private heartbeat configuration on a cluster server," in the Microsoft Knowledge Base.
You should keep all private networks physically separate from other networks. Specifically, do not use a router, switch, or bridge to join a private cluster network to any other network. Do not include other network infrastructure or application servers on the private network subnet. To separate a private network from other networks, use a cross-over cable in a two-node cluster configuration or a dedicated hub in a cluster configuration of more than two nodes.
Additional network considerations
  • All cluster nodes must be on the same logical subnet.
  • If you are using a virtual LAN (VLAN), the one-way communication latency between any pair of cluster nodes on the VLAN must be less than 500 milliseconds.
  • In Windows Server 2003 operating systems, cluster nodes exchange multicast heartbeats rather than unicast heartbeats. A heartbeat is a message that is sent regularly between cluster network drivers on each node. Heartbeat messages are used to detect communication failure between cluster nodes. Using multicast technology enables better node communication because it allows several unicast messages to be replaced with a single multicast message. Clusters that consist of fewer than three nodes will not send multicast heartbeats. For additional information about using multicast technology, see article 307962, "Multicast Support Enabled for the Cluster Heartbeat," in the Microsoft Knowledge Base.
Determine an appropriate name for each network connection. For example, you might want to name the private network "Private" and the public network "Public." This will help you uniquely identify a network and correctly assign its role.
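The connections can usually be renamed from the command line as well as from the Network Connections folder. The sketch below assumes the default connection names created by Windows Server 2003 ("Local Area Connection" and "Local Area Connection 2"); verify the exact syntax on your build with netsh interface set interface /? before relying on it.

    rem Rename the connections so that their roles are obvious (names are examples)
    netsh interface set interface name="Local Area Connection" newname="Public"
    netsh interface set interface name="Local Area Connection 2" newname="Private"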
The following figure shows the elements of a four-node cluster that uses a private network.
Four node server cluster
Setting the order of the network adapter binding
One of the recommended steps for setting up networks is to ensure the network adapter binding is set in the correct order. To do this, use the following procedure.
To set the order of the network adapter binding
  1. To open Network Connections, click Start, click Control Panel, and then double-click Network Connections.
  2. On the Advanced menu, click Advanced Settings.
  3. In Connections, click the connection that you want to modify.
  4. Set the order of the network adapter binding as follows:
    • External public network
    • Internal private network (Heartbeat)
    • [Remote Access Connections]
  5. Repeat this procedure for all nodes in the cluster.
Configuring the private network adapter
As stated earlier, the recommended configuration for two adapters is to use one adapter for private communication, and the other adapter for mixed communication. To configure the private network adapter, use the following procedure.
To configure the private network adapter
  1. To open Network Connections, click Start, click Control Panel, and then double-click Network Connections.
  2. Right-click the connection for the adapter you want to configure, and then click Properties. The Local Area Connection Properties dialog box opens and looks similar to the following figure:
    Local Area Network Properties
  3. On the General tab, verify that the Internet Protocol (TCP/IP) check box is selected, and that all other check boxes in the list are clear.
  4. If you have network adapters that can transmit at multiple speeds and that allow you to specify the speed and duplex mode, manually configure the Duplex Mode, Link Speed, and Flow Control settings for the adapters to the same values and settings on all nodes. If the network adapters you are using do not support manual settings, contact your adapter manufacturer for specific information about appropriate speed and duplex settings for your network adapters. The amount of information that is traveling across the heartbeat network is small, but latency is critical for communication. Therefore, using the same speed and duplex settings helps ensure that you have reliable communication. If the adapters are connected to a switch, ensure that the port settings of the switch match those of the adapters. If you do not know the supported speed of your card and connecting devices, or if you run into compatibility problems, you should set all devices on that path to 10 megabits per second (Mbps) and Half Duplex.
    Teaming network adapters on all cluster networks concurrently is not supported because of delays that can occur when heartbeat packets are transmitted and received between cluster nodes. For best results, when you want redundancy for the private interconnect, you should disable teaming and use the available ports to form a second private interconnect. This achieves the same end result and provides the nodes with dual, robust communication paths.
    You can use Device Manager to change the network adapter settings. To open Device Manager, click Start, click Control Panel, double-click Administrative Tools, double-click Computer Management, and then click Device Manager. Right-click the network adapter you want to change, and then click Properties. Click Advanced to manually change the speed and duplex mode for the adapter. The page that opens looks similar to the following figure:
    Management Adapter Properties
  5. On the General tab in Network Connections, select Internet Protocol (TCP/IP), and click Properties.
    Internet Protocol (TCP/IP) Properties opens and looks similar to the following figure:
    Internet Protocol (TCP/IP) properties
  6. On the General tab, verify you have selected a static IP address that is not on the same subnet or network as any other public network adapter. You should put the private network adapter in one of the following private network ranges:
    • 10.0.0.0 through 10.255.255.255 (Class A)
    • 172.16.0.0 through 172.31.255.255 (Class B)
    • 192.168.0.0 through 192.168.255.255 (Class C)
  7. On the General tab, verify that no values are defined in Default Gateway under Use the following IP address, and no values are defined under Use the Following DNS server addresses. After you have done so, click Advanced.
  8. On the DNS tab, verify that no values are defined on the page and that the check boxes for Register this connection's addresses in DNS and Use this connection's DNS suffix in DNS registration are clear.
  9. On the WINS tab, verify that no values are defined on the page, and then click Disable NetBIOS over TCP/IP.
    Advanced TCP/IP Settings opens and looks similar to the following figure:
    Advanced TCP/IP settings
  10. After you have verified the information, click OK. You might receive the message "This connection has an empty primary WINS address. Do you want to continue?" To continue, click Yes.
  11. Repeat this procedure for all additional nodes in the cluster. For each private network adapter, use a different static IP address.
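The addressing portion of this procedure can also be done with netsh, which is convenient when you have several nodes to configure. The connection name "Private" and the 10.1.1.x addresses below are examples only, and NetBIOS over TCP/IP still has to be disabled on the WINS tab as described in step 9.

    rem Assign a static address, with no default gateway, to the private adapter
    netsh interface ip set address name="Private" source=static addr=10.1.1.1 mask=255.0.0.0

    rem Remove any DNS servers from the private adapter
    netsh interface ip set dns name="Private" source=static addr=none

    rem Verify connectivity to the private adapter on another node (address is an example)
    ping 10.1.1.2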
Configuring the public network adapter
If DHCP is used to obtain IP addresses, it might not be possible to access cluster nodes if the DHCP server is inaccessible. For increased availability, static, valid IP addresses are required for all interfaces on a server cluster. If you plan to put multiple network adapters in each logical subnet, keep in mind that the Cluster service will recognize only one network interface per subnet.
  • Verifying connectivity and name resolution. To verify that the private and public networks are communicating properly, ping all IP addresses from each node. Pinging an IP address confirms that there is basic network connectivity to it. You should be able to ping all IP addresses, both locally and on the remote nodes. To verify name resolution, ping each node from a client by using the node's computer name instead of its IP address; the response should return only the IP address for the public network. You might also want to try using the ping -a command to perform a reverse name resolution on the IP addresses.
  • Verifying domain membership. All nodes in the cluster must be members of the same domain, and they must be able to access a domain controller and a DNS server. They can be configured as member servers or domain controllers. You should have at least one domain controller on the same network segment as the cluster. To avoid having a single point of failure, another domain controller should also be available. In this guide, all nodes are configured as member servers, which is the recommended role.

    In a two-node server cluster, if one node is a domain controller, the other node must also be a domain controller. In a four-node cluster, it is not necessary to configure all four nodes as domain controllers. However, when following a "best practices" model of having at least one backup domain controller, at least one of the remaining three nodes should also be configured as a domain controller. A cluster node must be promoted to a domain controller before the Cluster service is configured.

    The dependence in Windows Server 2003 on DNS requires that every node that is a domain controller must also be a DNS server if another DNS server that supports dynamic updates is not available.

    You should consider the following issues if you are planning to deploy cluster nodes as domain controllers:

    • If one cluster node in a two-node cluster is a domain controller, the other node must also be a domain controller.
    • There are performance implications associated with the overhead of running a computer as a domain controller. There is increased memory usage and additional network traffic from replication because these domain controllers must replicate with other domain controllers in the domain and across domains.
    • If the cluster nodes are the only domain controllers, they each must be DNS servers as well. They should point to themselves for primary DNS resolution and to each other for secondary DNS resolution.
    • The first domain controller in the forest or domain will assume all Operations Master Roles. You can redistribute these roles to any node. However, if a node fails, the Operations Master Roles assumed by that node will be unavailable. Because of this, you should not run Operations Master Roles on any cluster node. This includes Schema Master, Domain Naming Master, Relative ID Master, PDC Emulator, and Infrastructure Master. These functions cannot be clustered for high availability with failover.
    • Because of resource constraints, it might not be optimal to cluster other applications such as Microsoft SQL Server™ in a scenario where the nodes are also domain controllers. This configuration should be thoroughly tested in a lab environment before deployment.
    Because of the complexity and overhead involved when cluster nodes are domain controllers, all nodes should be member servers.
  • Setting up a Cluster service user account. The Cluster service requires a domain user account that is a member of the Local Administrators group on each node. This is the account under which the Cluster service can run. Because Setup requires a user name and password, you must create this user account before you configure the Cluster service. This user account should be dedicated to running only the Cluster service and should not belong to an individual.

    Note
    It is not necessary for the Cluster service account (CSA) to be a member of the Domain Administrators group. For security reasons, domain administrator rights should not be granted to the Cluster service account.
    The Cluster service account requires the following rights to function properly on all nodes in the cluster. The Cluster Configuration Wizard grants the following rights automatically:

    • Act as part of the operating system
    • Adjust memory quotas for a process
    • Back up files and directories
    • Restore files and directories
    • Increase scheduling priority
    • Log on as a service
    You should ensure that the Local Administrator Group has access to the following user rights:

    • Debug programs
    • Impersonate a client after authentication
    • Manage auditing and security log
You can use the following procedure to set up a Cluster service user account.
To set up a Cluster service user account
  1. Open Active Directory Users and Computers.
  2. In the console tree, right-click the folder to which you want to add a user account.
    Where?
    • Active Directory Users and Computers/domain node/folder
  3. Point to New, and then click User.
  4. New Object - User opens and looks similar to the following figure:
    New Object - User
  5. Type a first name and last name (these should make sense but are usually not important for this account).
  6. In User logon name, type a name that is easy to remember, such as ClusterService1, click the UPN suffix in the drop-down list, and then click Next.
  7. In Password and Confirm password, type a password that follows your organization's guidelines for passwords, and then select User Cannot Change Password and Password Never Expires. Click Finish to create the account.
    If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the Cluster service configuration on each node before the passwords expire.
  8. In the console tree of the Active Directory Users and Computers snap-in, right-click the user account that you just created, and then click Properties.
  9. Click Add Members to a Group.
  10. Click Administrators, and then click OK. This gives the new user account administrative permissions on the computer.
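If you prefer to create the account from the command line, the directory service command-line tools included with Windows Server 2003 can perform the equivalent steps. The distinguished name, domain, and account name below are placeholders for your own values; the -pwd * switch prompts for the password so that it does not appear in the command history.

    rem Create the Cluster service account with a password that never expires (DN is an example)
    dsadd user "CN=ClusterService1,CN=Users,DC=example,DC=com" -pwd * -mustchpwd no -pwdneverexpires yes

    rem On each cluster node, add the account to the local Administrators group
    net localgroup Administrators EXAMPLE\ClusterService1 /add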

Setting up disks

This section includes information and step-by-step procedures you can use to set up disks.
Important
To avoid possible corruption of cluster disks, ensure that both the Windows Server 2003 operating system and the Cluster service are installed, configured, and running on at least one node before you start the operating system on another node in the cluster.
Quorum resource
The quorum resource maintains the configuration data necessary for recovery of the cluster. The quorum resource is generally accessible to other cluster resources so that any cluster node has access to the most recent database changes. There can only be one quorum disk resource per cluster.
The requirements and guidelines for the quorum disk are as follows:
  • The quorum disk should be at least 500 MB in size.
  • You should use a separate LUN as the dedicated quorum resource.
  • A disk failure could cause the entire cluster to fail. Because of this, we strongly recommend that you implement a hardware RAID solution for your quorum disk to help guard against disk failure. Do not use the quorum disk for anything other than cluster management.
When you configure a cluster disk, it is best to manually assign drive letters to the disks on the shared bus. The drive letters should not start with the next available letter. Instead, leave several free drive letters between the local disks and the shared disks. For example, start with drive Q as the quorum disk and then use drives R and S for the shared disks. Another method is to start with drive Z as the quorum disk and then work backward through the alphabet with drives X and Y as data disks. You might also want to consider labeling the drives in case the drive letters are lost. Using labels makes it easier to determine what the drive letter was. For example, a drive label of "DriveR" makes it easy to determine that this drive was drive letter R. We recommend that you follow these best practices when assigning drive letters because of the following issues:
  • Adding disks to the local nodes can cause the drive letters of the cluster disks to be revised up by one letter.
  • Adding disks to the local nodes can cause a discontinuous flow in the drive lettering and result in confusion.
  • Mapping a network drive can conflict with the drive letters on the cluster disks.
The letter Q is commonly used as a standard for the quorum disk. Q is used in the next procedure.
The first step in setting up disks for a cluster is to configure the cluster disks you plan to use. To do this, use the following procedure.
To configure cluster disks
  1. Make sure that only one node in the cluster is turned on.
  2. Open Computer Management (Local).
  3. In the console tree, click Computer Management (Local), click Storage, and then click Disk Management.
  4. When you first start Disk Management after installing a new disk, a wizard appears that provides a list of the new disks detected by the operating system. If a new disk is detected, the Write Signature and Upgrade Wizard starts. Follow the instructions in the wizard.
  5. Because the wizard automatically configures the disk as dynamic storage, you must reconfigure the disk to basic storage. To do this, right-click the disk, and then click Convert To Basic Disk.
  6. Right-click an unallocated region of a basic disk, and then click New Partition.
  7. In the New Partition Wizard, click Next, click Primary partition, and then click Next.
  8. By default, the maximum size for the partition is selected. Using multiple logical drives is better than using multiple partitions on one disk because cluster disks are managed at the LUN level, and logical drives are the smallest unit of failover.
  9. Change the default drive letter to one that is deeper into the alphabet. For example, start with drive Q as the quorum disk, and then use drives R and S for the data disks.
  10. Format the partition with the NTFS file system.
  11. In Volume Label, enter a name for the disk; for example, "Drive Q." Assigning a drive label for cluster disks reduces the time it takes to troubleshoot a disk recovery scenario.
    After you have finished entering values for the new partition, it should look similar to the following figure:
    New Partition Wizard - Format Partition
    Important
    Ensure that all disks are formatted as MBR; GPT disks are not supported as cluster disks.
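If you would rather script this step, Diskpart and Format can perform the equivalent partitioning and formatting from a command prompt on the single node that is turned on. The disk number, drive letter, and label below are examples and must match your own configuration; double-check the disk number with the list disk command first, because these operations destroy any existing data on the selected disk.

    rem Contents of a Diskpart script file, for example quorum.txt (disk number is an example):
    rem   select disk 1
    rem   create partition primary
    rem   assign letter=Q
    rem   exit

    rem Run the script, and then format and label the new partition with NTFS
    diskpart /s quorum.txt
    format Q: /fs:ntfs /v:DriveQ /q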
After you have configured the cluster disks, you should verify that the disks are accessible. To do this, use the following procedure.
To verify that the cluster disks are accessible
  1. Open Windows Explorer.
  2. Right-click one of the cluster disks, such as "Drive Q," click New, and then click Text Document.
  3. Verify that the text document was created and written to the specified disk, and then delete the document from the cluster disk.
  4. Repeat steps 1 through 3 for all cluster disks to verify that they are all accessible from the first node.
  5. Turn off the first node, and then turn on the second node.
  6. Repeat steps 1 through 3 to verify that the disks are all accessible from the second node.
  7. Repeat again for any additional nodes in the cluster.
  8. When finished, turn off the nodes and then turn on the first node again.
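The Windows Explorer steps above can be reduced to a few commands if you are working at a console or over a remote session. "Q:" is the example quorum drive letter used in this guide; repeat the commands for each cluster drive letter.

    rem Write, read back, and then remove a small test file on a cluster disk
    echo cluster disk access test > Q:\DiskTest.txt
    type Q:\DiskTest.txt
    del Q:\DiskTest.txt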

Creating a new server cluster

In the first phase of creating a new server cluster, you must provide all initial cluster configuration information. To do this, use the New Server Cluster Wizard.
Important
Before configuring the first node of the cluster, make sure that all other nodes are turned off. Also make sure that all cluster storage devices are turned on.
The following procedure explains how to use the New Server Cluster Wizard to configure the first cluster node.
To configure the first node
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. In the Open Connection to Cluster dialog box, in Action, select Create new cluster, and then click OK.
  3. The New Server Cluster Wizard appears. Verify that you have the necessary information to continue with the configuration, and then click Next to continue.
    Welcome to the New Server Cluster Wizard
  4. In Domain, select the name of the domain in which the cluster will be created. In Cluster name, enter a unique NetBIOS name. It is best to follow the DNS namespace rules when entering the cluster name. For more information, see article 254680, "DNS Namespace Planning," in the Microsoft Knowledge Base.
    Cluster Name and Domain
  5. On the Domain Access Denied page, if you are logged on locally with an account that is not a domain account with local administrative permissions, the wizard will prompt you to specify an account. This is not the account the Cluster service will use to start the cluster.
    Note
    If you have the appropriate credentials, the Domain Access Denied screen will not appear.
    Domain Access Denied
  6. Since it is possible to configure clusters remotely, you must verify or type the name of the computer you are using as the first node. On the Select Computer page, verify or type the name of the computer you plan to use.
    Note
    The wizard verifies that all nodes can see the cluster disks. In some complicated SANs, the target IDs for the disks might not match on all the cluster nodes. If this occurs, the Setup program might incorrectly determine that the disk configuration is not valid. To address this issue, click Advanced, and then click Advanced (minimum) configuration.
    Advanced Configuration Options
  7. On the Analyzing Configuration page, Setup analyzes the node for possible hardware or software issues that can cause installation problems. Review any warnings or error messages that appear. Click Details to obtain more information about each warning or error message.
    Analyzing Configuration
  8. On the IP Address page, type the unique, valid, cluster IP address, and then click Next. The wizard automatically associates the cluster IP address with one of the public networks by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only, and not for client connections.
    IP Address
  9. On the Cluster Service Account page, type the user name and password of the Cluster service account that was created during pre-installation. In Domain, select the domain name, and then click Next. The wizard verifies the user account and password.
    Cluster Service Account
  10. On the Proposed Cluster Configuration page, review the information for accuracy. You can use the summary information to reconfigure the cluster if a system recovery occurs. You should keep a hard copy of this summary information with the change management log at the server. To continue, click Next.
    Note
    If you want, you can click Quorum to change the quorum disk designation from the default disk resource. To make this change, in the Quorum resource box, click a different disk resource. If the disk has more than one partition, click the partition where you want the cluster-specific data to be kept, and then click OK.
    Proposed Cluster Configuration
  11. On the Creating the Cluster page, review any warnings or error messages that appear while the cluster is being created. Click to expand each warning or error message for more information. To continue, click Next.
  12. Click Finish to complete the cluster configuration.
    Note
    To view a detailed summary, click View Log, or view the text file stored at the following location:
    %SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log
    Completing the New Server Cluster Wizard
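
If you prefer to work from a command prompt, the cluster.exe command-line tool included with Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition can give you a quick view of the cluster you just created. The following lines are a sketch only, not part of the wizard procedure: MYCLUSTER is a placeholder for your cluster name, and you should confirm the option syntax on your own system with cluster /?.
    rem View the detailed configuration log written by the New Server Cluster Wizard.
    notepad %SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log
    rem Confirm that the Cluster service (service name ClusSvc) is installed and running.
    sc query clussvc
    rem List the nodes of the new cluster and their current state.
    cluster MYCLUSTER node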

Validating the cluster installation

You should validate the cluster configuration of the first node before configuring the second node. To do this, use the following procedure.
To validate the cluster configuration
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. Verify that all cluster resources are successfully up and running. Under State, all resources should be "Online."
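
You can make the same check from a command prompt with cluster.exe instead of the Cluster Administrator State column. This is a sketch only; MYCLUSTER is a placeholder for your cluster name, and you can verify the syntax with cluster /?.
    rem List every resource, the group it belongs to, its owner node, and its state.
    cluster MYCLUSTER resource
    rem List the cluster groups and confirm that each one reports Online.
    cluster MYCLUSTER group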

Configuring subsequent nodes

After you install the Cluster service on the first node, it takes less time to install it on subsequent nodes. This is because the Setup program uses the network configuration settings configured on the first node as a basis for configuring the network settings on subsequent nodes. You can also install the Cluster service on multiple nodes at the same time and choose to install it from a remote location.
Note
The first node and all cluster disks must be turned on. You can then turn on all other nodes. At this stage, the Cluster service controls access to the cluster disks, which helps prevent disk corruption. You should also verify that all cluster disks have had resources automatically created for them. If they have not, manually create them before adding any more nodes to the cluster.
After you have configured the first node, you can use the following procedure to configure subsequent nodes.
To configure the second node
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. In the Open Connection to Cluster dialog box, in Action, select Add nodes to cluster. Then, in Cluster or server name, type the name of an existing cluster, select a name from the drop-down list box, or click Browse to search for an available cluster, and then click OK to continue.
  3. When the Add Nodes Wizard appears, click Next to continue.
  4. If you are not logged on with the required credentials, you will be asked to specify a domain account that has administrator rights and permissions on all nodes in the cluster.
  5. In the Domain list, click the domain where the server cluster is located, make sure that the server cluster name appears in the Cluster name box, and then click Next.
  6. In the Computer name box, type the name of the node that you want to add to the cluster. For example, to add Node2, you would type Node2.
    Add Nodes Wizard - Select Computers
  7. Click Add, and then click Next.
  8. When the Add Nodes Wizard has analyzed the cluster configuration successfully, click Next.
  9. On the Cluster Service Account page, in Password, type the password for the Cluster service account. Ensure that the correct domain for this account appears in the Domain list, and then click Next.
  10. On the Proposed Cluster Configuration page, view the configuration details to verify that the server cluster IP address, the networking information, and the managed disk information are correct, and then click Next.
  11. When the cluster is configured successfully, click Next, and then click Finish.
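
If you want to cross-check from a command prompt after the Add Nodes Wizard finishes, the following sketch confirms that the new node has joined and that its Cluster service is running. MYCLUSTER and NODE2 are placeholders; substitute your own cluster and node names.
    rem Both nodes should now be listed with a status of Up.
    cluster MYCLUSTER node
    rem Query the Cluster service on the node that was just added.
    sc \\NODE2 query clussvc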

Configuring the server cluster after installation

Heartbeat configuration

After the network and the Cluster service have been configured on each node, you should determine the network's function within the cluster. Using Cluster Administrator, select the Enable this network for cluster use check box and then select one of the following options.
  • Client access only (public network): Select this option if you want the Cluster service to use this network adapter only for external communication with other clients. No node-to-node communication will take place on this network adapter.
  • Internal cluster communications only (private network): Select this option if you want the Cluster service to use this network only for node-to-node communication.
  • All communications (mixed network): Select this option if you want the Cluster service to use the network adapter for node-to-node communication and for communication with external clients. This option is selected by default for all networks.
This guide assumes that only two networks are in use. It explains how to configure these networks as one mixed network and one private network. This is the most common configuration.
Use the following procedure to configure the heartbeat.
To configure the heartbeat
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. In the console tree, double-click Cluster Configuration, and then click Networks.
  3. In the details pane, right-click the private network you want to enable, and then click Properties. Private Properties opens and looks similar to the following figure:
    Private Network Properties
  4. Select the Enable this network for cluster use check box.
  5. Click Internal cluster communications only (private network), and then click OK.
  6. In the details pane, right-click the public network you want to enable, and then click Properties. Public Properties opens and looks similar to the following figure:
    Public Network Properties
  7. Select the Enable this network for cluster use check box.
  8. Click All communications (mixed network), and then click OK.
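
The same role settings can be reviewed, and usually changed, with cluster.exe. In the sketch below, MYCLUSTER, "Private", and "Public" are placeholders for your cluster and network names, and the Role values are an assumption based on the Cluster API network-role values (1 = internal cluster communications only, 2 = client access only, 3 = all communications); confirm them with cluster network /? before you rely on them.
    rem List the cluster networks and their current state.
    cluster MYCLUSTER network
    rem Display the common properties, including Role, of the private network.
    cluster MYCLUSTER network "Private" /prop
    rem Assumed property values: set the private network to internal-only (Role=1)
    rem and the public network to all communications (Role=3).
    cluster MYCLUSTER network "Private" /prop Role=1
    cluster MYCLUSTER network "Public" /prop Role=3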
Prioritizing the order of the heartbeat adapters
After you have decided the role that each network adapter will play in the cluster, you must prioritize the order in which the adapters are used for internal cluster communication. To do this, use the following procedure.
To configure network priority
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. In the console tree, click the cluster you want.
  3. On the File menu, click Properties.
    Network Priority
  4. Click the Network Priority tab.
  5. In Networks used for internal cluster communications, click a network.
  7. To increase the network priority, click Move Up; to lower the network priority, click Move Down.
  7. When you are finished, click OK.
    Note
    If multiple networks are configured as private or mixed, you can specify which one to use for internal node communication. It is usually best for private networks to have higher priority than mixed networks.
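
The priority order itself is easiest to set through Cluster Administrator as described above, but if you keep the configuration in your change management log, you can capture the cluster's property values from a command prompt. This is a sketch only; MYCLUSTER is a placeholder, and the /prop and /priv options should be verified with cluster /? on your system.
    rem Display the cluster's common properties.
    cluster MYCLUSTER /prop
    rem Display the cluster's private properties.
    cluster MYCLUSTER /priv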

Quorum disk configuration

The New Server Cluster Wizard and the Add Nodes Wizard automatically select the drive used for the quorum device, choosing the smallest partition they find that is larger than 50 MB. If you prefer, you can change this selection to a dedicated disk that you have designated for use as the quorum. The following procedure explains how to use a different disk for the quorum resource.
To use a different disk for the quorum resource
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. If one does not already exist, create a physical disk or other storage-class resource for the new disk.
  3. In the console tree, click the cluster name.
  4. On the File menu, click Properties, and then click the Quorum tab. The quorum property page opens and looks similar to the following figure:
    Quorum Properties
  5. On the Quorum tab, click Quorum resource, and then select the new disk or storage-class resource that you want to use as the quorum resource for the cluster.
  6. In Partition, if the disk has more than one partition, click the partition where you want the cluster-specific data kept.
  7. In Root path, type the path to the folder on the partition; for example:
    \MSCS
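
If you script cluster configuration, the quorum resource can also be examined, and on most systems changed, with cluster.exe. The option spelling below (/quorum and /quorumresource) is an assumption drawn from the tool's general syntax rather than from this guide, and "Disk Q:" is a placeholder resource name; check cluster /? before using it.
    rem Display the current quorum resource, its path, and its log size.
    cluster MYCLUSTER /quorum
    rem Assumed syntax: designate a different physical disk resource and folder as the quorum.
    cluster MYCLUSTER /quorumresource:"Disk Q:" /path:Q:\MSCS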

Testing the Server Cluster

After Setup, there are several methods you can use to verify a cluster installation.
  • Use Cluster Administrator. After Setup is run on the first node, open Cluster Administrator, and then try to connect to the cluster. If Setup was run on a second node, start Cluster Administrator on either the first or second node, attempt to connect to the cluster, and then verify that the second node is listed.
  • Services snap-in. Use the Services snap-in to verify that the Cluster service is listed and started.
  • Event log. Use Event Viewer to check for ClusSvc entries in the system log. You should see entries that confirm the Cluster service successfully formed or joined a cluster.
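
For the Services snap-in check, built-in command-line tools offer a quick alternative. The Cluster service's service name is ClusSvc; the commands below are a sketch that uses only standard Windows Server 2003 tools.
    rem Verify that the Cluster service is installed and running.
    sc query clussvc
    rem The list of started services should include the Cluster service.
    net start | find /i "cluster"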

Testing whether group resources can fail over

You might want to ensure that a new group is functioning correctly. To do this, use the following procedure.
To test whether group resources can fail over
  1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
  2. In the console tree, double-click the Groups folder.
  3. In the console tree, click a group.
  4. On the File menu, click Move Group. On a multi-node cluster, when you use Move Group, select the node to move the group to. Make sure the Owner column in the details pane reflects a change of owner for all of the group's dependencies.
  5. If the group resources successfully fail over, the group will be brought online on the second node after a short period of time.
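
The same failover test can be run from a command prompt. "Cluster Group" is the default group that contains the cluster IP address and network name; MYCLUSTER and NODE2 are placeholders, and the /moveto option is an assumption to confirm with cluster group /? before use.
    rem Show which node currently owns each group.
    cluster MYCLUSTER group
    rem Move the default cluster group to the second node.
    cluster MYCLUSTER group "Cluster Group" /moveto:NODE2
    rem Confirm that the group came back online on the second node.
    cluster MYCLUSTER group "Cluster Group" /status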

SCSI Drive Installations

This section of the guide provides a generic set of instructions for parallel SCSI drive installations.
Important
If the SCSI hard disk vendor’s instructions differ from the instructions provided here, follow the instructions supplied by the vendor.
The SCSI bus listed in the hardware requirements must be configured before you install the Cluster service. This configuration applies to the following:
  • The SCSI devices.
  • The SCSI controllers and the hard disks. This is to ensure that they work properly on a shared SCSI bus.
  • The termination of the shared bus. If a shared bus must be terminated, it must be done properly. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.
In addition to the following information, refer to documentation from the manufacturer of your SCSI device.

Configuring SCSI devices

Each device on the shared SCSI bus must have a unique SCSI identification number. Because most SCSI controllers default to SCSI ID 7, configuring the shared SCSI bus includes changing the SCSI ID number on one controller to a different number, such as SCSI ID 6. If there is more than one disk that will be on the shared SCSI bus, each disk must have a unique SCSI ID number.

Storage Area Network Considerations

Fibre Channel systems are required for all server clusters running 64-bit versions of Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition. It is also best to use Fibre Channel systems for clusters of three or more nodes. Two methods of Fibre Channel-based storage are supported in a cluster that is running Windows Server 2003: arbitrated loops and switched fabric.
Note
To determine which type of Fibre Channel hardware to use, read the Fibre Channel vendor's documentation.

Fibre Channel arbitrated loops (FC-AL)

A Fibre Channel arbitrated loop (FC-AL) is a set of nodes and devices connected into a single loop. FC-AL provides a cost-effective way to connect up to 126 devices into a single network.
Fibre Channel arbitrated loops provide a solution for a small number of devices in a relatively fixed configuration. All devices on the loop share the media, and any packet traveling from one device to another must pass through all intermediate devices. FC-AL is a good choice if a low number of cluster nodes is sufficient to meet your high-availability requirements.
FC-AL offers the following advantages:
  • The cost is relatively low.
  • Loops can be expanded to add storage; however, nodes cannot be added.
  • Loops are easy for Fibre Channel vendors to develop.
The disadvantage of FC-ALs is that they can be difficult to deploy successfully. This is because every device on the loop shares the media, which causes the overall bandwidth of the cluster to be lower. Some organizations might also not want to be restricted by the 126-device limit. Having more than one cluster on the same arbitrated loop is not supported.

Fibre Channel switched fabric (FC-SW)

With Fibre Channel switched fabric, switching hardware can link multiple nodes together into a matrix of Fibre Channel nodes. A switched fabric is responsible for device interconnection and switching. When a node is connected to a Fibre Channel switching fabric, it is responsible for managing only the single point-to-point connection between itself and the fabric. The fabric handles physical interconnections to other nodes, transporting messages, flow control, and error detection and correction. Switched fabrics also offer very fast switching latency.
The switching fabric can be configured to allow multiple paths between the same two ports. It provides efficient sharing (at the cost of higher contention) of the available bandwidth. It also makes effective use of the burst nature of communications with high-speed peripheral devices.
Fibre Channel Switched Fabric
Other advantages to using switched fabric include the following:
  • It is easy to deploy.
  • It can support millions of devices.
  • The switches provide fault isolation and rerouting.
  • There is no shared media, which allows faster communication in the cluster.

Zoning vs. LUN masking

Zoning and LUN masking are important to SAN deployments, especially if you are deploying a SAN with a server cluster that is running Windows Server 2003.

Zoning

Many devices and nodes can be attached to a SAN. With data stored in a single storage entity (known as a "cloud"), it is important to control which hosts have access to specific devices. Zoning allows administrators to partition devices in logical volumes, thereby reserving the devices in a volume for a server cluster. This means that all interactions between cluster nodes and devices in the logical storage volumes are isolated within the boundaries of the zone; other non-cluster members of the SAN are not affected by cluster activity. The elements used in zoning are shown in the following figure:
Zoning
You must implement zoning at the hardware level with the controller or switch, not through software. This is because zoning is a security mechanism for a SAN-based cluster. Unauthorized servers cannot access devices inside the zone. Access control is implemented by the switches in the fabric, so a host adapter cannot gain access to a device for which it has not been configured. With software zoning, the cluster would not be secure if the software component failed.
In addition to providing cluster security, zoning also limits the traffic flow within a given SAN environment. Traffic between ports is routed only to segments of the fabric that are in the same zone.

LUN masking

A logical unit number (LUN) is a logical disk defined within a SAN. Server clusters see LUNs and act as though the LUNs are physical disks. With LUN masking, which is performed at the controller level, you can define relationships between LUNs and cluster nodes. Storage controllers usually provide the means for creating LUN-level access controls that allow one or more hosts to access a given LUN. With access control at the storage controller, the controller itself can enforce access policies to the devices.
LUN masking provides security at a more detailed level than zoning. This is because LUNs allow for zoning at the port level. For example, many SAN switches allow overlapping zones, which enable a storage controller to reside in multiple zones. Multiple clusters in multiple zones can share the data on those controllers. The elements used in LUN masking are shown in the following figure:
LUN Masking

Other Resources

For comprehensive lists of the hardware and software supported by Windows operating systems, and for additional information about server clusters, see the following resources:
  • For the latest information about Windows Server 2003, see the Windows Server 2003 Web site at the Microsoft Web site.
  • For interactive help in solving a problem with your computer, or to research your problem, see Product Support Services at the Microsoft Web site.
  • For additional information about cluster deployment, see "Designing and Deploying Clusters" at the Microsoft Web site.
  • For information about troubleshooting, see "Troubleshooting cluster node installations" at the Microsoft Web site.
  • For information about quorum configuration, see "Quorum Drive Configuration Information" at the Microsoft Web site.
  • For information about private heartbeat configuration, see "Recommended private 'Heartbeat' configuration on a cluster server" at the Microsoft Web site.
  • For information about network failure, see "Network Failure Detection and Recovery in a Server Cluster" at the Microsoft Web site.
  • For information about quorum disk designation, see "How to Change Quorum Disk Designation" at the Microsoft Web site.
  • For additional information about Storage Area Networks, see "Microsoft Windows Clustering: Storage Area Networks" at the Microsoft Web site.
  • For information about geographically dispersed clusters, see "Geographically Dispersed Clusters in Windows Server 2003" at the Microsoft Web site.

