The Pool Master
Note that you should either transition the pool master role to another host or put the pool master into maintenance mode before rebooting it; otherwise you can run into problems. Never simply reboot the pool master from its running state.
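For a planned reboot, the master role can be handed to another pool member first with pool-designate-new-master. A minimal sketch, where <new-master-uuid> is a placeholder for the UUID of the host that is to become the new master (as reported by xe host-list):

#xe host-list params=uuid,name-label
#xe pool-designate-new-master host-uuid=<new-master-uuid>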
As Alan says, other pools are not affected at all. The only exception I can think of off-hand would be if workload balancing (WLB) were running on that particular host and controlling WLB for other pools.
If the servers in the pool do not restart, run the following command on whichever server you want to assign as the new pool master: #xe pool-emergency-transition-to-master
After the old master server and the other hosts start up again, those servers do not have the updated details of the new pool master. They enter emergency mode and must be told which server is now the master.
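Once a new master has been nominated, the members stuck in emergency mode can be informed of it. Run the following on the new master to point the remaining hosts at it:

#xe pool-recover-slaves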
This section describes how resource pools can be created through a series of examples using the xe command line interface (CLI). A simple NFS-based shared storage configuration is presented and several simple VM management examples are discussed. It also contains procedures for dealing with physical node failures.
A resource pool comprises multiple Citrix Hypervisor server installations, bound together as a single managed entity which can host Virtual Machines. If combined with shared storage, a resource pool enables VMs to be started on any Citrix Hypervisor server which has sufficient memory. The VMs can then be dynamically moved among Citrix Hypervisor servers while running, with minimal downtime (live migration). If an individual Citrix Hypervisor server suffers a hardware failure, the administrator can restart failed VMs on another Citrix Hypervisor server in the same resource pool. When high availability is enabled on the resource pool, VMs automatically move to another host when their host fails. Up to 64 hosts are supported per resource pool, although this restriction is not enforced.
A pool always has at least one physical node, known as the master. Only the master node exposes an administration interface (used by XenCenter and the Citrix Hypervisor Command Line Interface, known as the xe CLI). The master forwards commands to individual members as necessary.
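To find out which host currently holds the master role, you can query the pool object and then resolve the UUID it reports; <master-uuid> below stands for the value returned by the first command:

#xe pool-list params=master
#xe host-list uuid=<master-uuid> params=name-label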
Citrix Hypervisor servers in resource pools can contain different numbers of physical network interfaces and have local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with exactly the same CPUs, so minor variations are permitted. If it is acceptable to have hosts with varying CPUs in the same pool, you can force the pool-join operation by passing the --force parameter.
A pool must contain shared storage repositories in order to choose which Citrix Hypervisor server runs a VM and to move VMs between Citrix Hypervisor servers dynamically. If possible, create a pool only after shared storage is available. We recommend that you move any existing VMs whose disks are in local storage to shared storage after adding it. You can use the xe vm-copy command or XenCenter to move VMs.
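For example, a VM whose disks live on local storage can be copied to the shared SR with xe vm-copy; the values in angle brackets are placeholders:

#xe vm-copy vm=<vm-name> sr-uuid=<shared-sr-uuid> new-name-label=<vm-name-on-shared-storage>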
Resource pools can be created using XenCenter or the CLI. When a new host joins a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:
VM, local, and remote storage configuration is added to the pool-wide database. This configuration is applied to the joining host in the pool unless you explicitly make the resources shared after the host joins the pool.
The location of the management interface, which remains the same as the original configuration. For example, if the other pool hosts have their management interfaces on a bonded interface, the management interface of the joining host must be migrated to the bond after it joins.
Dedicated storage NICs, which must be reassigned to the joining host from XenCenter or the CLI, and the PBDs replugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC works only when this is correctly configured. For more information on how to dedicate a storage NIC from the CLI, see Manage networking.
The master-address must be set to the fully qualified domain name of Citrix Hypervisor server host1. The password must be the administrator password set when Citrix Hypervisor server host1 was installed.
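Put together, a join issued from the CLI of the joining host looks something like the following; host1.example.com is a placeholder, and --force is only needed for hosts with differing CPUs, as described below:

#xe pool-join master-address=host1.example.com master-username=root master-password=<password>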
The CPUs of Citrix Hypervisor servers joining heterogeneous pools must be of the same vendor (that is, AMD, Intel) as the CPUs of the hosts already in the pool. However, the servers are not required to be the same type at the level of family, model, or stepping numbers.
Citrix Hypervisor simplifies the support of heterogeneous pools. Hosts can now be added to existing resource pools, irrespective of the underlying CPU type (as long as the CPU is from the same vendor family). The pool feature set is dynamically recalculated whenever pool membership changes, for example when a new host joins the pool, a pool member leaves, or a pool member reconnects after a reboot.
device-config:server is the host name of the NFS server, and device-config:serverpath is the path on the NFS server. Because shared is set to true, the shared storage is automatically connected to every Citrix Hypervisor server in the pool. Any Citrix Hypervisor servers that join later are also connected to the storage. The Universally Unique Identifier (UUID) of the storage repository is printed on the screen.
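For reference, the sr-create invocation that these parameters belong to looks along these lines; the SR name, server name, and export path are placeholders:

#xe sr-create content-type=user type=nfs shared=true name-label="Example NFS SR" device-config:server=nfs.example.com device-config:serverpath=/export/vms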
As the shared storage has been set as the pool-wide default, all future VMs have their disks created on shared storage by default. For information about creating other types of shared storage, see Storage repository formats.
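Making an SR the pool-wide default is a pool parameter change; a sketch with placeholder UUIDs:

#xe pool-param-set uuid=<pool-uuid> default-SR=<sr-uuid>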
When you remove (eject) a host from a pool, the machine is rebooted, reinitialized, and left in a state similar to a fresh installation. Do not eject a host from a resource pool if it has important data on its local disks: all of that data is erased when the host is ejected. If you want to preserve the data, copy the VM to shared storage on the pool using XenCenter or the xe vm-copy CLI command.
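Once you have confirmed that nothing on the local disks is still needed, the eject itself is a single command run against the departing host's UUID:

#xe pool-eject host-uuid=<host-uuid>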
When Citrix Hypervisor servers containing locally stored VMs are ejected from a pool, those VMs remain in the pool database and are still visible to the other Citrix Hypervisor servers. The VMs do not start until their virtual disks have been changed to point at shared storage visible to other Citrix Hypervisor servers in the pool, or removed. Therefore, we recommend moving any local storage to shared storage when joining a pool, so that individual Citrix Hypervisor servers can be ejected (or physically fail) without loss of data.
Before performing maintenance operations on a host that is part of a resource pool, you must disable it. Disabling the host prevents any VMs from being started on it. You must then migrate its VMs to another Citrix Hypervisor server in the pool. You can do this by placing the Citrix Hypervisor server into Maintenance mode using XenCenter. For more information, see Run in maintenance mode in the XenCenter documentation.
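From the CLI, the rough equivalent of entering maintenance mode is to disable the host and then evacuate its running VMs to other pool members; <host-uuid> is a placeholder:

#xe host-disable uuid=<host-uuid>
#xe host-evacuate uuid=<host-uuid>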
The Export Resource Data option allows you to generate a resource data report for your pool and export the report into an .xls or .csv file. This report provides detailed information about various resources in the pool, such as servers, networks, storage, virtual machines, VDIs, and GPUs. This feature enables administrators to track, plan, and assign resources based on various workloads such as CPU, storage, and network.
These additional cipher suites use CBC mode. Although some organizations prefer GCM mode, Windows Server 2012 R2 does not support RSA cipher suites with GCM mode. Clients running on Windows Server 2012 R2 that connect to a Citrix Hypervisor server or pool might need to use these CBC-mode cipher suites.
When you rotate the pool secret, you are also prompted to change the root password. If you rotated the pool secret because you think that your environment has been compromised, ensure that you also change the root password. For more information, see Change the password.
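On versions that support pool secret rotation, the rotation itself is a single command issued on the pool master:

#xe pool-secret-rotate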
You can enable IGMP snooping on a pool using either XenCenter or the command-line interface. To enable IGMP snooping using XenCenter, navigate to Pool Properties and select Network Options. For xe commands, see pool-igmp-snooping.
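On the CLI side, the setting is a pool parameter; a sketch, with <pool-uuid> as a placeholder:

#xe pool-param-set uuid=<pool-uuid> igmp-snooping-enabled=true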
When enabling this feature on a pool, you may also need to enable an IGMP querier on one of the physical switches. Otherwise, multicast traffic in the subnetwork falls back to broadcast and may degrade Citrix Hypervisor performance.
Now XenCenter is trying to connect to the IP of Node00, which just happened to go down. I could try to use the IP of the remaining pool member, but that would be too easy, and it resulted in some:
2. If your pool master is already down for some reason, you can still promote any slave to master of the pool. Issue the following commands from the slave that will become the new master:
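These are presumably the emergency commands covered earlier, run on the slave being promoted:

#xe pool-emergency-transition-to-master
#xe pool-recover-slaves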
A large issue here is that all of the slaves in the pool just disabled their management interfaces, so they cannot be connected to using XenCenter (something I did not expect). So I connected to plot2 via SSH.
Turns out, when I shut down the pool master, I was shutting down the pool! I was not simulating an error at all. Somehow the pool master notified the slaves that it was gracefully shutting down, telling the slaves "don't worry, I will be all right", and the commands above never returned. So I just told plot2 to take over as master to see how we could recover from this situation.
At this point, on plot2, the pool was restored, but we still could not connect to the management interfaces of any of the other hosts in the pool. XenCenter WAS able to connect to plot2, however, and it synchronized the entire pool, showing all of the other hosts (including plot1, which was previously the master) as down.