Steps Taken to Set up Cluster

Setting up Stonith

This is based on the Red Hat document describing how to do it. That document had some issues, so here are the steps I took.

# fence_vmware_soap -a 10.64.4.100 -l rhel-license -p HPv1rtual123 --ssl-insecure -z -o list | grep dubaosapp
# pcs stonith create vmfence fence_vmware_soap pcmk_host_map="dubaosapp01.office.local:dubaosapp01;dubaosapp02.office.local:dubaosapp02" ipaddr=dubppvc01.office.local ssl_insecure=1 login=rhel-cluster passwd=HPv1rtual123

Use

# pcs stonith show --full  

and verify that ssl_insecure=1 is set, otherwise you will not be able to connect to the vCenter.
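
If ssl_insecure is missing, it can be added without recreating the device. pcs stonith update is a standard pcs subcommand; vmfence is the device name from the create command above:

# pcs stonith update vmfence ssl_insecure=1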

Also note that the user you use has to have the following permissions on the VMs (see the check after the list):

  • System.Anonymous
  • System.View
  • VirtualMachine.Interact.PowerOff
  • VirtualMachine.Interact.PowerOn
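
A quick way to confirm the account and its permissions is to query a single VM's power status with the fence agent directly. This uses the standard -o status and -n (plug) options of fence_vmware_soap; the VM name below is taken from the host map above and may need adjusting:

# fence_vmware_soap -a 10.64.4.100 -l rhel-license -p HPv1rtual123 --ssl-insecure -z -o status -n dubaosapp01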

You can test that stonith is working by doing

# stonith_admin --reboot node1.example.com

OR

# pcs stonith fence node1.example.com

Both will reboot the VM.
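
To review which fencing actions have run and whether they succeeded, stonith_admin can show the fencing history (a general Pacemaker command, nothing specific to this setup):

# stonith_admin --history '*'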

Setting up the Cluster IP

The command I used to set this up is:

# pcs resource create ClusterIP IPaddr2 ip=10.65.3.12 cidr_netmask=24 --group AOSgroup

This address is associated with the AOSgroup resource group so that the LVM storage and the IP fail over together.
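
To confirm the address is up, check the group's resources and look for the IP on the node currently running it. Both are standard commands; only the AOSgroup name from the create command above is assumed:

# pcs resource show AOSgroup
# ip addr show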

Setting up the LVM Shared Storage

The storage is composed of two 160 GB LUNs presented to all four servers and then used as RDMs on the two cluster machines.

Things to note: on the first machine you set it up as an RDM with all settings set to physical, and ensure it uses a SCSI ID of 1.x. On the second machine you select "use already existing disks" and browse for the config files of the two VMs. Again, ensure that they are set to SCSI ID 1.x (the same as the first node) and that the controller and disks are set to physical.
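
Before creating the mirrored logical volume, create the physical volumes and the volume group on the two RDM devices. A minimal sketch, assuming the two LUNs show up as /dev/sdb and /dev/sdc (check with lsblk, your device names may differ):

# pvcreate /dev/sdb /dev/sdc
# vgcreate vg-mirror /dev/sdb /dev/sdc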

Then, after running pvcreate and creating the volume group, use the following command to set the two LUNs up as a mirror:

# lvcreate --type mirror -L99G -m 1 -n lv-mirror vg-mirror
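
You can watch the mirror sync and see which devices back it with a standard lvs query:

# lvs -a -o +devices vg-mirror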

You then need to ensure exclusive activation of the volume group by the cluster.

Edit /etc/lvm/lvm.conf

and ensure that (see the snippet after this list):

  1. locking_type is set to 1
  2. use_lvmetad is set to 0
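
For reference, both settings live in the global section of /etc/lvm/lvm.conf and should end up looking like this (only the relevant settings are shown, the rest of the file is left alone):

global {
    locking_type = 1
    use_lvmetad = 0
}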

Use

# vgs --noheadings -o vg_name

to find out the names of your volume groups, then edit /etc/lvm/lvm.conf and

  1. Ensure that volume_list = [ "rhel_root", "rhel_home" ] contains all volume groups other than the one used by the cluster (vg-mirror)

If you add volume groups later, ensure that they are listed here, otherwise you will get the error

VG is not active locally

To resolve this, just add the volume group to volume_list in /etc/lvm/lvm.conf.

Finally, rebuild the initramfs with

# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

and then reboot the machine.

After the reboot, run

# pcs resource create cluster-lvm LVM volgrpname=vg-mirror exclusive=true --group AOSgroup

This creates the cluster resource cluster-lvm using the volume group vg-mirror and marks it for exclusive activation by the cluster.

# pcs resource create cluster-fs Filesystem device="/dev/vg-mirror/lv-mirror" directory="/shared_disk" fstype="ext4" --group AOSgroup

This creates the shared filesystem resource and mounts it on /shared_disk.
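
With everything in AOSgroup you can check that the group is running and test a failover. pcs resource move and pcs resource clear are standard pcs commands; the node name is the second host from the fencing host map and may need adjusting for your environment:

# pcs status
# df -h /shared_disk
# pcs resource move AOSgroup dubaosapp02.office.local
# pcs resource clear AOSgroup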
