How to Build an HA Cluster on Proxmox 6.x (Debian 10) with Multipath iSCSI


Step 1. Install Proxmox 6.0 on the node01, node02 and node03 machines

Change the network interface settings

Create a management access network and assign it an IP address. It should be a virtual bridge. You can do this from the GUI or the console. Add your management network as below.

nano or vi /etc/network/interfaces

auto vmbr0 
iface vmbr0 inet static 
address  10.64.100.100 
netmask  24 
gateway  10.64.100.254 
bridge-ports eno5 
bridge-stp off 
bridge-fd 0 
#Management 

Create a corporate network and make it VLAN-aware.

Add your corporate network as below.

nano or vi /etc/network/interfaces

auto vmbr1 
iface vmbr1 inet manual 
bridge-ports eno6 eno7 eno8 
bridge-stp off 
bridge-fd 0 
bridge-vlan-aware yes 
bridge-vids 2-4094 
#Corporate Network
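With the VLAN-aware bridge in place, a guest can be attached to any VLAN simply by tagging its virtual NIC. A hypothetical example (the VM ID 100 and VLAN tag 100 are only placeholders, not values from this setup):

qm set 100 --net0 virtio,bridge=vmbr1,tag=100    # attach VM 100 to vmbr1 with VLAN tag 100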

Create a bonded network interface for cluster HA; the bond should use the active-backup or LACP method (LACP only if both the switch and the storage support it; this network card should also be 10 Gigabit). You can do this from the GUI or the console.

Add your iSCSI network as below.

nano or vi /etc/network/interfaces

auto bond0
iface bond0 inet static
address  10.64.200.10
netmask  24
bond-slaves ens1f0 ens1f1
bond-miimon 100
bond-mode active-backup
#Datastore-Network
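After editing /etc/network/interfaces, the changes must be activated. A reboot always works; as a sketch, if the optional ifupdown2 package is installed, the configuration can also be reloaded in place:

ifreload -a    # re-apply /etc/network/interfaces without rebooting (ifupdown2 only)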

Assign static IPv4 addresses on the iSCSI network between the nodes and the storage; this network should also be isolated from the other access networks.

If you want to use a trunk (tagged VLANs), a virtual bridge must be created for the servers' access network.

The cluster must be created first, and then the other nodes must be joined to it over the management network or over a separate cluster network.
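As a minimal sketch, assuming the cluster is named ha-cluster (a placeholder) and node01 keeps the management address 10.64.100.100 shown above:

pvecm create ha-cluster        # run once, on node01
pvecm add 10.64.100.100        # run on node02 and node03 to join them to node01
pvecm status                   # check quorum and membership on any node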

Step 2. The storage must be configured. We used an HPE MSA series array for this.

First we have to add the hosts on the storage. Then we must create a host (cluster) group and include the hosts in it.

Every server's iSCSI initiator name (IQN) should be added on the storage.

To find it, connect to each node and read the config file with the cat command: cat /etc/iscsi/initiatorname.iscsi
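The file contains a single line with the node's IQN. On Debian it typically looks like the hypothetical example below (the suffix differs on every node):

InitiatorName=iqn.1993-08.org.debian:01:abcdef123456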

A pool must be created and the disk groups included in it.

Create a virtual volume group from these pools, then set the LUN ID and port numbers.

Finally, the volume groups should be mapped to the host group.

Step 3. iSCSI must be added from the Proxmox GUI menu. For this, the bonding network (active-backup) must be used on all nodes, and the storage should present at least two IP addresses (one per controller).
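As a sketch of the GUI dialog (the storage ID and portal address below are placeholders; the target is chosen from the list discovered on the portal):

Steps >> Datacenter > Storage > add > iSCSI

ID : msa-iscsi
Portal : 10.64.200.20
Target : (select the MSA target IQN discovered on the portal)
Nodes : All
Use LUNs directly : unchecked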

iSCSI multipath must be set up on all nodes as shown below.

Modify the iSCSI configuration:

nano or vi /etc/iscsi/iscsid.conf 

node.startup = automatic 
node.session.timeo.replacement_timeout = 15 
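After changing iscsid.conf, restart the iSCSI daemon on each node so the new settings take effect (the unit names below are the standard Debian 10 open-iscsi units):

systemctl restart iscsid open-iscsi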

Install the multipath tools on all nodes:

apt update 

apt install multipath-tools 

Create a multipath configuration file under /etc:

nano /etc/multipath.conf

Save and exit with Ctrl+O, then Ctrl+X.

The WWID must be added to the multipath config file. We can find it with the following commands.

/lib/udev/scsi_id -g -u -d /dev/sdc for storage controller-a  

/lib/udev/scsi_id -g -u -d /dev/sdd for storage controller-b 

This WWID will be the same for both TCP connections (both paths). For example:

9400c0ff0002295634c325f7e04000000

Now, edit multipath.conf:

defaults {
    find_multipaths            no       # for Debian 10 and Proxmox 6.x
    user_friendly_names        yes
    polling_interval           10
    path_selector              "round-robin 0"
    path_grouping_policy       group_by_prio
    prio                       alua
    path_checker               tur
    rr_min_io_rq               100
    flush_on_last_del          no
    max_fds                    "max"
    rr_weight                  uniform
    failback                   immediate
    no_path_retry              18
    queue_without_daemon       no
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^sda[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
}

multipaths {
    multipath {
        wwid    9400c0ff0002295634c325f7e04000000
        alias   msa-storage
    }
}

Save and exit the file: Ctrl+O, then Ctrl+X.

Restart the multipath service:

systemctl restart multipath-tools.service 

Display the multipath status:

multipath -ll

msa-storage (9400c0ff0002295634c325f7e04000000) dm-5 HPE,MSA 2052 SAN
size=39T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 2:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 3:0:0:0 sdd 8:48 active ready running

The volume group must be created in a terminal; use SSH or a console connection via VNC for this.

First we have to see which physical volumes exist. For this, look with one of the commands below.

pvs (or pvdisplay), vgs, or fdisk -l

Create a volume group like this:

vgcreate vg-name /dev/mapper/msa-storage 
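As a minimal sketch of the whole sequence (vg-name is the placeholder used above; vgcreate will initialize the device as a physical volume on its own, but pvcreate can also be run explicitly first):

pvcreate /dev/mapper/msa-storage            # optional: label the multipath device as an LVM PV
vgcreate vg-name /dev/mapper/msa-storage    # create the shared volume group
vgs                                         # confirm the new volume group is visible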

Then add the LVM storage to the datacenter in the Proxmox GUI (a CLI equivalent is sketched after the field list below).

Steps >> Datacenter > Storage > add > LVM  

ID : lvm-name
Base storage : Existing volume groups
Volume group : vg-name
Content : Disk Image, Container
Nodes : All
Enable : checked
Shared : checked
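The same storage entry can also be created from the shell; a sketch using the placeholder names above:

pvesm add lvm lvm-name --vgname vg-name --content images,rootdir --shared 1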

The new storage will automatically be displayed on each node's dashboard:

Node01 >> lvm-name example : Datacenter
Node02 >> lvm-name example : Datacenter
Node03 >> lvm-name example : Datacenter

Successful!

You can now create a virtual machine in the datacenter; you can then see the VM disk status on the nodes with the command: lvs

Display volume groups: vgs

Display physical volumes: pvs (or, for details, pvdisplay)

Display iSCSI session status: iscsiadm -m session

You can close an iSCSI connection with: iscsiadm -m node -T "iscsi-iqn-number" --logout

Resources: https://www.youtube.com/watch?v=H6pcp66jE44
