CentOS 7: targetcli configuration lost after server reboot
Solution 1
Ensure the service is enabled before you reboot:
systemctl enable target
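A common reason the configuration disappears is that it was never written to /etc/target/saveconfig.json, which is the file target.service restores at boot. A minimal sketch of both steps (the commands are only echoed by default so the sketch is safe to run anywhere; set DRY_RUN=0 on the real target server to execute them as root):

```shell
# Sketch: save the running LIO configuration and enable restore-at-boot.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on the actual target server to really run them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run targetcli saveconfig      # writes /etc/target/saveconfig.json
run systemctl enable target   # target.service restores it on every boot
```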
Hope this helps.
Solution 2
You should be running target.service at boot to restore the LIO configuration. Also make sure that iscsid.service is running to export your LIO devices, and that tgtd is not running, since it will conflict with the other LIO daemons.
It should look something like this:
root@centos7host# systemctl | grep "target.service\|iscsi"
iscsi-shutdown.service  loaded active exited   Logout off all iSCSI sessions on shutdown
iscsi.service           loaded active exited   Login and scanning of iSCSI devices
iscsid.service          loaded active running  Open-iSCSI
iscsiuio.service        loaded active running  iSCSI UserSpace I/O driver
target.service          loaded active exited   Restore LIO kernel target configuration
iscsid.socket           loaded active running  Open-iSCSI iscsid Socket
iscsiuio.socket         loaded active running  Open-iSCSI iscsiuio Socket
You'll also want to clean up whatever you did before this, because it will get confusing otherwise. You've likely got volumes that were created outside of LIO, so when you go to manage them with targetcli later you'll have things that are not properly exported.
If possible, I'd recommend wiping the system and making a clean start. Getting the iSCSI subsystem set up correctly from the start is important, because working on it while it is running is dangerous: many of the actions you can take are potentially destructive to your users' data.
Solution 3
If you use an LVM-managed storage pool for the backstore devices, you should make certain that LVM/device-mapper discards the second-layer VGs/LVs.
By "second layer VGs/LVs" I mean the following: assume that one of the LVs below (DISK_1) contains another VG, initialized by an iSCSI client and used for services within that client. There are then two VG layers on one disk, one VG inside another.
If your LVM subsystem scans for VGs within the first-layer LVs, the newly discovered second-layer VGs, and the LVs within them, will be mapped on the target server. Since those LVs are then held by device-mapper on the target server, the lio_target modules will fail to load them as backstores.
[root@target ~]# pvs
  PV                  VG            Fmt   Attr  PSize   PFree
  /dev/mapper/mpatha  STORAGE_POOL  lvm2  a--   12.00t  2.50t
[root@target ~]# lvs
  LV      VG            Attr        LSize
  DISK_1  STORAGE_POOL  -wi-ao----  5.00t
  DISK_2  STORAGE_POOL  -wi-ao----  1.00t
  DISK_3  STORAGE_POOL  -wi-ao----  2.50t
  DISK_4  STORAGE_POOL  -wi-ao----  1.00t
[root@target ~]#
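To check whether a second-layer VG has been picked up on the target host, it can help to list which block device backs each PV; a PV that sits on an LV of the first-layer VG is the telltale sign. A sketch (device names follow the example above; the commands are only echoed by default, set DRY_RUN=0 on a real host):

```shell
# Sketch: show the PV-to-VG mapping and the device stack beneath one LV.
# A PV such as /dev/STORAGE_POOL/DISK_1 (an LV of the first-layer VG)
# would indicate a second-layer VG wrongly activated on the target server.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to run them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run pvs -o pv_name,vg_name              # which device each PV lives on
run lsblk -s /dev/STORAGE_POOL/DISK_1   # walk the stack beneath an exported LV
```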
LVM scans for VGs and LVs while the OS boots, which is why you didn't notice the issue in the first place.
You should set an LVM filter so that VGs are only scanned on the intended disks; see the lvm.conf manual for global_filter. With this configuration you will be able to discard the second-layer VGs. Below is a sample for the storage architecture above: it accepts only the multipath PVs and rejects all other block devices.
[root@target ~]# diff /etc/lvm/lvm.conf{,.orginal}
152d151
< global_filter = [ "a|/dev/mapper/mpath.|", "r|.*/|" ]
[root@target ~]#
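The stanza below shows the same filter in context (printed by a small script so nothing is modified; merge it into the devices { } section of /etc/lvm/lvm.conf by hand). After editing lvm.conf you may also need to rebuild the initramfs with dracut -f, since it carries its own copy of lvm.conf and the filter should apply during early boot as well.

```shell
# Sketch: the global_filter stanza as it would appear inside the devices
# section of /etc/lvm/lvm.conf (accept multipath PVs, reject everything else).
# This script only prints the stanza; it does not touch any file.
STANZA='devices {
    global_filter = [ "a|/dev/mapper/mpath.|", "r|.*/|" ]
}'
printf '%s\n' "$STANZA"
```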
You could simply use a script that runs "vgchange -an 2nd_layer_VG" after boot and then restores the LIO target configuration. However, I suggest using LVM's global_filter feature.
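If you do go the script route, a oneshot systemd unit is the natural place for it on CentOS 7. The unit below is only a sketch: the unit name, the VG name client_vg, and the ordering are assumptions to adapt to your system (targetctl restore, which target.service runs, will then find the backstore devices free). Enable it with systemctl enable deactivate-nested-vg.service.

```ini
# /etc/systemd/system/deactivate-nested-vg.service  (hypothetical unit name)
[Unit]
Description=Deactivate second-layer VG so LIO can claim its backstores
After=lvm2-monitor.service
Before=target.service

[Service]
Type=oneshot
# client_vg is a placeholder for your second-layer VG
ExecStart=/usr/sbin/vgchange -an client_vg
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```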
Note: Before CentOS 7/Red Hat 7 there was no problem with initializing the second-layer LVs; targetd was still able to load them as LUNs. The new linux-iscsi (LIO) target, however, fails in that situation. I didn't research the issue further.
Regards...
ahmedjaad, updated on September 18, 2022

Comments

- ahmedjaad, over 1 year ago:
Targetcli configuration got lost after the server was rebooted. I tried to restore the configuration from the backup files with

targetcli restoreconfig <backupFile>

but the configuration is not restored; the command's output message is "storageobjects or targets present, not restoring". Below are the outputs of targetcli ls and systemctl status -l target:

# targetcli ls
o- / ......................................................................... [...]
  o- backstores .............................................................. [...]
  | o- block .................................................. [Storage Objects: 0]
  | o- fileio ................................................. [Storage Objects: 0]
  | o- pscsi .................................................. [Storage Objects: 0]
  | o- ramdisk ................................................ [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 1]
  | o- iqn.2017-01.com.urgroup-tz:target ................................. [TPGs: 1]
  |   o- tpg1 ............................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................... [ACLs: 1]
  |     | o- iqn.2017-01.com.urgroup-tz:initiator ................. [Mapped LUNs: 0]
  |     o- luns .......................................................... [LUNs: 0]
  |     o- portals .................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................... [OK]
  o- loopback ......................................................... [Targets: 0]

# systemctl status -l target
● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: active (exited) since Ij 2017-03-10 17:18:43 EST; 1 day 18h ago
 Main PID: 1342 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Mac 10 17:18:43 server1 target[1342]: Could not create StorageObject tools_disk: Cannot configure StorageObject because device /dev/cl/tools_lv is already in use, skipped
Mac 10 17:18:43 server1 target[1342]: Could not create StorageObject bamboo_disk: Cannot configure StorageObject because device /dev/cl/bamboo_lv is already in use, skipped
Mac 10 17:18:43 server1 target[1342]: Could not create StorageObject metadata_disk: Cannot configure StorageObject because device /dev/cl/ovirt_domain_metadata is already in use, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching StorageObject for LUN 2, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching StorageObject for LUN 1, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching StorageObject for LUN 0, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching TPG LUN 0 for MappedLUN 0, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching TPG LUN 1 for MappedLUN 1, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching TPG LUN 2 for MappedLUN 2, skipped
Mac 10 17:18:43 server1 systemd[1]: Started Restore LIO kernel target configuration.
- ahmedjaad, about 7 years ago:
I just wish you had answered this question a bit earlier; this seems like exactly what my situation was: an iSCSI initiator used the storage blocks to create another VG. Since this did not happen on the production environment, after spending some days on this issue I decided to give up, remove the second-layer VGs/LVs and restore the targetcli configuration. I will simulate the issue and try your solution.