device mapper on RHEL6 unable to create devs for LVM logical volume
Solution 1
See devices.txt in the kernel documentation: major 202 is "Xen Virtual Block Device", major 253 is LVM / device mapper. All your dm-x devices are 253:n; they just point to 202:n.
The error message is clear:
device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
It seems there has been a change to the Xen device. Your vg_ALHPRD-oradata table cannot be loaded because it tries to access storage space on 202:16 (xvdb) which simply doesn't exist.
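A quick sanity check of the numbers in that message makes the mismatch concrete (a sketch; the sector counts come straight from the error line and from /proc/partitions in the question):
# echo $((58714112 + 41943040))   # last sector the oradata table needs
100657152
# echo $((58720256 / 2048))       # dev_size, converted from 512-byte sectors to MiB
28672
The table needs sectors reaching ~48 GiB into the device, but xvdb is only 28672 MiB (28 GiB) — exactly the size mismatch that Solution 2 tracks down on the hypervisor.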
Solution 2
Looks like multipath on the hypervisor refuses to update its maps for new LUN sizes. This LUN was originally 28 GB and was later grown to 48 GB on the storage array. The VG metadata says it's 48G, and the disc really is 48G, but multipath won't update and still thinks it's 28G.
Multipath clinging to 28G:
# multipath -l 350002acf962421ba
350002acf962421ba dm-17 3PARdata,VV
size=28G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 8:0:0:22 sdt 65:48 active undef running
|- 10:0:0:22 sdbh 67:176 active undef running
|- 7:0:0:22 sddq 71:128 active undef running
|- 9:0:0:22 sdfb 129:208 active undef running
|- 8:0:1:22 sdmz 70:432 active undef running
|- 7:0:1:22 sdoj 128:496 active undef running
|- 10:0:1:22 sdop 129:336 active undef running
|- 9:0:1:22 sdqm 132:352 active undef running
|- 7:0:2:22 sdxh 71:624 active undef running
|- 8:0:2:22 sdzy 131:704 active undef running
|- 10:0:2:22 sdaab 131:752 active undef running
|- 9:0:2:22 sdaed 66:912 active undef running
|- 7:0:3:22 sdakm 132:992 active undef running
|- 10:0:3:22 sdall 134:880 active undef running
|- 8:0:3:22 sdamx 8:1232 active undef running
`- 9:0:3:22 sdaqa 69:1248 active undef running
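One way to confirm where the stale size lives (a sketch, not from the original post; sdt is just the first path from the map above) is to compare what the kernel reports for a single path device against the multipath map itself:
# blockdev --getsize64 /dev/sdt                       # per-path size as seen by the SCSI layer
# blockdev --getsize64 /dev/mapper/350002acf962421ba  # size of the dm-multipath map
If the paths already report ~48G but the map still says 28G, only the map needs resizing; if the paths themselves still report 28G, each path needs a rescan first.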
Real disc size on storage:
# showvv ALHIDB_SNP_001
-Rsvd(MB)-- -(MB)-
Id Name Prov Type CopyOf BsId Rd -Detailed_State- Adm Snp Usr VSize
4098 ALHIDB_SNP_001 snp vcopy ALHIDB_SNP_001.ro 5650 RW normal -- -- -- 49152
Just to be sure I have the right disc (49152 MB = 48 GiB):
# showvlun -showcols VVName,VV_WWN| grep -i 0002acf962421ba
ALHIDB_SNP_001 50002ACF962421BA
And the VG thinks it's 48G:
--- Volume group ---
VG Name vg_ALHINT
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 30
VG Access read/write
VG Status exported/resizable
MAX LV 0
Cur LV 5
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 48.00 GiB
PE Size 4.00 MiB
Total PE 12287
Alloc PE / Size 12287 / 48.00 GiB
Free PE / Size 0 / 0
VG UUID qqZ9Vi-5Ob1-R6zb-YeWa-jDfg-9wc7-E2wsem
When I rescan the HBAs for new discs and reconfigure multipathing, the disc still displays 28G, so I tried this and had no change:
# multipathd -k'resize map 350002acf962421ba'
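For completeness, the usual online-resize sequence on this stack is to rescan every SCSI path device for the LUN before resizing the map. A sketch under that assumption (path names taken from the 28G map above; the loop would need to cover all sixteen paths):
# for dev in sdt sdbh sddq sdfb; do echo 1 > /sys/block/$dev/device/rescan; done
# multipathd -k'resize map 350002acf962421ba'
As described above, even after rescanning, the map here stayed at 28G.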
Versions:
lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5
Workaround
Because I could not think of another solution, I did the following. (I did not mention earlier that I run OVM 3.2 on top of Xen, so part of the workaround involves OVM.)
i) Shut down the guests on Xen via OVM.
ii) Remove the discs from the guests.
iii) Delete the LUNs from OVM.
iv) Unpresent the LUNs from the hypervisors.
v) Rescan storage in OVM.
vi) Wait for 30 mins ;)
vii) Present the discs to the hypervisors again with different LUN IDs.
viii) Rescan storage in OVM.
And now, fantastically, I see 48G discs:
# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 9:0:0:127 sdt 65:48 active undef running
|- 9:0:1:127 sdbh 67:176 active undef running
|- 9:0:2:127 sddo 71:96 active undef running
|- 9:0:3:127 sdfb 129:208 active undef running
|- 10:0:3:127 sdmz 70:432 active undef running
|- 10:0:0:127 sdoh 128:464 active undef running
|- 10:0:1:127 sdop 129:336 active undef running
|- 10:0:2:127 sdqm 132:352 active undef running
|- 7:0:1:127 sdzu 131:640 active undef running
|- 7:0:0:127 sdxh 71:624 active undef running
|- 7:0:3:127 sdaed 66:912 active undef running
|- 7:0:2:127 sdaab 131:752 active undef running
|- 8:0:0:127 sdakm 132:992 active undef running
|- 8:0:1:127 sdall 134:880 active undef running
|- 8:0:2:127 sdamx 8:1232 active undef running
`- 8:0:3:127 sdaqa 69:1248 active undef running
Question (asked by si_l)
I have a Xen guest running RHEL6, and it has a LUN presented from the Dom0. The LUN contains an LVM volume group called vg_ALHINT (INT for Integration; ALH is an abbreviation of its Oracle database name). The data is Oracle 11g. The VG was imported and activated, and udev created the /dev/mapper links for each logical volume.
However, device mapper did not create mappings for one of the logical volumes (LVs); for the LV in question it created /dev/dm-2 with a different major:minor number compared to the rest of the LVs.
# dmsetup table
vg_ALHINT-arch: 0 4300800 linear 202:16 46139392
vg0-lv6: 0 20971520 linear 202:2 30869504
vg_ALHINT-safeset2: 0 4194304 linear 202:16 35653632
vg0-lv5: 0 2097152 linear 202:2 28772352
vg_ALHINT-safeset1: 0 4186112 linear 202:16 54528000
vg0-lv4: 0 524288 linear 202:2 28248064
vg0-lv3: 0 4194304 linear 202:2 24053760
vg_ALHINT-oradata: **
vg0-lv2: 0 4194304 linear 202:2 19859456
vg0-lv1: 0 2097152 linear 202:2 17762304
vg0-lv0: 0 17760256 linear 202:2 2048
vg_ALHINT-admin: 0 4194304 linear 202:16 41945088
** You can see above that vg_ALHINT-oradata has an empty table.
# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Apr 3 13:43 control
lrwxrwxrwx 1 root root      7 Apr 3 13:43 vg0-lv0 -> ../dm-0
lrwxrwxrwx 1 root root      7 Apr 3 13:43 vg0-lv1 -> ../dm-1
lrwxrwxrwx 1 root root      7 Apr 3 14:35 vg0-lv2 -> ../dm-2
lrwxrwxrwx 1 root root      7 Apr 3 13:43 vg0-lv3 -> ../dm-3
lrwxrwxrwx 1 root root      7 Apr 3 13:43 vg0-lv4 -> ../dm-4
lrwxrwxrwx 1 root root      7 Apr 3 13:43 vg0-lv5 -> ../dm-5
lrwxrwxrwx 1 root root      7 Apr 3 13:43 vg0-lv6 -> ../dm-6
lrwxrwxrwx 1 root root      7 Apr 3 13:59 vg_ALHINT-admin -> ../dm-8
lrwxrwxrwx 1 root root      7 Apr 3 13:59 vg_ALHINT-arch -> ../dm-9
brw-rw---- 1 root disk 253, 7 Apr 3 14:37 vg_ALHINT-oradata
lrwxrwxrwx 1 root root      8 Apr 3 13:59 vg_ALHINT-safeset1 -> ../dm-10
lrwxrwxrwx 1 root root      8 Apr 3 13:59 vg_ALHINT-safeset2 -> ../dm-11
The /dev/mapper/vg_ALHINT-oradata entry was not created until I ran dmsetup mknodes, and note that it is a plain block node rather than a symlink like the others.
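For anyone retracing this, dmsetup can show the kernel's view of the maps and their major:minor numbers directly; a hypothetical check, not output from the original post:
# dmsetup ls        # every map the kernel knows, with its (major, minor) pair
# dmsetup mknodes   # (re)create any missing /dev/dm-N nodes from that list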
# cat /proc/partitions
major minor  #blocks  name
 202     0   26214400 xvda
 202     1     262144 xvda1
 202     2   25951232 xvda2
 253     0    8880128 dm-0
 253     1    1048576 dm-1
 253     2    2097152 dm-2
 253     3    2097152 dm-3
 253     4     262144 dm-4
 253     5    1048576 dm-5
 253     6   10485760 dm-6
 202    16   29360128 xvdb
 253     8    2097152 dm-8
 253     9    2150400 dm-9
 253    10    2093056 dm-10
 253    11    2097152 dm-11
dm-7 would have been vg_ALHINT-oradata, and it's missing. I ran dmsetup mknodes and dm-7 was created, yet it is still missing from /proc/partitions.
# ls -l /dev/dm-7
brw-rw---- 1 root disk 253, 7 Apr 3 13:59 /dev/dm-7
Its major and minor numbers are 253:7, yet the underlying devices that the LVs in its VG map onto are 202:nn. lvs tells me this LV was suspended (the fifth Attr character, s):
# lvs
    Logging initialised at Thu Apr 3 14:44:19 2014
    Set umask from 0022 to 0077
    Finding all logical volumes
  LV       VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0      vg0       -wi-ao----   8.47g
  lv1      vg0       -wi-ao----   1.00g
  lv2      vg0       -wi-ao----   2.00g
  lv3      vg0       -wi-ao----   2.00g
  lv4      vg0       -wi-ao---- 256.00m
  lv5      vg0       -wi-ao----   1.00g
  lv6      vg0       -wi-ao----  10.00g
  admin    vg_ALHINT -wi-a-----   2.00g
  arch     vg_ALHINT -wi-a-----   2.05g
  oradata  vg_ALHINT -wi-s-----  39.95g
  safeset1 vg_ALHINT -wi-a-----   2.00g
  safeset2 vg_ALHINT -wi-a-----   2.00g
    Wiping internal VG cache
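Device mapper's own view should confirm the suspended state; a hypothetical check, not output from the original post:
# dmsetup info vg_ALHINT-oradata | grep -i state   # expect "State: SUSPENDED" here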
The disc was created from a snapshot of our production databases. Oracle was shut down and the VG had been exported prior to the snapshot. I should note I perform this same task for hundreds of databases weekly via a script. Because this is a snapshot, I have the device-mapper table from the original, and I used it to try to recreate the missing table:
0 35651584 linear 202:16 2048
35651584 4087808 linear 202:16 50440192
39739392 2097152 linear 202:16 39847936
41836544 41943040 linear 202:16 58714112
After suspending the device with dmsetup suspend /dev/dm-7, I ran dmsetup load /dev/dm-7 $table.txt. Next I tried to resume the device:
# dmsetup resume /dev/dm-7
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Command failed
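The "Invalid argument" from the resume ioctl is the kernel rejecting the loaded table because it runs past the end of xvdb; the reason is visible in the kernel log (the same message also appears in /var/log/messages below):
# dmesg | grep 'too small for target' | tail -1
device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256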
Any ideas? I'm really lost. (Yes, I've rebooted and re-snapshotted this many times and always hit the same problem. I've even reinstalled this server and run yum update.)
// EDIT
I forgot to add that the following is the original dmsetup table from our production environment; I tried to load its oradata layout onto our integration server as noted above.
# dmsetup table
vg_ALHPRD-safeset2: 0 4194304 linear 202:32 35653632
vg_ALHPRD-safeset1: 0 4186112 linear 202:32 54528000
vg_ALHPRD-oradata: 0 35651584 linear 202:32 2048
vg_ALHPRD-oradata: 35651584 4087808 linear 202:32 50440192
vg_ALHPRD-oradata: 39739392 2097152 linear 202:32 39847936
vg_ALHPRD-oradata: 41836544 41943040 linear 202:32 58714112
vg_ALHPRD-admin: 0 4194304 linear 202:32 41945088
// EDIT
I ran vgscan --mknodes and got:
The link /dev/vg_ALHINT/oradata should have been created by udev but it was not found. Falling back to direct link creation.
# ls -l /dev/vg_ALHINT/oradata
lrwxrwxrwx 1 root root 29 Apr 3 14:50 /dev/vg_ALHINT/oradata -> /dev/mapper/vg_ALHINT-oradata
I still cannot activate it and get this error message:
device-mapper: resume ioctl on failed: Invalid argument
Unable to resume vg_ALHINT-oradata (253:7)
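Whichever way I try to activate it, LVM ends up issuing the same resume ioctl; a typical attempt, shown only as an illustration of the failure mode (hypothetical command, not from the original post):
# lvchange -ay vg_ALHINT/oradata   # fails with the same "resume ioctl ... Invalid argument"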
// EDIT
I see stack traces in /var/log/messages:
Apr 3 13:58:09 iui-alhdb01 kernel: blkfront: xvdb: barriers disabled
Apr 3 13:58:09 iui-alhdb01 kernel: xvdb: unknown partition table
Apr 3 13:59:35 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c612 02 freq_set kernel 5.242 PPM
Apr 3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c615 05 clock_sync
Apr 3 14:30:13 iui-alhdb01 kernel: device-mapper: table: 253:2: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:33:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr 3 14:33:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:33:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:33:34 iui-alhdb01 kernel: vi D 0000000000000000 0 1394 1271 0x00000084
Apr 3 14:33:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr 3 14:33:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr 3 14:33:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr 3 14:33:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:35:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr 3 14:35:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:35:34 iui-alhdb01 kernel: vi D 0000000000000000 0 1394 1271 0x00000084
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr 3 14:35:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr 3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:35:34 iui-alhdb01 kernel: INFO: task vgdisplay:1437 blocked for more than 120 seconds.
Apr 3 14:35:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:35:34 iui-alhdb01 kernel: vgdisplay D 0000000000000000 0 1437 1423 0x00000080
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007da35a18 0000000000000086 ffff88007da359d8 ffffffffa000443c
Apr 3 14:35:34 iui-alhdb01 kernel: 000000000007fff0 0000000000010000 ffff88007da359d8 ffff88007d24d380
Apr 3 14:35:34 iui-alhdb01 kernel: ffff880037c8c5f8 ffff88007da35fd8 000000000000fbc8 ffff880037c8c5f8
Apr 3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c8a9d>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c9207>] __blockdev_direct_IO+0x77/0xe0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5487>] blkdev_direct_IO+0x57/0x60
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811217bb>] generic_file_aio_read+0x6bb/0x700
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fd0>] ? blkdev_get+0x10/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fe0>] ? blkdev_open+0x0/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118617f>] ? __dentry_open+0x23f/0x360
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4841>] blkdev_aio_read+0x51/0x80
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81188e8a>] do_sync_read+0xfa/0x140
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810ec3f6>] ? rcu_process_dyntick+0xd6/0x120
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c479c>] ? block_ioctl+0x3c/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119dc12>] ? vfs_ioctl+0x22/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119ddb4>] ? do_vfs_ioctl+0x84/0x580
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81226496>] ? security_file_permission+0x16/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81189775>] vfs_read+0xb5/0x1a0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811898b1>] sys_read+0x51/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e1e4e>] ? __audit_syscall_exit+0x25e/0x290
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:39:19 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:53:57 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 15:02:42 iui-alhdb01 yum[1544]: Installed: sos-2.2-47.el6.noarch
Apr 3 15:52:29 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 15:59:08 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Comments
Hauke Laging (about 10 years ago): Please delete all these comments and edit the question instead. Add the information there.
si_l (about 10 years ago): @Hauke Laging OK, done as you requested.
Nils (about 10 years ago): Which OS is your Dom0?
si_l (about 10 years ago): Dom0 is Oracle Unbreakable Linux, which is based on RHEL5. We have circa 30 guests running on this server with identical disc setups.
si_l (about 10 years ago): No LUNs were blacklisted, because the guest can see the LUN, import the VG and mount all LVs except one.
si_l (about 10 years ago): Hi Hauke, thank you. If the device's size was changed, then loading the original table back in via dmsetup should make this work, and doing that didn't. Glad you pointed out the 202/253 numbers.
Hauke Laging (about 10 years ago): @si_l No, it should not, of course. The problem is (probably) not the DM device configuration but the underlying device (xvdb).