
Tuesday, September 20, 2011

Migrating VERITAS volumes from one storage array to another

Background:

This week I was tasked with migrating Veritas disk groups & their volumes on a specific server from one storage array to a new one, because the old array is running out of capacity.

To execute this task I split it into 3 phases -

1. Responsibilities of the Storage team

     a. The target server has to be zoned to the new storage system
     b. Provide LUNs/capacity to the server (total capacity required is 3.5 TB: 58x60G & 2x15G LUNs)

2. My responsibilities

      a. Migrate Veritas disk groups to the new storage array
            A) Label all newly added SAN disks (maybe using a scripted method) & make sure the SAN disks are visible under VxVM
            B) Initialize all those new SAN disks with VxVM
            C) Add the SAN disks to the disk group
            D) Mirror the volumes
            E) Verify that the sync has completed
            F) Verify from vxprint that a new plex has been added to the designated volumes
            G) If all is well, go ahead and detach the old plex
            H) Once the plex is disassociated from the designated volumes, delete the old plex
            I) Verify that the disk group(s) & their volumes are on the new storage array.
     b. Remove disks associated with the old storage from the Veritas configuration

3. Responsibilities of the Storage team

     a. Remove disks associated with old storage from server
     b. Take care of redundant paths

Back out -

     a. Have a full server backup handy.

Execution -

List the disk groups that need to be migrated to the new storage array.

# vxdg list
NAME STATE ID
xxxx_dg enabled,cds 1279726733.18.xxxxx
localswdg enabled,cds 1279726567.16.xxxxx
nass3_dg enabled 1074844579.1535.nassau3

Get the true picture of your VxVM configuration. Save the output of this command for future reference.

# vxprint -hrt
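For example, a dated copy of the output can serve as the "before" picture for the back-out plan (the path and file name here are just a suggestion):

```shell
# Keep a dated "before" snapshot of the full VxVM configuration
vxprint -hrt | tee /var/tmp/vxprint-hrt.`date +%Y%m%d`.out
```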

Now that Storage has attached the SAN disks to the server's HBAs, we need to label them and bring them under VERITAS control.
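If the freshly zoned LUNs do not appear in vxdisk list output yet, a device-level rescan is usually needed first. A Solaris-flavoured sketch (the exact commands vary by platform):

```shell
devfsadm -C                                  # rebuild /dev links (Solaris)
vxdctl enable                                # make VxVM rescan for new disks
vxdisk list | grep -i emc_clariion | wc -l   # how many new-array LUNs are visible
```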

List the existing disks & the newly detected disks.

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
UNIX176_0a7a auto:sliced fls02c4_nass3_dg nass3_dg online <<<< All UNIX# disks are from the old storage array
UNIX176_0a7f auto:sliced fls03c1_nass3_dg nass3_dg online
UNIX176_0a73 auto:sliced fls01c1_nass3_dg nass3_dg online
UNIX176_0a74 auto:sliced fls01c2_nass3_dg nass3_dg online
UNIX176_0a75 auto:sliced fls01c3_nass3_dg nass3_dg online
UNIX176_0a76 auto:sliced fls01c4_nass3_dg nass3_dg online
UNIX176_0a77 auto:sliced fls02c1_nass3_dg nass3_dg online
UNIX176_0a78 auto:sliced fls02c2_nass3_dg nass3_dg online
UNIX176_0a79 auto:sliced fls02c3_nass3_dg nass3_dg online
UNIX176_0a80 auto:sliced fls03c2_nass3_dg nass3_dg online
UNIX176_0a81 auto:sliced fls03c3_nass3_dg nass3_dg online
UNIX176_0a82 auto:sliced fls03c4_nass3_dg nass3_dg online
UNIX176_0dbf auto:cdsdisk xxxx_dg02 xxxxx_dg online
UNIX176_07f8 auto:cdsdisk localswdg01 localswdg online
UNIX176_07f9 auto:cdsdisk xxxx_dg01 xxxx_dg online
UNIX176_09a0 auto:sliced fls09c2_nass3_dg nass3_dg online
UNIX176_09aa auto:sliced fls11c4_nass3_dg nass3_dg online
UNIX176_09ab auto:sliced fls12c1_nass3_dg nass3_dg online
UNIX176_191c auto:cdsdisk localswdg02 localswdg online
UNIX176_0990 auto:sliced fls05c2_nass3_dg nass3_dg online
UNIX176_0991 auto:sliced fls05c3_nass3_dg nass3_dg online

[...]

disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
emc_clariion0_1704 auto - - online invalid   <<<< All emc_* disks are from the new storage array

emc_clariion0_1705 auto - - online invalid
emc_clariion0_1706 auto - - online invalid
emc_clariion0_1707 auto - - online invalid
emc_clariion0_1708 auto - - online invalid
emc_clariion0_1709 auto - - online invalid
emc_clariion0_1710 auto - - online invalid
emc_clariion0_1713 auto - - online invalid
emc_clariion0_1800 auto - - online invalid
emc_clariion0_1801 auto - - online invalid
emc_clariion0_1803 auto - - online invalid
emc_clariion0_1804 auto - - online invalid
emc_clariion0_1805 auto - - online invalid
emc_clariion0_1806 auto - - online invalid
emc_clariion0_1807 auto - - online invalid
emc_clariion0_1808 auto - - online invalid
emc_clariion0_1809 auto - - online invalid
emc_clariion0_1810 auto - - online invalid
emc_clariion0_3700 auto - - online invalid

[...]

emc_clariion0_5809 auto - - online invalid
emc_clariion0_5858 auto - - online invalid

Initialize the disks with VxVM and add them to appropriate disk group.

For a single disk -

# vxdisksetup -i emc_clariion0_5858 format=sliced

For multiple disks -

# vxdisk list | awk '{print $1}' | grep -i emc > /tmp/EMC_disks
# for d in `cat /tmp/EMC_disks` ; do vxdisksetup -i $d format=sliced; done

Add disks to disk group

#!/bin/sh
# e.g. vxdg -g nass3_dg adddisk nass3_dg01=emc_clariion0_1704
DG=nass3_dg
i=0
# New EMC disks are not yet in any group, so the DISK column ($3) is "-"
for d in `vxdisk list | grep -i emc | awk '$3 == "-" {print $1}'`
do
    i=`expr $i + 1`
    # Generate sequential disk media names: nass3_dg01, nass3_dg02, ...
    vxdg -g $DG adddisk `printf "%s%02d" $DG $i`=$d
done

Or you can use the menu-based vxdiskadm command to perform this activity.

Now you should see something like -

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
UNIX176_0a7a auto:sliced fls02c4_nass3_dg nass3_dg online
UNIX176_0a7f auto:sliced fls03c1_nass3_dg nass3_dg online
UNIX176_0a73 auto:sliced fls01c1_nass3_dg nass3_dg online
UNIX176_0a74 auto:sliced fls01c2_nass3_dg nass3_dg online
UNIX176_0a75 auto:sliced fls01c3_nass3_dg nass3_dg online
UNIX176_0a76 auto:sliced fls01c4_nass3_dg nass3_dg online
UNIX176_0a77 auto:sliced fls02c1_nass3_dg nass3_dg online
UNIX176_0a78 auto:sliced fls02c2_nass3_dg nass3_dg online
UNIX176_0a79 auto:sliced fls02c3_nass3_dg nass3_dg online
UNIX176_0a80 auto:sliced fls03c2_nass3_dg nass3_dg online
UNIX176_0a81 auto:sliced fls03c3_nass3_dg nass3_dg online
UNIX176_0a82 auto:sliced fls03c4_nass3_dg nass3_dg online
UNIX176_0dbf auto:cdsdisk xxxx_dg02 xxxxx_dg online
UNIX176_07f8 auto:cdsdisk localswdg01 localswdg online
UNIX176_07f9 auto:cdsdisk xxxx_dg01 xxxx_dg online
UNIX176_09a0 auto:sliced fls09c2_nass3_dg nass3_dg online
UNIX176_09aa auto:sliced fls11c4_nass3_dg nass3_dg online
UNIX176_09ab auto:sliced fls12c1_nass3_dg nass3_dg online
UNIX176_191c auto:cdsdisk localswdg02 localswdg online
UNIX176_0990 auto:sliced fls05c2_nass3_dg nass3_dg online
UNIX176_0991 auto:sliced fls05c3_nass3_dg nass3_dg online

[...]

disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
emc_clariion0_1704 auto:sliced nass3_dg01 nass3_dg online
emc_clariion0_1705 auto:sliced nass3_dg02 nass3_dg online
emc_clariion0_1706 auto:sliced nass3_dg03 nass3_dg online
emc_clariion0_1707 auto:sliced nass3_dg04 nass3_dg online
emc_clariion0_1708 auto:sliced nass3_dg05 nass3_dg online
emc_clariion0_1709 auto:sliced nass3_dg06 nass3_dg online
emc_clariion0_1710 auto:sliced nass3_dg07 nass3_dg online
emc_clariion0_1713 auto:sliced nass3_dg08 nass3_dg online
emc_clariion0_1800 auto:sliced nass3_dg09 nass3_dg online
emc_clariion0_1801 auto:sliced nass3_dg10 nass3_dg online
emc_clariion0_1803 auto:sliced nass3_dg11 nass3_dg online
emc_clariion0_1804 auto:sliced nass3_dg12 nass3_dg online
emc_clariion0_1805 auto:sliced nass3_dg13 nass3_dg online
emc_clariion0_1806 auto:sliced nass3_dg14 nass3_dg online
emc_clariion0_1807 auto:sliced nass3_dg15 nass3_dg online
emc_clariion0_1808 auto:sliced nass3_dg16 nass3_dg online
emc_clariion0_1809 auto:sliced nass3_dg17 nass3_dg online
emc_clariion0_1810 auto:sliced nass3_dg18 nass3_dg online
emc_clariion0_3700 auto:sliced nass3_dg19 nass3_dg online

[...]

emc_clariion0_5809 auto:sliced nass3_dg59 nass3_dg online

Now that we have added the new disks to the appropriate disk groups, the next task is to mirror the volumes.

NOTE: Deciding how many disks are required to mirror a particular volume is easy: use as many disks as it takes to meet the current size of the volume.

# vxassist -g nass3_dg mirror db_TESTDB_vol alloc=nass3_dg05,nass3_dg02,nass3_dg06,nass3_dg07,nass3_dg08,nass3_dg09,nass3_dg10,nass3_dg11,nass3_dg12,nass3_dg13,nass3_dg14,nass3_dg15,nass3_dg16,nass3_dg17,nass3_dg18,nass3_dg19,nass3_dg20,nass3_dg21,nass3_dg22,nass3_dg23,nass3_dg24,nass3_dg25,nass3_dg26,nass3_dg27,nass3_dg28,nass3_dg29,nass3_dg30,nass3_dg31,nass3_dg32,nass3_dg33,nass3_dg34,nass3_dg35,nass3_dg36,nass3_dg37,nass3_dg38,nass3_dg39,nass3_dg40,nass3_dg41,nass3_dg42,nass3_dg43,nass3_dg44,nass3_dg45,nass3_dg46,nass3_dg47,nass3_dg48,nass3_dg49,nass3_dg50,nass3_dg51,nass3_dg52,nass3_dg53,nass3_dg54,nass3_dg55
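Typing the alloc= list by hand for every volume is error-prone. A sketch that builds the list of new-array disk media names and mirrors every volume in the group (the awk fields match the vxdisk/vxprint output shown above; review the generated lists before running):

```shell
DG=nass3_dg
# Comma-separated disk media names of the new EMC LUNs in this group
NEW=`vxdisk list | awk -v dg=$DG '$1 ~ /^emc/ && $4 == dg {print $3}' | paste -s -d, -`
# Volume records in vxprint output start with "v "; field 2 is the volume name
for v in `vxprint -g $DG -v | awk '$1 == "v" {print $2}'`
do
    vxassist -g $DG mirror $v alloc=$NEW
done
```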

Repeat the previous step for the rest of the disk groups and their volumes.

Check sync progress using -

# vxtask -l list
Task: 5912 RUNNING
Type: ATCOPY
Operation: PLXATT Vol db_TESTDB_vol Plex db_TESTDB_vol-02 Dg nass3_dg
Started: Fri Sep 16 16:05:24 2011
Throttle: 0
Progress: 4.27% 268713984 of 6291456000 Blocks
Work time: 29 minutes, 30 seconds (11:01:11 remaining)
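The wait for resync can also be scripted instead of rerunning vxtask by hand (the 5-minute polling interval is arbitrary):

```shell
# Block until no VxVM task (plex attach / resync) is left running
while vxtask -l list | grep -q RUNNING
do
    sleep 300
done
echo "All sync tasks completed."
```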

Verify from vxprint; you should see a new plex added to the db_TESTDB_vol volume.

# vxprint -qthg nass3_dg db_TESTDB_vol

If everything looks good, then detach/disassociate & remove the old plex; in short, break the mirror. (Before doing so, get the application owner's consent.)

# vxmend -g nass3_dg off db_TESTDB_vol-01
# vxplex -g nass3_dg -o rm dis db_TESTDB_vol-01

Repeat the previous two steps for the rest of the plexes.
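Assuming every original plex kept the default "-01" suffix (verify against your saved vxprint -hrt output first; the volume/plex names here follow this example), the mirror break can be looped:

```shell
DG=nass3_dg
for v in `vxprint -g $DG -v | awk '$1 == "v" {print $2}'`
do
    vxmend -g $DG off ${v}-01          # offline the old plex
    vxplex -g $DG -o rm dis ${v}-01    # disassociate and remove it
done
```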

Well, by this time we can say we are done, so verify that the disk group(s) & their volumes are on the new storage array.

# vxprint -qthg nass3_dg db_TESTDB_vol
v db_TESTDB_vol - ENABLED ACTIVE 6291456000 SELECT db_TESTDB_vol-02 fsgen
pl db_TESTDB_vol-02 db_TESTDB_vol ENABLED ACTIVE 6291456000 STRIPE 3/128 RW
sd nass3_dg05-01 db_TESTDB_vol-02 nass3_dg05 0 125754880 0/0 emc_clariion0_1708 ENA
sd nass3_dg08-01 db_TESTDB_vol-02 nass3_dg08 0 125754880 0/125754880 emc_clariion0_1713 ENA
sd nass3_dg11-01 db_TESTDB_vol-02 nass3_dg11 0 125754880 0/251509760 emc_clariion0_1803 ENA
sd nass3_dg14-01 db_TESTDB_vol-02 nass3_dg14 0 125754880 0/377264640 emc_clariion0_1806 ENA
sd nass3_dg17-01 db_TESTDB_vol-02 nass3_dg17 0 125754880 0/503019520 emc_clariion0_1809 ENA
sd nass3_dg20-01 db_TESTDB_vol-02 nass3_dg20 0 125754880 0/628774400 emc_clariion0_3702 ENA
sd nass3_dg23-01 db_TESTDB_vol-02 nass3_dg23 0 125754880 0/754529280 emc_clariion0_3705 ENA
sd nass3_dg26-01 db_TESTDB_vol-02 nass3_dg26 0 125754880 0/880284160 emc_clariion0_3708 ENA
sd nass3_dg29-01 db_TESTDB_vol-02 nass3_dg29 0 125754880 0/1006039040 emc_clariion0_3800 ENA
sd nass3_dg32-01 db_TESTDB_vol-02 nass3_dg32 0 125754880 0/1131793920 emc_clariion0_3803 ENA

[...]

sd nass3_dg43-01 db_TESTDB_vol-02 nass3_dg43 0 125754880 2/1594132480 emc_clariion0_5704 ENA
sd nass3_dg46-01 db_TESTDB_vol-02 nass3_dg46 0 125754880 2/1719887360 emc_clariion0_5707 ENA
sd nass3_dg50-01 db_TESTDB_vol-02 nass3_dg50 0 125754880 2/1845642240 emc_clariion0_5800 ENA
sd nass3_dg53-01 db_TESTDB_vol-02 nass3_dg53 0 125754880 2/1971397120 emc_clariion0_5803 ENA

Yes, the volume has been moved to the new storage.

Now we are ready to remove the disks associated with the old storage from the Veritas configuration.

# vxdg -g nass3_dg rmdisk fls02c4_nass3_dg
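Removing 60-odd disks one by one is tedious; it can be looped over every old-array (UNIX176_*) device still in the group. Running vxdiskunsetup afterwards clears the VxVM private region from each LUN (a sketch; confirm the device prefix matches your old enclosure before running):

```shell
DG=nass3_dg
# $1 = device name, $3 = disk media name, $4 = disk group
vxdisk list | awk -v dg=$DG '$1 ~ /^UNIX176/ && $4 == dg {print $1, $3}' |
while read dev dm
do
    vxdg -g $DG rmdisk $dm     # drop the disk from the disk group
    vxdiskunsetup $dev         # wipe the VxVM private region from the LUN
done
```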

Ask the storage team to detach/remove the old disks permanently from the server.

This is the overall procedure to migrate your disk groups & their volumes from one storage array to another.

Comments:

  1. How do you increase a volume and add disks while keeping the existing layout (9-way stripe) in VxVM?

  2. Hi,

    I normally follow a standard of creating stripe sets of disks. E.g., if I have a 4-way stripe volume then I create a stripe set of 4 disks, like disk1_c0, disk1_c1, disk1_c2 & disk1_c3, and likewise disk2_c0, disk2_c1, disk2_c2, disk2_c3 .... disk${n}_c0 ... disk${n}_c3.

    Then during filesystem expansion I use one stripe set, so that I'll have contiguous disks allocated to the volume. It helps improve overall IO performance.

    e.g. -

    # vxresize -b -F vxvm -g dg1 vol1 +100G disk2_c0 disk2_c1 disk2_c2 disk2_c3

  3. It is really helpful; my scenario is a little different.

    Need to migrate data/apps from Site 1 to Site 2 along with new servers
    Site 1- clustered node with legacy HP storage need to migrate on new EMC storage
    server A
    Server B
    VxVM and VCS 5.1

    Site 2 - new T5240 servers; they are also clustered.
    Server C
    Server D
    VxVM and VCS 5.1

    I think the new EMC storage should be presented to all 4 servers (A, B, C, D). Then I can carve volumes using the new EMC disks on servers A & B and mirror them with the old volumes. Since the disks are shared across the board (servers A, B, C, D), I should be able to see my volumes on servers C and D (correct me here, I am a bit confused about how that would be possible). To bring them under cluster control, I can create the volume resources on servers C & D, and finally I will break the mirror from servers A & B for those volumes and decommission servers A and B.

  4. Hi. I am a Storage Foundation specialist with Symantec.
    If you have not tried it recently, I would recommend using Veritas Operations Manager for all storage migrations. VOM contains several very powerful wizards to help with complicated admin tasks. In addition to active management of all SF products, it is also a best-of-breed Storage Resource Management system, mapping data from the database tablespace down to the physical disk on your array.
    And, it is available to anyone with a valid SF license.

    Clifford Barcliff
    clifford_barcliff@symantec.com
    www.sort.symantec.com

  5. I would like to thank you for providing a simple model, which we have used at SAP in the US to migrate LUNs from old EMC storage to new storage. Your instructions and examples worked perfectly for us. Thanks again,

    greg pearson
    greg.pearson@sap.com

  6. Hi Nilesh,
    Thanks for sharing a good article. I would suggest adding one more point (at the end); I think you've missed it.

    Once you remove the disks from VxVM control, you have to remove them from the OS configuration as well before the SAN team unzones those disks; otherwise you will see I/O errors in the system logs for those disks.

    Rgds,
    Arunabh

  7. Would it be possible to update the part where you offline the old plex? Show the new/old plex and then show the command offlining the old plex.
