Monday, May 7, 2012

Upgrade Solaris 10 Update 3 to Solaris 10 Update 9 + Patch Upgrade + VxVM 4.1 = A Lengthy but Curious Task

Background –

Recently I was asked to upgrade and patch a system running Solaris 10 U3 with VxVM 4.1 to Solaris 10 U9 with the latest Update 9 kernel patch. The OS boot disks are under VxVM control (encapsulated root disk), so I planned to use Live Upgrade along with the Symantec-provided Live Upgrade wrapper scripts, “vxlustart” and “vxlufinish”. When I actually ran vxlustart it failed: someone in the past had created a few extra volumes in rootdg without ever un-encapsulating it, and between that and the way the partitioning had been done, the script could not work out a proper disk slice layout. On top of that, the slicing at the disk level was confusing (one of the OS partitions had no corresponding slice on the root mirror disk), which I strongly suspect also kept vxlustart from succeeding. Eventually I was left with only one choice: un-encapsulate rootdg using the vxunroot command and then perform the patching along with the OS upgrade.

For me this server was like a patient who had lost all hope of ever growing into a healthy body; it was a breathing dead body that could breathe (in server language, “run”) but couldn’t do anything useful. Being a UNIX/Solaris system surgeon, I expected to operate on him and get him back into shape. Believe me, I love my servers (patients), and yes, of course, I was able to give him his life back after doing a few things I had never done before. The server is now running as a stunning, fully updated DB server.

Execution –

Below is a comprehensive overview of the steps taken –

- Backed up all added/extra volumes and removed them from rootdg
- Deported all non-rootdg disk groups before executing vxunroot
- Unencapsulated the root disk using the vxunroot script
- Rebooted the system and came up on UFS slice-based filesystems
- Created an alternate boot environment using Live Upgrade and patched/upgraded the ABE to Solaris 10 Update 9
- Booted off the updated and patched ABE; the system is now on Update 9 and at the latest patch level
- After patching/upgrading, added the upgraded boot disk to rootdg (creation of rootdg with a single disk, a.k.a. encapsulation)
- Re-created the extra volumes while rootdg was not yet mirrored
- Restored the data of those extra volumes
- Imported all previously deported data disk group(s) and started/mounted the volumes in them
- After application owner confirmation, deleted the PBE (Primary Boot Environment), freeing up that disk
- Added the freed disk to rootdg as the mirror disk and re-mirrored the volumes

Point to ponder – Please take a full system backup before acting on the procedure given below.
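
How the extra volumes were backed up isn’t shown in the log; below is a minimal sketch of mine, assuming a staging area such as /var/tmp/volbackup (both that path and the use of tar are illustrative; a proper full system backup to tape or a backup server is still the safer bet):

root@xxxxxxxx# mkdir -p /var/tmp/volbackup
root@xxxxxxxx# for fs in /opt/openv /opt/oracle /opt/pa /var/crash
> do
> name=`echo $fs | tr '/' '_'`
> (cd $fs && tar cf /var/tmp/volbackup/$name.tar .)
> done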

NOTE: Just to refresh the basics before proceeding: according to the fmthard manual page, tag and flag numbers are defined as follows:

tag description
0x00 UNASSIGNED
0x01 BOOT
0x02 ROOT
0x03 SWAP
0x04 USR
0x05 BACKUP
0x06 STAND
0x07 VAR
0x08 HOME


flag description
0x00 MOUNTABLE
0x01 UNMOUNTABLE
0x10 READ-ONLY

Gather as much information about the server as possible.

- Take a snapshot of the VTOC for both disks that are part of rootdg.

root@xxxxxxxx# prtvtoc -s /dev/rdsk/c1t0d0s2
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 31464192 16790400 48254591
1 3 01 0 31464192 31464191
2 5 00 0 286698624 286698623
3 14 01 0 286698624 286698623
4 15 01 286657920 40704 286698623
5 7 00 48254592 16790400 65044991
6 0 00 65044992 16790400 81835391


root@xxxxxxxx# prtvtoc -s /dev/rdsk/c1t2d0s2
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 31504896 16790400 48295295
1 3 01 40704 31464192 31504895
2 5 00 0 286698624 286698623
3 15 01 20352 20352 40703
4 14 01 40704 286657920 286698623
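
A small addition of mine: saving each VTOC to a file makes it trivial to put a label back with fmthard (the tag/flag table above comes from the fmthard man page) if anything goes wrong later. The filenames are arbitrary:

root@xxxxxxxx# prtvtoc -s /dev/rdsk/c1t0d0s2 > /var/tmp/vtoc.c1t0d0s2
root@xxxxxxxx# prtvtoc -s /dev/rdsk/c1t2d0s2 > /var/tmp/vtoc.c1t2d0s2

And only if a label ever needs to be restored:

root@xxxxxxxx# fmthard -s /var/tmp/vtoc.c1t0d0s2 /dev/rdsk/c1t0d0s2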

- Take a snapshot of the vfstab file.

root@xxxxxxxx# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/vx/dsk/bootdg/swapvol - - swap - no nologging
/dev/vx/dsk/bootdg/swapvol2 - - swap - no nologging
/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no nologging
/dev/vx/dsk/bootdg/var /dev/vx/rdsk/bootdg/var /var ufs 1 no nologging,nosuid
/dev/vx/dsk/bootdg/opt /dev/vx/rdsk/bootdg/opt /opt ufs 2 yes nologging
/dev/vx/dsk/bootdg/crash /dev/vx/rdsk/bootdg/crash /var/crash ufs 2 yes nologging
/dev/vx/dsk/bootdg/oracle /dev/vx/rdsk/bootdg/oracle /opt/oracle vxfs 2 yes -
/dev/vx/dsk/bootdg/opt_openv /dev/vx/rdsk/bootdg/opt_openv /opt/openv vxfs 2 yes -
/dev/vx/dsk/bootdg/pa /dev/vx/rdsk/bootdg/pa /opt/pa vxfs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes nosuid
/dev/vx/dsk/adbm/oracle /dev/vx/rdsk/adbm/oracle /adbm/oracle vxfs 3 yes -
/dev/vx/dsk/adbm/devel /dev/vx/rdsk/adbm/devel /adbm/devel vxfs 3 yes -
/dev/vx/dsk/adbm/adbm_db /dev/vx/rdsk/adbm/adbm_db /adbm/db vxfs 3 yes -
/dev/vx/dsk/adbm/dumps /dev/vx/rdsk/adbm/dumps /adbm/dumps vxfs 3 yes -
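
As a small extra safeguard (not in the original log), it doesn’t hurt to keep a plain-file copy of vfstab alongside the other snapshots in /var/tmp; the filename below is just an example:

root@xxxxxxxx# cp -p /etc/vfstab /var/tmp/vfstab.before_vxunroot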

- Take a snapshot of the pre-VxVM vfstab file.

root@xxxxxxxx# cat /etc/vfstab.prevm
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t0d0s1 - - swap - no -
/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no nologging
/dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /var ufs 1 no nologging
/dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /opt ufs 2 yes nologging
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

- Take a snapshot of the rootdg configuration.

root@xxxxxxxx# vxprint -qht -g rootdg > /var/tmp/rootdg.vxprint
root@xxxxxxxx# cat /var/tmp/rootdg.vxprint
dg rootdg default default 72000 1175781890.8.xxxxxxxx

dm hot01 Disk_4 auto 20095 286657920 SPARE
dm rootdisk Disk_0 auto 40703 286657920 -
dm rootmirror Disk_2 auto 20095 286657920 -

v crash - ENABLED ACTIVE 31457280 SELECT - fsgen
pl crash-01 crash ENABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-05 crash-01 rootdisk 81835391 31464192 0 Disk_0 ENA
pl crash-02 crash ENABLED ACTIVE 31464192 CONCAT - RW
sd rootmirror-05 crash-02 rootmirror 81835392 31464192 0 Disk_2 ENA

v opt - ENABLED ACTIVE 16790400 ROUND - fsgen
pl opt-01 opt ENABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-03 opt-01 rootdisk 65044991 16790400 0 Disk_0 ENA
pl opt-02 opt ENABLED ACTIVE 16790400 CONCAT - RW
sd rootmirror-04 opt-02 rootmirror 65044992 16790400 0 Disk_2 ENA

v opt_openv - ENABLED ACTIVE 4194304 SELECT - fsgen
pl opt_openv-01 opt_openv ENABLED ACTIVE 4212864 CONCAT - RW
sd rootdisk-07 opt_openv-01 rootdisk 134282495 4212864 0 Disk_0 ENA
pl opt_openv-02 opt_openv ENABLED ACTIVE 4212864 CONCAT - RW
sd rootmirror-07 opt_openv-02 rootmirror 134282496 4212864 0 Disk_2 ENA

v oracle - ENABLED ACTIVE 20971520 SELECT - fsgen
pl oracle-01 oracle ENABLED ACTIVE 20982912 CONCAT - RW
sd rootdisk-06 oracle-01 rootdisk 113299583 20982912 0 Disk_0 ENA
pl oracle-02 oracle ENABLED ACTIVE 20982912 CONCAT - RW
sd rootmirror-06 oracle-02 rootmirror 113299584 20982912 0 Disk_2 ENA

v pa - ENABLED ACTIVE 8388608 SELECT - fsgen
pl pa-01 pa ENABLED ACTIVE 8405376 CONCAT - RW
sd rootdisk-09 pa-01 rootdisk 169226879 8405376 0 Disk_0 ENA
pl pa-02 pa ENABLED ACTIVE 8405376 CONCAT - RW
sd rootmirror-09 pa-02 rootmirror 169226880 8405376 0 Disk_2 ENA

v rootvol - ENABLED ACTIVE 16790400 ROUND - root
pl rootvol-01 rootvol ENABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-02 rootvol-01 rootdisk 31464191 16790400 0 Disk_0 ENA
pl rootvol-02 rootvol ENABLED ACTIVE 16790400 CONCAT - RW
sd rootmirror-02 rootvol-02 rootmirror 31464192 16790400 0 Disk_2 ENA

v swapvol - ENABLED ACTIVE 31464192 ROUND - swap
pl swapvol-01 swapvol ENABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-B0 swapvol-01 rootdisk 286657919 1 0 Disk_0 ENA
sd rootdisk-01 swapvol-01 rootdisk 0 31464191 1 Disk_0 ENA
pl swapvol-02 swapvol ENABLED ACTIVE 31464192 CONCAT - RW
sd rootmirror-01 swapvol-02 rootmirror 0 31464192 0 Disk_2 ENA

v swapvol2 - ENABLED ACTIVE 30720000 SELECT - fsgen
pl swapvol2-01 swapvol2 ENABLED ACTIVE 30731520 CONCAT - RW
sd rootdisk-08 swapvol2-01 rootdisk 138495359 30731520 0 Disk_0 ENA
pl swapvol2-02 swapvol2 ENABLED ACTIVE 30731520 CONCAT - RW
sd rootmirror-08 swapvol2-02 rootmirror 138495360 30731520 0 Disk_2 ENA

v var - ENABLED ACTIVE 16790400 ROUND - fsgen
pl var-01 var ENABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-04 var-01 rootdisk 48254591 16790400 0 Disk_0 ENA
pl var-02 var ENABLED ACTIVE 16790400 CONCAT - RW
sd rootmirror-03 var-02 rootmirror 48254592 16790400 0 Disk_2 ENA

root@xxxxxxxx# vxprint -v -g rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v crash fsgen ENABLED 31457280 - ACTIVE - -
v opt fsgen ENABLED 16790400 - ACTIVE - -
v opt_openv fsgen ENABLED 4194304 - ACTIVE - -
v oracle fsgen ENABLED 20971520 - ACTIVE - -
v pa fsgen ENABLED 8388608 - ACTIVE - -
v rootvol root ENABLED 16790400 - ACTIVE - -
v swapvol swap ENABLED 31464192 - ACTIVE - -
v swapvol2 fsgen ENABLED 30720000 - ACTIVE - -
v var fsgen ENABLED 16790400 - ACTIVE - -

- Take a snapshot of all disks on the system.

root@xxxxxxxx# vxdisk -eo alldgs list > /var/tmp/vxdisk_list.output
root@xxxxxxxx# cat /var/tmp/vxdisk_list.output
DEVICE TYPE DISK GROUP STATUS OS_NATIVE_NAME
Disk_0 auto rootdisk rootdg online c1t0d0s2
Disk_2 auto rootmirror rootdg online c1t2d0s2
Disk_4 auto hot01 rootdg online c1t1d0s2
Disk_5 auto - - online c1t3d0s2
clar044_0 auto clar044_0 adbm online c4t5006016939A014F9d4s2
clar044_1 auto clar044_1 adbm online c4t5006016939A014F9d5s2
clar044_2 auto clar044_2 adbm online c4t5006016939A014F9d2s2
clar044_3 auto clar044_3 adbm online c4t5006016939A014F9d7s2
clar044_4 auto clar044_4 adbm online c4t5006016939A014F9d9s2
clar044_5 auto clar044_5 adbm online c4t5006016939A014F9d3s2
clar044_6 auto clar044_6 adbm online c4t5006016939A014F9d12s2
clar044_7 auto clar044_7 adbm online c4t5006016939A014F9d10s2
clar044_8 auto clar044_8 adbm online c4t5006016939A014F9d8s2
clar044_9 auto clar044_9 adbm online c4t5006016939A014F9d6s2
clar044_10 auto clar044_10 adbm online c4t5006016939A014F9d0s2
clar044_11 auto clar044_11 adbm online c4t5006016939A014F9d1s2
clar044_12 auto clar044_12 adbm online c4t5006016939A014F9d11s2
clar045_0 auto clar045_0 adbm online c4t5006016939A01AE9d4s2
clar045_1 auto clar045_1 adbm online c4t5006016939A01AE9d3s2
clar045_2 auto clar045_2 adbm online c4t5006016939A01AE9d7s2
clar045_3 auto clar045_3 adbm online c4t5006016939A01AE9d6s2
clar045_4 auto clar045_4 adbm online c4t5006016939A01AE9d11s2
clar045_5 auto clar045_5 adbm online c4t5006016939A01AE9d0s2
clar045_6 auto clar045_6 adbm online c4t5006016939A01AE9d2s2
clar045_7 auto clar045_7 adbm online c4t5006016939A01AE9d12s2
clar045_8 auto clar045_8 adbm online c4t5006016939A01AE9d8s2
clar045_9 auto clar045_9 adbm online c4t5006016939A01AE9d1s2
clar045_10 auto clar045_10 adbm online c4t5006016939A01AE9d10s2
clar045_11 auto clar045_11 adbm online c4t5006016939A01AE9d9s2
clar045_12 auto clar045_12 adbm online c4t5006016939A01AE9d5s2

- Detach all the plexes associated with the 'rootmirror' disk (if applicable) and verify that the rootmirror plexes have been detached.

root@xxxxxxxx# vxprint -qhtg rootdg -s | grep -i rootmirror | awk '{print $3}' > /var/tmp/subs.plex
root@xxxxxxxx# cat /var/tmp/subs.plex
swapvol-02
rootvol-02
var-02
opt-02
crash-02
oracle-02
opt_openv-02
swapvol2-02
pa-02

root@xxxxxxxx# for x in `cat /var/tmp/subs.plex`
> do
> vxplex -g rootdg dis $x
> vxprint -qhtg rootdg -p $x
> done
pl swapvol-02 - DISABLED - 31464192 CONCAT - RW
sd rootmirror-01 swapvol-02 rootmirror 0 31464192 0 Disk_2 ENA
pl rootvol-02 - DISABLED - 16790400 CONCAT - RW
sd rootmirror-02 rootvol-02 rootmirror 31464192 16790400 0 Disk_2 ENA
pl var-02 - DISABLED - 16790400 CONCAT - RW
sd rootmirror-03 var-02 rootmirror 48254592 16790400 0 Disk_2 ENA
pl opt-02 - DISABLED - 16790400 CONCAT - RW
sd rootmirror-04 opt-02 rootmirror 65044992 16790400 0 Disk_2 ENA
pl crash-02 - DISABLED - 31464192 CONCAT - RW
sd rootmirror-05 crash-02 rootmirror 81835392 31464192 0 Disk_2 ENA
pl oracle-02 - DISABLED IOFAIL 20982912 CONCAT - RW
sd rootmirror-06 oracle-02 rootmirror 113299584 20982912 0 Disk_2 ENA
pl opt_openv-02 - DISABLED IOFAIL 4212864 CONCAT - RW
sd rootmirror-07 opt_openv-02 rootmirror 134282496 4212864 0 Disk_2 ENA
pl swapvol2-02 - DISABLED - 30731520 CONCAT - RW
sd rootmirror-08 swapvol2-02 rootmirror 138495360 30731520 0 Disk_2 ENA
pl pa-02 - DISABLED IOFAIL 8405376 CONCAT - RW
sd rootmirror-09 pa-02 rootmirror 169226880 8405376 0 Disk_2 ENA

- Determine a disk with a valid configuration copy

root@xxxxxxxx# vxdg list rootdg > /var/tmp/vxdg.list.rootdg.ouput
root@xxxxxxxx# cat /var/tmp/vxdg.list.rootdg.ouput
Group: rootdg
dgid: 1175781890.8.xxxxxxxx
import-id: 0.1
flags:
version: 120
alignment: 512 (bytes)
ssb: on
detach-policy: global
dg-fail-policy: dgdisable
copies: nconfig=default nlog=default
config: seqno=0.3810 permlen=14803 free=14771 templen=17 loglen=2243
config disk Disk_0 copy 1 len=30015 state=clean online
config disk Disk_2 copy 1 len=14803 state=clean online
config disk Disk_4 copy 1 len=14803 state=clean online
log disk Disk_0 copy 1 len=4547
log disk Disk_2 copy 1 len=2243
log disk Disk_4 copy 1 len=2243

Check the location of the private region on a disk with a valid configuration copy (as shown above).

NOTE: For non-EFI disks of type sliced, VxVM usually configures partition s3 as the private region, s4 as the public region, and s2 as the entire physical disk. An exception is an encapsulated root disk, on which s3 is usually configured as the public region and s4 as the private region.

In our case, slice s3 holds the private region.

root@xxxxxxxx# prtvtoc -s /dev/rdsk/c1t2d0s2 > /var/tmp/prtvtoc.c0t2d0s2.output
root@xxxxxxxx# cat /var/tmp/prtvtoc.c0t2d0s2.output
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 31504896 16790400 48295295
1 3 01 40704 31464192 31504895
2 5 00 0 286698624 286698623
3 15 01 20352 20352 40703
4 14 01 40704 286657920 286698623
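
As an additional cross-check (my addition, not part of the original procedure), the detailed vxdisk listing for the disk also reports which slices carry the public and private regions; on this VxVM version the per-disk output normally contains pubpaths/privpaths lines:

root@xxxxxxxx# vxdisk list Disk_2 | egrep 'pubpaths|privpaths'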

- Record the rootdg disk group configuration via the 'vxprivutil' command

root@xxxxxxxx# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c1t2d0s3 > /var/tmp/rootdg.conf

To verify the vxprivutil dump, use the following command:

root@xxxxxxxx# cat /var/tmp/rootdg.conf | vxprint -D - -qht
Disk group: rootdg
dg rootdg default default 72000 1175781890.8.xxxxxxxx

dm hot01 - - - - SPARE
dm rootdisk - - - - -
dm rootmirror - - - - -

pl crash-02 - DISABLED - 31464192 CONCAT - RW
sd rootmirror-05 crash-02 rootmirror 81835392 31464192 0 - DIS

pl opt-02 - DISABLED - 16790400 CONCAT - RW
sd rootmirror-04 opt-02 rootmirror 65044992 16790400 0 - DIS

pl opt_openv-02 - DISABLED IOFAIL 4212864 CONCAT - RW
sd rootmirror-07 opt_openv-02 rootmirror 134282496 4212864 0 - DIS

pl oracle-02 - DISABLED IOFAIL 20982912 CONCAT - RW
sd rootmirror-06 oracle-02 rootmirror 113299584 20982912 0 - DIS

pl pa-02 - DISABLED IOFAIL 8405376 CONCAT - RW
sd rootmirror-09 pa-02 rootmirror 169226880 8405376 0 - DIS

pl rootvol-02 - DISABLED - 16790400 CONCAT - RW
sd rootmirror-02 rootvol-02 rootmirror 31464192 16790400 0 - DIS

pl swapvol-02 - DISABLED - 31464192 CONCAT - RW
sd rootmirror-01 swapvol-02 rootmirror 0 31464192 0 - DIS

pl swapvol2-02 - DISABLED - 30731520 CONCAT - RW
sd rootmirror-08 swapvol2-02 rootmirror 138495360 30731520 0 - DIS

pl var-02 - DISABLED - 16790400 CONCAT - RW
sd rootmirror-03 var-02 rootmirror 48254592 16790400 0 - DIS

v crash - DISABLED ACTIVE 31457280 SELECT - fsgen
pl crash-01 crash DISABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-05 crash-01 rootdisk 81835391 31464192 0 - DIS

v opt - DISABLED ACTIVE 16790400 ROUND - fsgen
pl opt-01 opt DISABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-03 opt-01 rootdisk 65044991 16790400 0 - DIS

v opt_openv - DISABLED ACTIVE 4194304 SELECT - fsgen
pl opt_openv-01 opt_openv DISABLED ACTIVE 4212864 CONCAT - RW
sd rootdisk-07 opt_openv-01 rootdisk 134282495 4212864 0 - DIS

v oracle - DISABLED ACTIVE 20971520 SELECT - fsgen
pl oracle-01 oracle DISABLED ACTIVE 20982912 CONCAT - RW
sd rootdisk-06 oracle-01 rootdisk 113299583 20982912 0 - DIS

v pa - DISABLED ACTIVE 8388608 SELECT - fsgen
pl pa-01 pa DISABLED ACTIVE 8405376 CONCAT - RW
sd rootdisk-09 pa-01 rootdisk 169226879 8405376 0 - DIS

v rootvol - DISABLED ACTIVE 16790400 ROUND - root
pl rootvol-01 rootvol DISABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-02 rootvol-01 rootdisk 31464191 16790400 0 - DIS

v swapvol - DISABLED ACTIVE 31464192 ROUND - swap
pl swapvol-01 swapvol DISABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-B0 swapvol-01 rootdisk 286657919 1 0 - DIS
sd rootdisk-01 swapvol-01 rootdisk 0 31464191 1 - DIS

v swapvol2 - DISABLED ACTIVE 30720000 SELECT - fsgen
pl swapvol2-01 swapvol2 DISABLED ACTIVE 30731520 CONCAT - RW
sd rootdisk-08 swapvol2-01 rootdisk 138495359 30731520 0 - DIS

v var - DISABLED ACTIVE 16790400 ROUND - fsgen
pl var-01 var DISABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-04 var-01 rootdisk 48254591 16790400 0 - DIS

Well, now we are good to execute vxunroot. We have captured as much information as possible and we know what the last good configuration looked like; if something goes wrong, all of the above will help us fix it.

PS: The 'vxunroot' script will fail if volumes other than the standard OS volumes (i.e. non-system volumes) are configured in the rootdg disk group.

root@xxxxxxxx# /etc/vx/bin/vxunroot

   /etc/vx/bin/vxunroot: Error: Disk contains more than 5
   volumes which cannot be unencapsulated. Move some volumes
   to different disk and try again

Now remove the non-system volumes; in this case that means opt_openv, oracle, pa and crash from the rootdg disk group. First check their mount points:

root@xxxxxxxx# ls -ld /opt/openv
drwxr-xr-x 15 root other 1024 Oct 23 2007 /opt/openv
root@xxxxxxxx# ls -ld /opt/pa
drwxr-xr-x 34 pa dba 2048 Jan 4 2010 /opt/pa
root@xxxxxxxx# ls -ld /opt/oracle/
drwxr-xr-x 6 oracle dba 2048 Feb 9 11:07 /opt/oracle/
root@xxxxxxxx# ls -ld /var/crash/
drwxr-xr-x 4 root root 512 Feb 28 2008 /var/crash/

- Un-mount the filesystems.

root@xxxxxxxx# for vol in /opt/openv /opt/pa /opt/oracle/ /var/crash/
> do
> umount $vol
> done

- Remove volumes.

root@xxxxxxxx# for vol in opt_openv oracle pa crash
> do
> vxedit -g rootdg -rf rm $vol
> done

- Execute "vxunroot"

root@xxxxxxxx# vxunroot
    VxVM vxunroot INFO V-5-2-1562

The following volumes detected on the root disk of your system are not
   derivatives of partitions that were on the pre-encapsulated root disk:


       swapvol2
   VxVM vxunroot NOTICE V-5-2-1560
Please move these volumes and comment out corresponding entries in
/etc/vfstab before you rerun vxunroot.

Ahh.. an error!!! If vxunroot detects volumes that are not derivatives of the pre-encapsulation OS partitions, it fails with the message shown above. To get rid of it, let’s remove this extra/added volume as well.

root@xxxxxxxx# vxedit -g rootdg -rf rm swapvol2
VxVM vxedit ERROR V-5-1-1242 Volume swapvol2 is opened, cannot remove
root@xxxxxxxx# swap -l
swapfile dev swaplo blocks free
/dev/vx/dsk/bootdg/swapvol 270,72002 16 31464176 31401184
/dev/vx/dsk/bootdg/swapvol2 270,72006 16 30719984 30656448
root@xxxxxxxx# swap -d /dev/vx/dsk/bootdg/swapvol2
root@xxxxxxxx# vxedit -g rootdg -rf rm swapvol2

root@xxxxxxxx# vxunroot
    VxVM vxunroot NOTICE V-5-2-1564
This operation will convert the following file systems from
   volumes to regular partitions:

          opt rootvol swapvol var
    VxVM vxunroot INFO V-5-2-2011
Replacing volumes in root disk to partitions will require a system
    reboot. If you choose to continue with this operation, system
    configuration will be updated to discontinue use of the volume
    manager for your root and swap devices.

Do you wish to do this now [y,n,q,?] (default: y) y
   VxVM vxunroot INFO V-5-2-287 Restoring kernel configuration...
   VxVM vxunroot INFO V-5-2-78
A shutdown is now required to install the new kernel.
    You can choose to shutdown now, or you can shutdown later, at your
convenience.

Do you wish to shutdown now [y,n,q,?] (default: n)
    VxVM vxunroot INFO V-5-2-258
Please shutdown before you perform any additional volume manager
    or disk reconfiguration. To shutdown your system cd to / and type

         shutdown -g0 -y -i6

PS: Make sure any data disk groups are deported before the shutdown, or in fact before running vxunroot at all.
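
The deport itself wasn’t captured in the log above; roughly, it would have looked like the sketch below (the filesystem list comes from the vfstab snapshot, and the loop style mirrors the umount loop used earlier):

root@xxxxxxxx# for fs in /adbm/oracle /adbm/devel /adbm/db /adbm/dumps
> do
> umount $fs
> done
root@xxxxxxxx# vxvol -g adbm stopall
root@xxxxxxxx# vxdg deport adbm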

Well, after the reboot I’m back on plain slice-based UFS filesystems –

root@xxxxxxxx# df -kh
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 7.9G 4.5G 3.3G 58% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 41G 1.2M 41G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
7.9G 4.5G 3.3G 58% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
7.9G 4.5G 3.3G 58% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c1t0d0s5 7.9G 5.4G 2.5G 69% /var
swap 41G 176K 41G 1% /tmp
swap 41G 56K 41G 1% /var/run
/dev/dsk/c1t0d0s6 7.9G 6.5G 1.3G 83% /opt

root@xxxxxxxx# cat /etc/release
Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 14 November 2006

Now let’s start preparing the environment for Live Upgrade.

- Copy the VTOC of the primary boot disk to the disk on which you wish to create the ABE.

root@xxxxxxxx# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2

- Remove the LU packages from the system and re-install the LU packages from the U9 media.

root@xxxxxxxx# pkginfo -x| grep "Live Upgrade"
SUNWlucfg Live Upgrade Configuration
SUNWlur Live Upgrade (root)
SUNWluu Live Upgrade (usr)
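
The removal/re-install commands weren’t captured in the log; here is a sketch, assuming the Update 9 media is mounted at /sol10u9 (the same mount point used later for luupgrade) and that the packages sit in the usual Solaris_10/Product directory:

root@xxxxxxxx# pkgrm SUNWluu SUNWlur SUNWlucfg
root@xxxxxxxx# pkgadd -d /sol10u9/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu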

 - Verify if all required patches for LU are installed on system.

root@xxxxxxxx# patchadd -p | grep 121430
Patch: 121430-53 Obsoletes: 121435-04 121437-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWlur SUNWluu
Patch: 121430-57 Obsoletes: 121435-04 121437-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWlur SUNWluu
Patch: 121430-68 Obsoletes: 121435-04 121437-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWlur SUNWluu
Patch: 121430-72 Obsoletes: 121435-04 121437-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWlur SUNWluu

root@xxxxxxxx# cat /etc/vfstab |grep c1t0d0 |awk '{print $1, $3}'
/dev/dsk/c1t0d0s1 -
/dev/dsk/c1t0d0s0 /
/dev/dsk/c1t0d0s5 /var
/dev/dsk/c1t0d0s6 /opt

 - Create alternate boot environment.

root@xxxxxxxx# time lucreate -c Sol10u3 -C /dev/dsk/c1t2d0s2 -m -:/dev/dsk/c1t2d0s1:swap -m /:/dev/dsk/c1t2d0s0:ufs -m /var:/dev/dsk/c1t2d0s5:ufs -m /opt:/dev/dsk/c1t2d0s6:ufs -n "Sol10u9"
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named Sol10u3
Creating initial configuration for primary boot environment Sol10u3
The device /dev/dsk/c1t2d0s2 is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name Sol10u3 PBE Boot Device /dev/dsk/c1t2d0s2
Comparing source boot environment Sol10u3 file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device /dev/dsk/c1t2d0s0 is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment Sol10u9
Source boot environment is Sol10u3
Creating boot environment Sol10u9
Creating file systems on boot environment Sol10u9
Creating ufs file system for / in zone (global) on /dev/dsk/c1t2d0s0
Creating ufs file system for /opt in zone (global) on /dev/dsk/c1t2d0s6
Creating ufs file system for /var in zone (global) on /dev/dsk/c1t2d0s5
Mounting file systems for boot environment Sol10u9
Calculating required sizes of file systems for boot environment Sol10u9
Populating file systems on boot environment Sol10u9
Checking selection integrity.
Integrity check OK.
Populating contents of mount point /
Populating contents of mount point /opt
Populating contents of mount point /var
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment Sol10u9
Creating compare database for file system /var
Creating compare database for file system /opt
Creating compare database for file system /
Updating compare databases on boot environment Sol10u9.
Making boot environment Sol10u9 bootable.
Population of boot environment Sol10u9 successful.
Creation of boot environment Sol10u9 successful.

real 40m17.793s
user 4m45.033s
sys 4m28.322s

root@xxxxxxxx# lustatus

Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10u3 yes yes yes no -
Sol10u9 yes no no yes -

The first task now is to patch the system to the latest patch level, 144488-12 (Kernel Bug Fixes post Solaris 10 9/10 (Update 9)).

All patches are kept under the directory /var/logging/patching, hence:

# cd /var/logging/patching
# luupgrade -n Sol10u9 -s /var/logging/patching -t `cat patch_order`
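
As an optional sanity check (not part of the original run), the patched ABE can be mounted with lumount and /var/sadm/patch spot-checked before moving on; /a is just an example mount point:

# lumount Sol10u9 /a
# ls /a/var/sadm/patch | tail
# luumount Sol10u9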

Once the patching is done, move on to the OS upgrade.
Let’s first try a dry run (option -N) to see a 'projection' of upgrading the new BE to Solaris 10 U9.

root@xxxxxxxx# luupgrade -u -n Sol10u9 -N -s /mnt/
63521 blocks
miniroot filesystem is lofs
Mounting miniroot at /mnt//Solaris_10/Tools/Boot
ERROR: The auto registration file () does not exist or incomplete.
The auto registration file is mandatory for this upgrade.
Use -k (filename) argument along with luupgrade command.

This is a known issue. If you try to upgrade to the latest Solaris 10 update (U9), one extra step is now required for luupgrade to succeed. As mentioned in the Oracle Solaris 10 9/10 Release Notes, a new Auto Registration mechanism has been added to this release to facilitate registering the system using your Oracle support credentials.

If you run the classic luupgrade incantation, it fails with the message reported above, so you now need to supply the Auto Registration choice as a mandatory parameter. Here is how that looks:

root@xxxxxxxx# echo "auto_reg=disable" > /tmp/sysidcfg
root@xxxxxxxx# luupgrade -u -n Sol10u9 -N -s /mnt/ -k /tmp/sysidcfg
63521 blocks
miniroot filesystem is lofs
Mounting miniroot at /mnt//Solaris_10/Tools/Boot
#######################################################################
NOTE: To improve products and services, Oracle Solaris communicates
configuration data to Oracle after rebooting.
You can register your version of Oracle Solaris to capture this data
for your use, or the data is sent anonymously.
For information about what configuration data is communicated and how
to control this facility, see the Release Notes or
www.oracle.com/goto/solarisautoreg.
INFORMATION: After activated and booted into new BE Sol10u9,
Auto Registration happens automatically with the following Information
autoreg=disable
#######################################################################
pktool: Invalid verb: genkey
encrypt: cannot open /var/run/autoreg_key
encrypt: invalid key.
Validating the contents of the media /mnt/
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version 10
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE Sol10u9
Performing the operating system upgrade of the BE Sol10u9
Execute Command: /mnt//Solaris_10/Tools/Boot/usr/sbin/install.d/pfinstall -L /a -c /mnt/ /tmp/.liveupgrade.23762.28500/.luupgrade.profile.upgrade
Adding operating system patches to the BE Sol10u9
Execute Command /mnt//Solaris_10/Tools/Boot/usr/sbin/install.d/install_config/patch_finish -R "/a" -c "/mnt/"

This looks okay. Let’s run it for real.

root@xxxxxxxx# time luupgrade -u -n Sol10u9 -s /sol10u9 -k /tmp/sysidcfg
63521 blocks
miniroot filesystem is lofs
Mounting miniroot at /sol10u9/Solaris_10/Tools/Boot
#######################################################################
NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously.
For information about what configuration data is communicated and how
to control this facility, see the Release Notes or
www.oracle.com/goto/solarisautoreg.
INFORMATION: After activated and booted into new BE Sol10u9,
Auto Registration happens automatically with the following Information
autoreg=disable
#######################################################################
pktool: Invalid verb: genkey
encrypt: cannot open /var/run/autoreg_key
encrypt: invalid key.
Validating the contents of the media {/sol10u9}.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version 10.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE Sol10u9
Determining packages to install or upgrade for BE Sol10u9.
Performing the operating system upgrade of the BE Sol10u9.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

ERROR: Installation of the packages from this media of the media failed; pfinstall returned these diagnostics:

Processing profile

Loading local environment and services
ERROR: Failure loading local environment
Stat failed: /a///var/sadm/install_data/.clustertoc


INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot environment Sol10u9 contains a log of the upgrade operation.
INFORMATION: Review the files listed above. Remember that all of the files are located on boot environment . Before you activate boot environment , determine if any additional system maintenance is required or if additional media of the software distribution must be installed.
The Solaris upgrade of the boot environment Sol10u9 failed.
Installing failsafe
Failsafe install is complete.

real 0m58.864s
user 0m10.915s
sys 0m23.483s

Ohh boy... an ERROR!!! Time to find a solution...

To get rid of this issue, I manually copied the files /var/sadm/system/admin/.clustertoc and /var/sadm/system/admin/CLUSTER to the ABE by mounting it and placing them under /a/var/sadm/install_data/. Unbelievable, the upgrade started working! I’m still not sure why lucreate, or rather pfinstall, looks for the .clustertoc file under /var/sadm/install_data instead of its default location /var/sadm/system/admin; the reason might simply be that U3 is very old and loaded with bugs.
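
Roughly, the workaround amounted to the following (the exact commands were not preserved, so treat this as a sketch): mount the ABE with lumount, create the directory pfinstall was complaining about, copy the two files into it, and unmount again.

root@xxxxxxxx# lumount Sol10u9 /a
root@xxxxxxxx# mkdir -p /a/var/sadm/install_data
root@xxxxxxxx# cp -p /var/sadm/system/admin/.clustertoc /var/sadm/system/admin/CLUSTER /a/var/sadm/install_data/
root@xxxxxxxx# luumount Sol10u9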

root@xxxxxxxx# time luupgrade -u -n Sol10u9 -s /sol10u9 -k /tmp/sysidcfg
63521 blocks
miniroot filesystem is lofs
Mounting miniroot at /sol10u9/Solaris_10/Tools/Boot
#######################################################################
NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously.
For information about what configuration data is communicated and how
to control this facility, see the Release Notes or
www.oracle.com/goto/solarisautoreg.
INFORMATION: After activated and booted into new BE Sol10u9, Auto Registration happens automatically with the following Information
autoreg=disable
######################################################################
pktool: Invalid verb: genkey

encrypt: cannot open /var/run/autoreg_key
encrypt: invalid key.
Validating the contents of the media {/sol10u9}.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains version (10).
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE {Sol10u9}.
Determining packages to install or upgrade for BE {Sol10u9}.
Performing the operating system upgrade of the BE {Sol10u9}.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Installation of the packages from this media is complete.
Upgrading Solaris: 100% completed
Updating package information on boot environment {Sol10u9}.
Package information successfully updated on boot environment .
Adding operating system patches to the BE {Sol10u9}.
The operating system patch installation is complete.
INFORMATION: The file (/var/sadm/system/logs/upgrade_log) on boot environment contains a log of the upgrade operation.
INFORMATION: The file (/var/sadm/system/data/upgrade_cleanup) on boot environment {Sol10u9} contains a log of cleanup operations required.
WARNING: (1) packages failed to install properly on boot environment {Sol10u9}.
INFORMATION: The file {/var/sadm/system/data/upgrade_failed_pkgadds} on boot environment {Sol10u9} contains a list of packages that failed to upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files are located on boot environment . Before you activate boot environment {Sol10u9}, determine if any additional system maintenance is required or if additional media of the software distribution must be installed.
The Solaris upgrade of the boot environment {Sol10u9} is partially complete.
Installing failsafe
Failsafe install is complete.

real 20m4.765s
user 6m53.061s
sys 3m33.470s

- Activate the ABE

root@xxxxxxxx# luactivate Sol10u9
A Live Upgrade Sync operation will be performed on startup of boot environment Sol10u9
WARNING: 1 packages failed to install properly on boot environment Sol10u9
INFORMATION: /var/sadm/system/data/upgrade_failed_pkgadds on boot environment Sol10u9 contains a list of packages that failed to upgrade or install properly. Review the file before you reboot the system to determine if any additional system maintenance is required.
WARNING: The following files have changed on both the current boot environment Sol10u3 zone global and the boot environment to be activated :
/var/mail/root-noc
INFORMATION: The files listed above are in conflict between the current boot environment Sol10u3 zone global and the boot environment to be activated Sol10u9 These files will not be automatically synchronized from the current boot environment Sol10u3 when boot environment Sol10u9 is activated.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:
 setenv boot-device
/pci@8,600000/SUNW,qlc@2/fp@0,0/disk@w500000e014334f61,0:a

3. Boot to the original boot environment by typing:
boot
**********************************************************************
Modifying boot archive service
Activation of boot environment Sol10u9 successful.

Let’s reboot the server into the activated ABE.

root@xxxxxxxx# sync;sync;sync;shutdown -y -i6 -g0

root@xxxxxxxx# cat /etc/release && echo "--------- `uname -a` ----------"
Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 11 August 2010
--------- SunOS xxxxxxxx 5.10 Generic_144488-12 sun4u sparc SUNW,Sun-Fire-V890 ----------

root@xxxxxxxx# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10u3 yes yes no no -
Sol10u9 yes no yes no -

After the system came up, I’m on Solaris 10 U9 and the latest patch level. Cool, Milestone #1 is finished. Let’s move on to Milestone #2.

Milestone #2 covers encapsulating the disk back into rootdg, re-creating the added/extra volumes, restoring their data (if required), importing the data disk group, deleting the PBE, and mirroring the second disk into rootdg.

To encapsulate the boot drive again, run the vxdiskadm command and select option 2.

# vxdiskadm (option 2)

[Many lines here, skipped for brevity]

The Disk_2 disk has been configured for encapsulation.
The first stage of encapsulation has completed successfully. You should now reboot your system at the earliest possible opportunity. The encapsulation will require two or three reboots which will happen automatically after the next reboot. To reboot execute the command:


shutdown -g0 -y -i6

This will update the /etc/vfstab file so that volume devices are used to mount the file systems on this disk device. You will need to update any other references such as backup scripts, databases, or manually created swap devices.

After the server reboots, you will find the operating system volumes back under VERITAS Volume Manager control.

root@xxxxxxxx# df -kh
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/bootdg/rootvol 7.9G 5.9G 1.9G 76% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 41G 1.6M 41G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
7.9G 5.9G 1.9G 76% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
7.9G 5.9G 1.9G 76% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/vx/dsk/bootdg/var 7.9G 2.6G 5.2G 34% /var
swap 41G 272K 41G 1% /tmp
swap 41G 56K 41G 1% /var/run
swap 41G 0K 41G 0% /dev/vx/dmp
swap 41G 0K 41G 0% /dev/vx/rdmp
/dev/vx/dsk/bootdg/opt 7.9G 6.5G 1.3G 83% /opt

Now re-create the extra volumes we removed at the time of vxunroot (opt_openv, oracle, pa, crash and swapvol2).

- Create the swap volume

root@xxxxxxxx# vxassist -g rootdg make swapvol2 15g
root@xxxxxxxx# swap -a /dev/vx/dsk/rootdg/swapvol2
root@xxxxxxxx# swap -l
swapfile dev swaplo blocks free
/dev/vx/dsk/bootdg/swapvol 270,72002 16 31464176 31464176
/dev/vx/dsk/rootdg/swapvol2 270,72007 16 31457264 31457264
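
To make the second swap device persistent across reboots, also put the swapvol2 line back into /etc/vfstab, exactly as it appeared in the vfstab snapshot taken at the beginning:

/dev/vx/dsk/bootdg/swapvol2 - - swap - no nologging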

- Create volumes

root@xxxxxxxx# vxassist -g rootdg make opt_openv 2g
root@xxxxxxxx# vxassist -g rootdg make pa 4g
root@xxxxxxxx# vxassist -g rootdg make oracle 10g
root@xxxxxxxx# vxassist -g rootdg make crash 15g

- Create VxFS filesystems

root@xxxxxxxx# for vol in opt_openv pa oracle crash
> do
> mkfs -F vxfs /dev/vx/rdsk/rootdg/$vol
> done
version 6 layout
4194304 sectors, 2097152 blocks of size 1024, log size 16384 blocks
largefiles supported
version 6 layout
8388608 sectors, 4194304 blocks of size 1024, log size 16384 blocks
largefiles supported
version 6 layout
20971520 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported
version 6 layout
31457280 sectors, 15728640 blocks of size 1024, log size 16384 blocks
largefiles supported

- Create mountpoints.

root@xxxxxxxx# mkdir -p /opt/openv /opt/pa /opt/oracle

Define the entries in /etc/vfstab:

/dev/vx/dsk/bootdg/crash /dev/vx/rdsk/bootdg/crash /var/crash ufs 2 yes nologging
/dev/vx/dsk/bootdg/oracle /dev/vx/rdsk/bootdg/oracle /opt/oracle vxfs 2 yes -
/dev/vx/dsk/bootdg/opt_openv /dev/vx/rdsk/bootdg/opt_openv /opt/openv vxfs 2 yes -
/dev/vx/dsk/bootdg/pa /dev/vx/rdsk/bootdg/pa /opt/pa vxfs 2 yes -

Mount these volumes with the mountall command (for example, "mountall -l" mounts all local filesystems listed in /etc/vfstab).

root@xxxxxxxx# df -kh /opt/openv /opt/pa /opt/oracle /var/crash

Filesystem size used avail capacity Mounted on
/dev/vx/dsk/bootdg/opt_openv 2.0G 17M 1.9G 1% /opt/openv
/dev/vx/dsk/bootdg/pa 4.0G 18M 3.7G 1% /opt/pa
/dev/vx/dsk/bootdg/oracle 10G 19M 9.4G 1% /opt/oracle
/dev/vx/dsk/bootdg/crash 15G 20M 14G 1% /var/crash

Well, now we simply restore the backed-up data into these volumes.
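
The restore itself wasn’t logged. Assuming the data had been archived with tar into a staging area as in the backup sketch near the top of this post (the /var/tmp/volbackup path there is illustrative), restoring is just the reverse:

root@xxxxxxxx# for fs in /opt/openv /opt/oracle /opt/pa /var/crash
> do
> name=`echo $fs | tr '/' '_'`
> (cd $fs && tar xf /var/tmp/volbackup/$name.tar)
> done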

The next job is to import the disk group we deported at the time of vxunroot.

root@xxxxxxxx# vxdg import adbm

After importing the DG, it may be the case that all volumes belonging to it are in the “DISABLED CLEAN” state, so start all the volumes manually using the command below.

root@xxxxxxxx# vxvol -g adbm startall

Now you should see all the volumes as “ENABLED ACTIVE”.

Mount the filesystems belonging to this DG manually using the mountall command –

root@xxxxxxxx# df -kh | grep adbm
/dev/vx/dsk/adbm/oracle 32G 75M 30G 1% /adbm/oracle
/dev/vx/dsk/adbm/adbm_db 233G 21G 198G 10% /adbm/db
/dev/vx/dsk/adbm/dumps 291G 216G 70G 76% /adbm/dumps
/dev/vx/dsk/adbm/devel 116G 17G 93G 16% /adbm/devel

Well, it has been two weeks now with no complaints from the customer, and he has confirmed that everything looks perfect to him. Now we can proceed with the post-patch/upgrade work.

I found a quick-and-dirty way to determine the current boot disk, so let’s find out which disk we booted from.

root@xxxxxxxx# cat get_bootdisk
#!/bin/sh
# Map the OBP bootpath to its logical (cXtXdX) device name.
BOOTPATH=`prtconf -pv | grep bootpath | tr -d "'" | awk '{print $2}'`
if [ -n "`echo $BOOTPATH | grep "/disk"`" ] ; then
    # The bootpath contains "disk", but the /devices block device contains
    # either "sd" or "ssd"
    BOOTPATH=`echo $BOOTPATH | sed 's/disk@//'`
    BOOT_DISK=`ls -l /dev/dsk | sed -e 's/ssd@//' -e 's/sd@//' | grep "$BOOTPATH" 2>/dev/null | awk '{print $9}' | sed 's/s[0-7]//'`
else
    BOOT_DISK=`ls -l /dev/dsk | grep "$BOOTPATH" 2>/dev/null | awk '{print $9}' | sed 's/s[0-7]//'`
fi

if [ -n "$BOOT_DISK" ] ; then
    echo "Your boot disk is ${BOOT_DISK}."
else
    echo "Unable to determine logical boot disk."
fi

root@xxxxxxxx# ./get_bootdisk
Your boot disk is c1t2d0.

Now that we no longer need the ability to back out, let’s delete the BE named “Sol10u3”.

root@xxxxxxxx# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10u3 yes no no yes -
Sol10u9 yes yes yes no -

root@xxxxxxxx# ludelete Sol10u3
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment Sol10u3 deleted.

Deleting boot environment Sol10u3 freed up disk c1t0d0. Let’s add it to rootdg as the mirror disk.

root@xxxxxxxx# vxdisksetup -if Disk_0 format=sliced
root@xxxxxxxx# vxdg -g rootdg adddisk rootmirror=Disk_0
# vxmirror -g rootdg rootdisk
! vxassist -g rootdg mirror swapvol
! vxassist -g rootdg mirror rootvol
! vxassist -g rootdg mirror var
! vxassist -g rootdg mirror opt
! vxassist -g rootdg mirror opt_openv
! vxassist -g rootdg mirror pa
! vxassist -g rootdg mirror oracle
! vxassist -g rootdg mirror crash
! vxassist -g rootdg mirror swapvol2
! vxbootsetup -g rootdg

You can check the copy progress using –

root@xxxxxxxx# vxtask -l list
Task: 168 RUNNING
Type: ATCOPY
Operation: PLXATT Vol opt Plex opt-02 Dg rootdg
Started: Fri Apr 27 10:28:04 2012
Throttle: 0
Progress: 39.37% 6610944 of 16790400 Blocks
Work time: 1 minute, 31 seconds (02:20 remaining)
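
A small convenience loop (my addition, not from the original run) to wait until all the attach/resync tasks have drained before calling the job done:

root@xxxxxxxx# while vxtask -l list | grep ATCOPY > /dev/null
> do
> sleep 300
> done

Once vxtask list comes back empty, vxprint should show every plex as ENABLED ACTIVE, which is exactly what the final output below confirms.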

So here is what we achieved!

To conclude this task:

- The system has been upgraded to Solaris 10 Update 9
- The system has been patched to the latest patch level, i.e. Generic_144488-12
- Post patch/upgrade tasks are complete, and the system now has mirrored volumes in place.

root@xxxxxxxx# cat /etc/release; echo "=========="; uname -a; echo "========="; vxprint -htg rootdg

Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 11 August 2010
=========================================================
SunOS xxxxxxxx 5.10 Generic_144488-12 sun4u sparc SUNW,Sun-Fire-V890
=========================================================
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME NVOLUME KSTATE STATE
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO

dg rootdg default default 72000 1175781890.8.xxxxxxxx

dm hot01 Disk_4 auto 20095 286657920 SPARE
dm rootdisk Disk_2 auto 4096 286657920 -
dm rootmirror Disk_0 auto 20095 286657920 -

v crash - ENABLED ACTIVE 31457280 SELECT - fsgen
pl crash-01 crash ENABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-08 crash-01 rootdisk 115436543 31464192 0 Disk_2 ENA
pl crash-02 crash ENABLED ACTIVE 31464192 CONCAT - RW
sd rootmirror-08 crash-02 rootmirror 115436544 31464192 0 Disk_0 ENA

v opt - ENABLED ACTIVE 16790400 ROUND - fsgen
pl opt-01 opt ENABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-03 opt-01 rootdisk 65044991 16790400 0 Disk_2 ENA
pl opt-02 opt ENABLED ACTIVE 16790400 CONCAT - RW
sd rootmirror-04 opt-02 rootmirror 65044992 16790400 0 Disk_0 ENA

v opt_openv - ENABLED ACTIVE 4194304 SELECT - fsgen
pl opt_openv-01 opt_openv ENABLED ACTIVE 4212864 CONCAT - RW
sd rootdisk-05 opt_openv-01 rootdisk 81835391 4212864 0 Disk_2 ENA
pl opt_openv-02 opt_openv ENABLED ACTIVE 4212864 CONCAT - RW
sd rootmirror-05 opt_openv-02 rootmirror 81835392 4212864 0 Disk_0 ENA

v oracle - ENABLED ACTIVE 20971520 SELECT - fsgen
pl oracle-01 oracle ENABLED ACTIVE 20982912 CONCAT - RW
sd rootdisk-07 oracle-01 rootdisk 94453631 20982912 0 Disk_2 ENA
pl oracle-02 oracle ENABLED ACTIVE 20982912 CONCAT - RW
sd rootmirror-07 oracle-02 rootmirror 94453632 20982912 0 Disk_0 ENA

v pa - ENABLED ACTIVE 8388608 SELECT - fsgen
pl pa-01 pa ENABLED ACTIVE 8405376 CONCAT - RW
sd rootdisk-06 pa-01 rootdisk 86048255 8405376 0 Disk_2 ENA
pl pa-02 pa ENABLED ACTIVE 8405376 CONCAT - RW
sd rootmirror-06 pa-02 rootmirror 86048256 8405376 0 Disk_0 ENA

v rootvol - ENABLED ACTIVE 16790400 ROUND - root
pl rootvol-01 rootvol ENABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-02 rootvol-01 rootdisk 31464191 16790400 0 Disk_2 ENA
pl rootvol-02 rootvol ENABLED ACTIVE 16790400 CONCAT - RW
sd rootmirror-02 rootvol-02 rootmirror 31464192 16790400 0 Disk_0 ENA

v swapvol - ENABLED ACTIVE 31464192 ROUND - swap
pl swapvol-01 swapvol ENABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-B0 swapvol-01 rootdisk 286657919 1 0 Disk_2 ENA
sd rootdisk-01 swapvol-01 rootdisk 0 31464191 1 Disk_2 ENA
pl swapvol-02 swapvol ENABLED ACTIVE 31464192 CONCAT - RW
sd rootmirror-01 swapvol-02 rootmirror 0 31464192 0 Disk_0 ENA

v swapvol2 - ENABLED ACTIVE 31457280 SELECT - fsgen
pl swapvol2-01 swapvol2 ENABLED ACTIVE 31464192 CONCAT - RW
sd rootdisk-09 swapvol2-01 rootdisk 146900735 31464192 0 Disk_2 ENA
pl swapvol2-02 swapvol2 ENABLED ACTIVE 31464192 CONCAT - RW
sd rootmirror-09 swapvol2-02 rootmirror 146900736 31464192 0 Disk_0 ENA

v var - ENABLED ACTIVE 16790400 ROUND - fsgen
pl var-01 var ENABLED ACTIVE 16790400 CONCAT - RW
sd rootdisk-04 var-01 rootdisk 48254591 16790400 0 Disk_2 ENA
pl var-02 var ENABLED ACTIVE 16790400 CONCAT - RW
sd rootmirror-03 var-02 rootmirror 48254592 16790400 0 Disk_0 ENA

Sorry for publishing such a long document, but it was needed to record each and every part of the procedure. I hope someone somewhere finds it helpful.
