Migrating zones between sun4u and sun4v systems
============================================
I've recently started on a new project that is a mixture of UFS-to-ZFS migration, zone/container migration from one host to another, and patching. The real challenge is that I have to do it with minimum downtime and be fast and accurate in execution.
Since I've already started on the project, I did some detailed study of a few related subjects before jumping in, and I thought I'd publish my findings on my blog.
The first question that came to my mind: if a zone resides on a V890 (sun4u architecture) and I have to move it to a SPARC Enterprise T5120 (sun4v architecture), is that supported, and if so, how is it done? The paragraphs below cover it.
A recent (well, not that recent) RFE made attach work across sun4u and sun4v: 6576592 "RFE: zoneadm detach/attach should work between sun4u and sun4v architecture."
Starting with the Solaris 10 10/08 release, zoneadm attach with the -u option also enables migration between machine classes, such as from sun4u to sun4v.
Note for Solaris 10 10/08: If the new host has later versions of the zone-dependent packages and their associated patches, using zoneadm attach with the -u option updates those packages within the zone to match the new host. The update on attach software looks at the zone that is being migrated and determines which packages must be updated to match the new host. Only those packages are updated. The rest of the packages, and their associated patches, can vary from zone to zone.
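As a minimal sketch of update-on-attach (the zone name "zone1" is an example):

```shell
# On the new host, after the zone is configured and its zonepath
# is accessible, attach with -u. This compares the zone's packages
# against the global zone and updates only the zone-dependent
# packages that must match the new host.
zoneadm -z zone1 attach -u
```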
Okay, now that this doubt is cleared up, let's look at how the migration is done and what steps are involved.
Overview -
Migrating a zone from one system to another involves the following steps:
1. Detach the zone. This leaves the zone on the originating system in the "configured" state. Behind the scenes, the system generates a "manifest" of the information needed to validate that the zone can be successfully attached to a new host machine.
2. Migrate the data, or re-zone the LUNs if the zones are on SAN. At this stage we either move the data or rezone the storage LUNs that hold the zone over to the new host system.
3. Create the zone configuration. On the new host, create the zone configuration using the zonecfg command.
4. Attach, and if required update (-u), the zone. This validates that the host is capable of supporting the zone before the attach can succeed. The zone is left in the "installed" state.
5. Boot the zone and have fun, as that completes the zone migration.
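The steps above can be sketched as the following command sequence (the zone name "zone1" and the file /tmp/zone1.cfg are example names, and the data-migration step is covered in the next section):

```shell
# --- On the source host ---
zoneadm -z zone1 halt                      # stop the zone
zoneadm -z zone1 detach                    # step 1: zone goes to "configured"
zonecfg -z zone1 export > /tmp/zone1.cfg   # capture the configuration

# Step 2: move the zonepath data, or re-zone the SAN LUNs so the
# new host can see it, and copy /tmp/zone1.cfg to the new host.

# --- On the destination host ---
zonecfg -z zone1 -f /tmp/zone1.cfg         # step 3: recreate the configuration
zoneadm -z zone1 attach -u                 # step 4: attach, updating packages
zoneadm -z zone1 boot                      # step 5: boot the zone
```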
Let's talk more about point #2.
How to Move the zonepath to a new Host?
There are several ways to create an archive of the zonepath; the cpio and pax utilities both work.
There are also several ways to transfer the archive to the new host; the mechanism depends on the local configuration. Options include scp, FTP, or, if the zonepath is on ZFS, zfs send/receive.
In some cases, such as with a SAN, the zonepath data might not actually move: the SAN is simply reconfigured so the zonepath is visible on the new host. This is what we do in our environment, and it's the reason I prefer to have the zone root on SAN.
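Two of these options sketched out (all paths, hostnames, and dataset names here are examples):

```shell
# Archive the zonepath with cpio on the source host...
cd /zone1/zonepath
find . -print | cpio -o > /tmp/zone1.cpio

# ...transfer it, then unpack on the destination host:
scp /tmp/zone1.cpio gz1_dest:/tmp/
#   (on gz1_dest)
#   mkdir -p /zone1/zonepath && cd /zone1/zonepath
#   cpio -id < /tmp/zone1.cpio

# Or, if the zonepath is its own ZFS dataset, send it directly:
zfs snapshot rpool/zone1@migrate
zfs send rpool/zone1@migrate | ssh gz1_dest zfs receive rpool/zone1
```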
Try before you do
Starting with Solaris 10 5/08, you can perform a trial run before the zone is moved to the new machine by using the "no execute" option, -n.
Here is how it actually works:
The zoneadm detach subcommand is used with the -n option to generate a manifest on a running zone without actually detaching the zone. The state of the zone on the originating system is not changed. The zone manifest is sent to stdout.
Then we can direct this output to a file or pipe it to a remote command to be immediately validated on the target host. The zoneadm attach subcommand is used with the -n option to read this manifest and verify that the target machine has the correct configuration to host the zone without actually doing an attach.
The zone does not have to be configured on the target system before doing a trial-run attach.
E.g.
gz1_source:/
# uname -m
sun4u
gz1_dest:/
# uname -m
sun4v
gz1_source:/
# zoneadm list -icv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
7 zone1 running /zone1/zonepath native shared
gz1_source:/
# zoneadm -z zone1 detach -n | ssh gz1_dest zoneadm attach -n -
The validation output appears on the source host's screen, which is stdout.
I hope this information helps me get started with the project work.
Doesn't work.
zoneadm -z testzone4 detach -n|ssh 143.222.51.245 zoneadm attach -n -
zoneadm: Segmentation Fault - core dumped
Hi,
May I please know your Solaris Update version? There is a known bug for Solaris 10 U7.
http://wesunsolve.net/bugid/id/6845531
If it's not U7: after the command fails with a segmentation fault, check whether a core file was created. If it's there, run pstack on it and see if you get any hints. Also re-run the command under truss and see if it gives any pointers.
NOTE: You will have to analyze the truss output carefully, and it may be quite huge.
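For example, the debugging steps above would look something like this (the core file path and zone name are examples):

```shell
# Print the stack trace from the core file left by the crash:
pstack /var/core/core.zoneadm.1234

# Re-run the failing command under truss, following child
# processes, and capture the syscall trace to a file:
truss -f -o /tmp/zoneadm.truss zoneadm -z testzone4 detach -n

# Then search the trace for the last syscalls before the fault.
```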
Best of Luck!
Nilesh
Hi,
If you are on a release newer than Solaris 10 U8, run
unset LD_LIBRARY_PATH
and try again. It will do the trick.
Regards,
JJ
Great, this is something new to learn. Thanks JJ.