

Saturday, November 2, 2013

Adding UFS, ZFS, VxVM FS, Raw FS, LOFS to Non-Global Zone - Some useful examples

In day-to-day administration we deal with tasks like adding a raw device to a zone, delegating ZFS datasets to a non-global zone, adding a filesystem or volume, and so on.

In this post I'll only be talking about the different types of filesystem operations associated with zones.

Before we start I would like to reiterate - Zones are cool and dynamic !!!

So let's start with -

Adding UFS filesystem to Non-Global Zone
____________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set special=/dev/md/dsk/d100
zonecfg:zone1:fs> set raw=/dev/md/rdsk/d100
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> add options [nodevices,logging]
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit


Adding ZFS filesystem/dataset/Volume to Non-Global Zone
____________________________________

Points to ponder before associating ZFS datasets with zones -

  • You can add a ZFS file system or a clone to a non-global zone, with or without delegating administrative control.
  • You can add a ZFS volume as a device to a non-global zone.
  • You cannot associate ZFS snapshots with zones.
  • A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy. If the filesystem is created in the global zone and added to the local zone via zonecfg, it may end up assigned to more than one zone unless the mountpoint is set to legacy (see the sketch just after this list).
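
For example, a minimal sketch of preparing such a dataset in the global zone, using the same dataset name as the examples below:

global # zfs create dpool/oradata-u01
global # zfs set mountpoint=legacy dpool/oradata-u01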

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=zfs
zonecfg:zone1:fs> set special=dpool/oradata-u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit


Adding ZFS filesystem via lofs filesystem
__________________________________________


In order to use lofs, the actual ZFS filesystem must already be mounted in the global zone.
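
A quick sanity check first (same dataset name as above; the mountpoint shown by this command should match the special path used below):

global # zfs get mountpoint,mounted dpool/oradata-u01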

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set special=/oradata-u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit


If the zone is already running, you can also create the mountpoint and do the lofs mount by hand from the global zone:

global # mkdir -p /zoneroot/zone1/root/u01
global # mount -F lofs /oradata-u01 /zoneroot/zone1/root/u01

global # zlogin zone1 df -h /u01
Filesystem             size   used  avail capacity  Mounted on
/oradata-u01             3G    21K   3G     1%      /u01

Delegating Datasets to a Non-Global Zone
_________________________________________


global # zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=dpool/oradata-u01
zonecfg:zone1:dataset> set alias=oradata-pool
zonecfg:zone1:dataset> end


Within the zone1 zone, this file system is not accessible as dpool/oradata-u01, but as a virtual pool named oradata-pool. The zone administrator is able to set properties on the dataset, as well as create children. It allows the zone administrator to take snapshots, create clones, and otherwise control the entire namespace below the added dataset.
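
For illustration, here is the kind of thing the zone administrator can then do from inside zone1 once the zone has been rebooted and the dataset is visible (the child dataset name below is hypothetical):

zone1 # zfs create oradata-pool/logs
zone1 # zfs snapshot oradata-pool/logs@baseline
zone1 # zfs list -r oradata-pool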

Adding ZFS Volumes to a Non-Global Zone
________________________________________


global # zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/zvol/dsk/dpool/oradata/u01
zonecfg:zone1:device> end
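
Note that the volume itself has to exist in the global zone first; a minimal sketch (the 10g size is hypothetical, and if the application needs the raw device, add the matching /dev/zvol/rdsk path as a second device resource the same way):

global # zfs create -V 10g dpool/oradata/u01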


Adding VxVM filesystem to Non-Global Zone
___________________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=vxfs
zonecfg:zone1:fs> set special=/dev/vx/dsk/oradg/u01
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/oradg/u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit


Create & Add UFS filesystem on VxVM volume
___________________________________________


global # vxassist -g zone1_dg make home-ora1-zone1 1g
global # mkfs -F ufs /dev/vx/rdsk/zone1_dg/home-ora1-zone1 2097152


NOTE: 2097152 is the size in 512-byte sectors (2097152 x 512 bytes = 1 GB, matching the vxassist size above).

global # mount -F ufs /dev/vx/dsk/zone1_dg/home-ora1-zone1 /zones/zone1/root/home/oradata/ora1

=========================================================================

Adding the filesystem to Non-Global Zone
____________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> set special=/dev/vx/dsk/zone1_dg/home-ora1-zone1
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/zone1_dg/home-ora1-zone1
zonecfg:zone1:fs> set dir=/home/oradata/ora1
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
 

global # zlogin zone1 df -k | grep ora1
/home/oradata/ora1    986095    1041  886445     1%    /home/oradata/ora1


Adding raw device to Non-Global Zone
______________________________________


global # zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
zonecfg:zone1:device> end
zonecfg:zone1> exit


Ideally we need to reboot the non-global zone in order to see the added raw device; however, there is a hack available to do it dynamically. See - Dynamically-adding-raw-device-to-Non-global-zone
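
For reference, the usual trick goes roughly like this: look up the device's major/minor numbers in the global zone and recreate the node under the zone's dev directory. This is a sketch only - the major/minor numbers (118, 72) and the /zoneroot/zone1 zonepath are hypothetical, and the zonecfg entry above is still what makes the device persist across reboots:

global # ls -lL /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
crw-r-----   1 root   sys   118, 72 Nov  2 10:00 /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
global # mkdir -p /zoneroot/zone1/dev/rdsk
global # mknod /zoneroot/zone1/dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0 c 118 72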

Well, this is it. Hope this helps our community friends in their day-to-day work!

BTW, in India it's the festive season - Diwali celebration time!!!!
So to all my friends - wishing you and your family a very happy, prosperous & safe Diwali.

Enjoy !!!!

Friday, May 14, 2010

Migrating zones between sun4u and sun4v systems


I've recently started off on my new project, which is a mixture of UFS --> ZFS migration, zones/containers migration from one host to another, and patching. The real challenge is that I have to do it with minimum downtime and have to be "real fast & accurate" at execution.

Since I've already started on this project, before jumping in I did some detailed study on a few subjects related to it, so I thought of publishing my findings on my blog.

The first question that came to my mind was - if the zone is residing on a V890, i.e. sun4u arch, and I have to move it to a SPARC-Enterprise-T5120, i.e. sun4v arch, is that supported, and if yes, how can it be done? The paragraphs below talk about it.

A recent (well, not that recent) RFE made attach work across sun4u and sun4v - 6576592 RFE: zoneadm detach/attach should work between sun4u and sun4v architecture.
Starting with the Solaris 10 10/08 release, zoneadm attach with the -u option also enables migration between machine classes, such as from sun4u to sun4v.

Note for Solaris 10 10/08: If the new host has later versions of the zone-dependent packages and their associated patches, using zoneadm attach with the -u option updates those packages within the zone to match the new host. The update on attach software looks at the zone that is being migrated and determines which packages must be updated to match the new host. Only those packages are updated. The rest of the packages, and their associated patches, can vary from zone to zone.


Okay, now that this doubt is all clear, let's move ahead and look at how to do the migration and what steps are involved.

Overview -


Migrating a zone from one system to another involves the following steps:

1. Detaching the Zone. This leaves the zone on the originating system in the "configured" state. Behind the scenes, the system will generate a "manifest" of the information needed to validate that the zone can be successfully attached to a new host machine.

2. Data Migration - or, if your zones are on SAN, re-zoning those LUNs. At this stage we may choose to move the data, or rezone the storage LUNs which represent the zone, to the new host system.

3. Zone Configuration. At this stage we have to create the zone configuration on the new host using the zonecfg command.

4. Attaching and, if required, updating (-u) the zone. This validates that the host is capable of supporting the zone before the attach can succeed. The zone is left in the "installed" state.

5. Booting the zone - and have fun, as this completes the zone migration (see the end-to-end sketch just after this list).
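
Here is a condensed sketch of those five steps in command form (the zone name and zonepath are placeholders, and -u is only needed when the target host has newer packages):

gz1_source # zoneadm -z zone1 halt
gz1_source # zoneadm -z zone1 detach
(move the zonepath data, or re-zone the SAN LUNs to the new host)
gz1_dest # zonecfg -z zone1 "create -a /zone1/zonepath"
gz1_dest # zoneadm -z zone1 attach -u
gz1_dest # zoneadm -z zone1 boot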

Let's talk more about point #2.


How to Move the zonepath to a new Host?

There are several ways to create an archive of the zonepath. You can use the cpio or pax commands/utilities to archive your zonepath.

There are also several ways to transfer the archive to the new host. The mechanism used to transfer the zonepath from the source host to the destination depends on the local configuration. One can go for scp, FTP, or, if it's on ZFS, zfs send/receive, etc.
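
For example, a minimal cpio-over-scp sketch (the archive location and hostnames are just placeholders):

gz1_source # cd /zone1/zonepath
gz1_source # find . -print | cpio -ocB > /tmp/zone1.cpio
gz1_source # scp /tmp/zone1.cpio gz1_dest:/tmp/zone1.cpio
gz1_dest # mkdir -p /zone1/zonepath
gz1_dest # cd /zone1/zonepath && cpio -icdmB < /tmp/zone1.cpio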

In some cases, such as a SAN, the zonepath data might not actually move. The SAN might simply be reconfigured so the zonepath is visible on the new host. This is what we do in our environment & that's the reason I prefer to have zoneroot on SAN.

Try before you do

Starting from Solaris 10 5/08, you can perform a trial run before the zone is moved to the new machine by using the "no execute" option, -n.


Here are the details of how it actually works -

The zoneadm detach subcommand is used with the -n option to generate a manifest on a running zone without actually detaching the zone. The state of the zone on the originating system is not changed. The zone manifest is sent to stdout.

Then we can direct this output to a file or pipe it to a remote command to be immediately validated on the target host. The zoneadm attach subcommand is used with the -n option to read this manifest and verify that the target machine has the correct configuration to host the zone without actually doing an attach.

The zone on the target system does not have to be configured on the new host before doing a trial-run attach.

E.g.
gz1_source:/
# uname -m
sun4u

gz1_dest:/
# uname -m
sun4v

gz1_source:/
# zoneadm list -icv
  ID NAME     STATUS     PATH              BRAND     IP
   0 global   running    /                 native    shared
   7 zone1    running    /zone1/zonepath   native    shared

 
gz1_source:/
# zoneadm -z zone1 detach -n | ssh gz1_dest zoneadm -z zone1 attach -n -

The validation is output to the source host screen, which is stdout.

I hope this information will help me to get started with project work.

Tuesday, March 9, 2010

Solaris – Add/remove network interface to a running zone (dynamic Change)



This post describes how to add a network interface to a running non-global zone, without having to reboot the zone. The new interface will persist between reboots.

First you add the entry to the zone configuration. This is the part that lets it persist between reboots. This is done from the global zone:

# zonecfg -z zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=XXX.XXX.XX.XXX
zonecfg:zone1:net> set physical=bge0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

Now we have to manually add the new interface to the running zone. Do this from the global zone as well:

# ifconfig bge0 addif XXX.XXX.XX.XXX netmask XXX.XXX.X.X zone zone1 up

Created new logical interface bge0:3

Note: The ‘addif’ tells ifconfig to create a logical interface using the next available logical unit number.

# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet XXX.XXX.XX.XXX netmask ffffff00 broadcast XXX.XXX.XX.XXX
bge0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet XXX.XXX.XX.XXX netmask ffffff00 broadcast XXX.XXX.XX.XXX

That's it! You're done.

In case you want to remove the interface -

To remove the interface from a running zone, remove it from the global zone. You must first determine which logical interface (alias) you wish to remove.

# ifconfig bge0:3 down
# ifconfig bge0:3 unplumb
# zonecfg -z zone1
zonecfg:zone1> remove net address=XXX.XXX.XX.XXX
zonecfg:zone1> commit
zonecfg:zone1> exit

Done!

Saturday, January 16, 2010

Debug tip in case a container hangs during shutdown.


Sometimes if you have an NFS share mounted into an NGZ through the GZ and you initiate a shutdown of that container, it will often hang and get stuck in the shutting_down state. If the zone is hung on an NFS mount, you should be able to see it still mounted in the /etc/mnttab file in the global zone (grep nfs /etc/mnttab); it will be mounted under the zonepath. You should then be able to do a umount -f on that mountpoint from the global zone, and if you're really lucky the zone will finish shutting down.
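
For example (the NFS server and share path below are hypothetical):

global # grep nfs /etc/mnttab
nfssrv:/export/share   /zone1/zonepath/root/mnt/share   nfs   ...
global # umount -f /zone1/zonepath/root/mnt/share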

Also see if any processes are holding up the zone shutdown and try to kill them. Yet another good command is truss, which is helpful while debugging: when you initiate the shutdown it starts some processes, so you can simply truss the process ID and see what it is actually doing and where it is stuck.
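
A quick sketch of both checks (the PID is hypothetical):

global # ps -fz zone1          (list processes still running in the zone)
global # truss -fp 12345       (trace what a stuck process is actually doing)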

If the above tips don't work, you can use mdb to debug further -

Sun Container ref_count

# mdb -k
::walk zone | ::print zone_t zone_name zone_ref

The zone_ref > 1 means that something in the kernel is holding the zone.

# mdb -k
::fsinfo

# mdb -k
::kmem_cache | grep rnode
ffffffffa6438008 rnode_cache               0200 000000      656    70506
ffffffffa643c008 rnode4_cache              0200 000000      968        0

Then run -

ffffffffa6438008::walk kmem | ::print rnode_t r_vnode | ::vnode2path

See if this gives any hints towards a solution. The output from this command may show you a few files/filesystems which are still being held within the zone and are preventing it from shutting down.

In case nothing works out, you have to take a call and recycle the GZ server.

One important thing I came to know from this experience - the zsched process is always unkillable. It will only exit when instructed to by zoneadmd.

Monday, December 28, 2009

How to expand a Solaris Volume Manager filesystem which is exported to zones from the Global Zone

Ok, it's been a long time with no updates on the blog... Anyways, today I have some good information on - "How to expand a Solaris Volume Manager (metadevice) filesystem which is exported to zones from the Global Zone"

I have a unique system with a slightly different configuration than the others - a SPARC-Enterprise M4000 with 2 zones running on it. Here is the zone configuration example for one of them.

# zonecfg -z zone1 info
zonename: zone1
zonepath: /zone1/zonepath
brand: native
autoboot: true
bootargs:
pool: oracpu_pool
limitpriv: default,dtrace_proc,dtrace_user
scheduling-class:
ip-type: shared
[cpu-shares: 32]
fs:
        dir: /oracle
        special: /dev/md/dsk/d56
        raw: /dev/md/rdsk/d56
        type: ufs
        options: []
fs:
        dir: /oradata1
        special: /dev/md/dsk/d59
        raw: /dev/md/rdsk/d59
        type: ufs
        options: []
fs:
        dir: /oradata2
        special: /dev/md/dsk/d62
        raw: /dev/md/rdsk/d62
        type: ufs
        options: []
fs:
        dir: /oradata3
        special: /dev/md/dsk/d63
        raw: /dev/md/rdsk/d63
        type: ufs
        options: []
[...]


Ok, so here you can see that I have metadevices exported to the zone from the global zone. I need to expand one of the filesystems, say /oradata1, by XXG - so how am I going to perform this? Take a look at the procedure below to see how it can be done.

global:/
# zonecfg -z zone1 info fs dir=/oradata1
fs:
        dir: /oradata1
        special: /dev/md/dsk/d59
        raw: /dev/md/rdsk/d59
        type: ufs
        options: []
global:/
# metattach d59 Storage_LUN_ID 

global:/
# growfs -M /zone1/zonepath/root/oradata1 /dev/md/rdsk/d59


All of these operations need to be performed from the global zone.
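
To verify, something like this should do (a quick sketch):

global # metastat d59          (confirm the new component is attached)
global # zlogin zone1 df -h /oradata1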

Thursday, September 17, 2009

I was wrong... My UNIX Guru Alex showed me the way!!! - Adding capped-memory to a container "on-the-fly"

Yesterday I got a situation to handle - adding capped-memory to a running container. Before going for it I replied to the end user asking for container downtime to perform this task; however, later that evening I saw Alex's email in my mailbox explaining that "You don't need to reboot a Solaris container to increase its memory", with detailed step-by-step execution. I would like to take this opportunity to publicly say thanks a lot to Alex... I am blessed with such a wonderful UNIX Guru...

Before going for this task my assumptions were as follows - I was under the impression that the prctl command only has a temporary effect that does not survive a reboot. My understanding was that prctl is an "on-the-fly" way to temporarily set resource control assignments, and that a modified parameter only becomes permanent after a reboot.

Then Alex replied explaining how exactly it works...

prctl and rcapadm modify the running zone.

zonecfg defines the resource parameters of the zone when it boots.

So... To make a change dynamically you:
1) Update the zonecfg. The reason you do this is so that when rebooted it doesn't revert back to the old settings.
2) Use the prctl, rcapadm commands to modify the zone while it is online. The data you feed into prctl and rcapadm should match the changes you've made to zonecfg.


Below are the detailed steps to add capped-memory to a running container -

# zonecfg -z XXXXXX
zonecfg:XXXXXX> select capped-memory
zonecfg:XXXXXX:capped-memory> info
capped-memory:
physical: 1G
[swap: 2G]
[locked: 512M]
zonecfg:XXXXXX:capped-memory> set physical=2g
zonecfg:XXXXXX:capped-memory> set swap=3g
zonecfg:XXXXXX:capped-memory> info
capped-memory:
physical: 2G
[swap: 3G]
[locked: 512M]
zonecfg:XXXXXX:capped-memory> end
zonecfg:XXXXXX> exit

Now modify the zone's runtime settings:

XXXXXX:/
# rcapadm -z XXXXXX -m 2048m

XXXXXX:/
# sleep 60

XXXXXX:/
# rcapstat -z 1 1
    id zone       nproc    vm   rss    cap    at avgat    pg avgpg
     9 XXXXXX         -  434M  377M  2048M    0K    0K    0K    0K
    10 XXXXXX         -  452M  370M  2048M    0K    0K    0K    0K
    14 XXXXXX         -  532M  328M  2048M    0K    0K    0K    0K

XXXXXX:/
# prctl -n zone.max-swap -v 3g -t privileged -r -e deny -i zone XXXXXX

Then verify your settings have taken effect:

XXXXXX:/
# zlogin XXXXXX
[Connected to zone 'XXXXXX' pts/9]
Last login: Wed Sep 16 06:34:27 from XXX.XXX.XX.XX
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
WARNING: YOU ARE SUPERUSER on XXXXXX!!
Your shell is /usr/bin/ksh

XXXXXX:/
# top -c
load averages: 1.36, 0.67, 0.59; up 101+08:46:15 08:55:12
51 processes: 50 sleeping, 1 on cpu
CPU states: 89.6% idle, 4.4% user, 6.1% kernel, 0.0% iowait, 0.0% swap
Memory: 2048M phys mem, 177M free mem, 3072M swap, 2627M free swap

It taught me a new lesson, and at the same time I am still wondering "DID I KNOW THIS BEFORE, OR WAS I JUST LOST???" SHAME ON ME, VERY DISAPPOINTING. HOWEVER, THIS CONCEPT IS NOW HARDCODED IN MY LITTLE BRAIN….

Thanks Alex, thanks a lot.

Hope this will help someone, somewhere!

Tuesday, August 18, 2009

How to tell if a server is a global or non-global zone / How to tell if a Solaris zone is a whole root or sparse zone

On a day-to-day basis we often need to know whether the server we are working on is a GZ or an NGZ, and sometimes we need the same check for automation. The pkgcond command can be very helpful in such cases.

# pkgcond is_nonglobal_zone
# echo $?
1
# pkgcond is_global_zone
# echo $?
0

Where 1 is false and 0 is true. So here, since the pkgcond is_nonglobal_zone output is 1, it means this is a global zone!
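
That makes it easy to branch on in scripts; a minimal sketch (exit codes other than 0/1 indicate an error, so treat this as illustrative only):

#!/bin/sh
if pkgcond is_nonglobal_zone ; then
    echo "running inside a non-global zone"
else
    echo "running in the global zone"
fi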

-----------------------------------------------------------------------------------

# pkgcond
no condition to check specified; usage is:
pkgcond [-nv] <condition> [ <option(s)> ]

command options:
-n - negate results of condition test
-v - verbose output of condition testing

<condition> may be any one of:
can_add_driver [path]
can_remove_driver [path]
can_update_driver [path]
is_alternative_root [path]
is_boot_environment [path]
is_diskless_client [path]
is_global_zone [path]
is_mounted_miniroot [path]
is_netinstall_image [path]
is_nonglobal_zone [path]
is_path_writable path
is_running_system [path]
is_sparse_root_nonglobal_zone [path]
is_what [path]
is_whole_root_nonglobal_zone [path]

<option(s)> are specific to the condition used

pkgcond -?
- Shows this help message

The same approach can be used to find out whether a zone is a whole root or a sparse root zone.

# pkgcond is_whole_root_nonglobal_zone
# echo $?
0
# pkgcond is_sparse_root_nonglobal_zone
# echo $?
1


So here it clearly shows the zone is a whole root zone.

Tuesday, July 14, 2009

Simple and very basic, however useful for beginners - Zone creation with LOFS

bash-3.00# zonecfg -z New_Prod_Test
New_Prod_Test: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:New_Prod_Test> create
zonecfg:New_Prod_Test> set zonepath=/Zones/NEW_PROD_TEST
zonecfg:New_Prod_Test> set autoboot=true
zonecfg:New_Prod_Test> add net
zonecfg:New_Prod_Test:net> set address=10.50.8.95
zonecfg:New_Prod_Test:net> set physical=e1000g0
zonecfg:New_Prod_Test:net> end
zonecfg:New_Prod_Test> add fs
zonecfg:New_Prod_Test:fs> set dir=/usr/local
zonecfg:New_Prod_Test:fs> set special=/Zones/NEW_PROD_TEST/local
zonecfg:New_Prod_Test:fs> set type=lofs
zonecfg:New_Prod_Test:fs> set options=[rw,nodevices]
zonecfg:New_Prod_Test:fs> end
zonecfg:New_Prod_Test> verify
zonecfg:New_Prod_Test> commit
zonecfg:New_Prod_Test> exit