
Friday, July 31, 2009

Breaking root mirror in Sun Solaris

A few days back I posted notes on creating mirrors for root and the rest of the filesystems. It is equally important to know how to break them, so I thought I would share this info.

Breaking root mirror

1. #metastat d0 [The mirror and all of its submirrors should be present and 'Okay'.]

2. If breaking the root mirror (as in this example) do this:

#metaroot /dev/dsk/c0t0d0s0 # Alters /etc/system and /etc/vfstab so that
# '/' uses the physical device again; clean up
# afterwards if desired.
# For non-root filesystems, instead do this:

vi /etc/vfstab # Change the vfstab line specifying the mirror
# metadevice to instead use the actual device path

3. Detach the second submirror

#metadetach -f d0 d20 # Detaches submirror d20 from mirror d0, leaving
# '/' running on the original disk's submirror d10.

# NOTE: Before rebooting (next step) sanity check /etc/system and /etc/vfstab.
For /etc/system, make sure there is no uncommented 'rootdev' SDS
setting. For /etc/vfstab, make sure the device paths for '/' no longer
reference metadevices. System will not boot properly if metaroot did
not properly update these files.

#init 6

#metaclear -r d0 # Clears d0 and its attached submirror d10.
#metaclear d20 # Clears d20 (previously detached).
#metastat # No d0/d10/d20 metadevices should remain.

You are done breaking the root mirror.

Thursday, July 30, 2009

Oracle licensing

This may not be complete information, but here is what I was able to capture. As a System Administrator or Oracle DBA one should know these details, so I am posting them on the blog.

You pay per processor you run the Oracle software on; however, Oracle has a special definition of 'processor' that is consistent with Intel, AMD, HP and IBM counting but does not match Sun's.

Intel's Hyper-Threading technology, which makes one core look like two, counts as one processor for this purpose. Other soft partitioning technologies are treated differently; for example, the Solaris OS has a concept of Containers, similar to hard partitioning on an HP machine, yet Oracle does not recognize software partitioning with Solaris Containers prior to Solaris 10, and even then there are stipulations. Hard partitioning methods such as Sun's Domains and IBM's logical partitioning are recognized as legitimate methods to limit the amount of resources that can run the Oracle software.

Multi-core processors are priced as (number of cores)*(multi-core factor) processors, where the multi-core factor is:

0.25 for Sun UltraSPARC T1 processors (1.0 GHz or 1.2 GHz)
0.50 for other Sun UltraSPARC T1 processors (e.g. 1.4 GHz)
0.50 for Intel and AMD processors
0.75 for Sun UltraSPARC T2 processors
0.75 for all other multi-core processors
1.00 for single-core processors

The complete factor table can be found at -

For example, a Sun UltraSPARC T1 system with 4 x eight-core processors would require 4*8*0.25 = 8 licenses. (This is just an arithmetic example: Sun UltraSPARC T1 and T2 are not SMP capable; only the UltraSPARC T2 Plus is.) Similarly, an IBM AIX system with 4 x eight-core processors will require 4*8*0.75 = 24 licenses.
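The per-processor arithmetic above can be wrapped in a small helper. This is only a sketch (the function name is made up; Oracle rounds any fractional result up to the next whole license):

```shell
# Hypothetical helper: licenses = sockets * cores-per-socket * core factor,
# with any fraction rounded up to a whole license.
licenses() {
  awk -v s="$1" -v c="$2" -v f="$3" 'BEGIN {
    n = s * c * f
    printf "%d\n", (n == int(n)) ? n : int(n) + 1
  }'
}

licenses 4 8 0.25   # UltraSPARC T1 example above -> 8
licenses 4 8 0.75   # IBM example above           -> 24
```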

For more details -

License detection under Oracle 

Number of users and CPU/Processors:

SQL>select * from v$license; 

Database edition installed:

SQL>select banner from v$version where BANNER like '%Edition%'; 

Oracle Partitioning installed:

SQL>select decode(count(*), 0, 'No', 'Yes')
from dba_part_tables
where owner not in ('SYSMAN', 'SH', 'SYS', 'SYSTEM') and rownum = 1;

Oracle Spatial installed:

SQL>select decode(count(*), 0, 'No', 'Yes')
from all_sdo_geom_metadata where rownum = 1;

Oracle RAC installed:

SQL>select decode(count(*), 0, 'No', 'Yes')
from v$active_instances where rownum <= 2;

Fair Share Scheduler

Before zones, the Fair Share Scheduler (FSS) was used to control how many CPU cycles the system assigns to active applications or workloads. The administrator could create projects to identify those workloads and assign CPU shares to them. If these projects then compete for the same set of CPUs, the FSS scheduler guarantees that each project gets a fraction of all CPU cycles proportional to the ratio between its number of shares and the total number of shares of all active projects.
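To make the share arithmetic concrete (the share counts here are made up, not from this post): a project holding 15 shares while all other active projects hold 25 shares combined is entitled to 15/40 = 37.5% of the CPU under contention.

```shell
# Sketch: a project's CPU entitlement under FSS is
#   100 * (its shares) / (total shares of all active projects).
cpu_entitlement() {
  awk -v mine="$1" -v others="$2" \
    'BEGIN { printf "%.1f\n", 100 * mine / (mine + others) }'
}

cpu_entitlement 15 25   # -> 37.5
```

Note that FSS only enforces these ratios when projects actually compete; an otherwise idle CPU can still be used fully by any single project.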

NOTE: The FSS scheduling class is not enabled by default. FSS can be set as the default scheduling class for the whole system by using the -d option of the dispadmin command and rebooting:

global# dispadmin -d FSS
global# reboot

If a reboot is not desirable, all processes can be moved to the FSS scheduling class manually with the priocntl command:

global# priocntl -s -c FSS -i class TS
global# priocntl -s -c FSS -i class IA
global# priocntl -s -c FSS -i pid 1

To verify, you can run the following command and look at the 4th (CLS) column:

global# ps -cafe
     UID    PID   PPID  CLS PRI    STIME TTY      TIME CMD
    root 101050 101039  FSS  59 17:14:59 ?        0:00 /sbin/init
    root 101169 101167  FSS  29 17:15:07 ?        0:00 /usr/lib/saf/ttymon

If you want to use FSS to divide CPU cycles between zones, here's what you need to do. You can assign shares to zones permanently with the zonecfg command, or change the assignment dynamically with the prctl command. Both of these operations can only be done from the global zone. In the following example, zone "zone1" gets 15 shares statically assigned to it with the zonecfg command:

global# zonecfg -z zone1
zonecfg:zone1> add rctl
zonecfg:zone1:rctl> set name=zone.cpu-shares
zonecfg:zone1:rctl> add value (priv=privileged,limit=15,action=none)
zonecfg:zone1:rctl> end
zonecfg:zone1> exit

This way, CPU shares will be automatically assigned to the zone at zone boot time. If the zone is already running, CPU shares can be assigned to it without rebooting by using prctl command:

global# prctl -r -n zone.cpu-shares -v 15 -i zone zone1

To confirm that the number of CPU shares has changed, prctl can be used from within zone "zone1":

zone1# prctl -n zone.cpu-shares $$
process: 101439: zsh
NAME             PRIVILEGE     VALUE  FLAG  ACTION  RECIPIENT
zone.cpu-shares
                 privileged       15     -  none            -
                 system        65.5K   max  none            -

Setting up Solaris Volume Manager for the root slice only

1. Create identical partitioning on the second disk
#dd if=/dev/rdsk/c0t0d0s2 of=/dev/rdsk/c0t1d0s2 count=16
(This copies the disk label verbatim; #prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2 is the cleaner way to do the same thing.)
2. Setup the state databases with 3 backups per slice on each disk
#metadb -a -f -c 3 c0t0d0s7 c0t1d0s7

Description     Mirror Name     Device Name
1st submirror   d10             c0t0d0s0
2nd submirror   d20             c0t1d0s0
Mirror          d0              /dev/md/dsk/d0

1. Create submirrors for the mirrors
a) #metainit -f d10 1 1 c0t0d0s0
b) #metainit d0 -m d10
c) #metainit d20 1 1 c0t1d0s0

2. Make a backup of the /etc/vfstab file

3. Let the metaroot command make the /etc/vfstab and /etc/system changes for you
#metaroot d0

4. Run lockfs to prevent problems
#lockfs -fa

5. Shutdown the server
#/usr/sbin/shutdown -y -g0 -i0

6. From the boot prompt, run
OK>show-disks

7. Pick your mirrored disk from the list, then set up an alias like so
OK>nvalias mirror ^Y (that is a Ctrl-Y, which pastes the device path selected in show-disks)

8. Change your boot-device to first try the normal disk alias,
then use your mirror disk
OK>setenv boot-device disk mirror

9. Reset/reboot the server

10. Attach the submirror to the metamirror
#metattach d0 d20
(NOTE: There will be lots of disk I/O)

11. Do ls and copy the info down as this is the alternate boot path
#ls -l /dev/rdsk/c0t1d0s0

NOTE: With an eye towards recovery in case of a future disaster it may be a good idea to find out the physical device path of the root partition on the second disk in order to create an Open Boot PROM (OBP) device alias to ease booting the system if the primary disk fails. In order to find the physical device path, simply do the following:

# ls -l /dev/dsk/c0t1d0s0
This should return something similar to the following:

/dev/dsk/c0t1d0s0 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@1,0:a
Using this information, create a device alias using an easy to remember name such as altboot. To create this alias, do the following in the Open Boot PROM:

ok nvalias altboot /sbus@3,0/SUNW,fas@3,8800000/sd@1,0:a

If you only have two mirrored root disks, put this setting in your /etc/system:

set md:mirrored_root_flag=1

All right, so you may need to do the same for the rest of the filesystems, like /var, /home etc.

For all slices except root
Description     Mirror Name     Device Name
1st submirror   d40             c0t0d0s4
2nd submirror   d50             c0t1d0s4
Mirror          d3              /dev/md/dsk/d3

1. Create submirrors for the mirrors
a) #metainit -f d40 1 1 c0t0d0s4
b) #metainit d3 -m d40
c) #metainit d50 1 1 c0t1d0s4

2. Make a backup of the /etc/vfstab file before editing it

3. Make the following change in the /etc/vfstab file
/dev/md/dsk/d3 /dev/md/rdsk/d3 /var ufs 1 no logging

4. #reboot

5. Attach the submirror to the metamirror
#metattach d3 d50
(NOTE: There will be lots of disk I/O)

Just for reference, here is a sample configuration.

mirror configuration example
d0 - mirror of /, composed of two submirrors:
d20 - c0t0d0s0 (master)
d10 - c0t1d0s0 (replica)
d1 - mirror of /usr, composed of two submirrors:
d21 - c0t0d0s1 (master)
d11 - c0t1d0s1 (replica)
d3 - mirror of /var, composed of two submirrors:
d23 - c0t0d0s4 (master)
d13 - c0t1d0s4 (replica)
d4 - mirror of swap, composed of two submirrors:
d24 - c0t0d0s6 (master)
d14 - c0t1d0s6 (replica)

I hope it will help someone!

Thursday, July 23, 2009

Tiny tip on - Viewing run time logs on VT100 terminal.

A 9600-baud serial connection, and a VT100 terminal in particular, can behave very strangely, and most folks find it difficult to work on. If you need to analyze logs in real time, tail -f log_file_name sometimes misbehaves on such a terminal; use the loop below to view them properly:

#while :; do
>tail -5 /var/log/log.log
>sleep 10
>done

This will redraw the last few lines every 10 seconds, letting you view the logs properly and know when they stop growing!

How to check that your root FS is mirrored using SVM

There are at least two ways; here are the two I know:

1. metastat, metastat -p
2. grep root /etc/system [O/P - rootdev:/pseudo/md@0:0,0,blk]

Oracle Parameters on Solaris 10

With the availability of the Solaris 10 operating system, the way IPC facilities (e.g., shared memory, message queues, etc.) are managed changed. In previous releases of the Solaris operating system, editing /etc/system was the recommended way to increase the value of a given IPC tunable. With the release of Solaris 10, IPC tunables are now managed through the Solaris resource manager. The resource manager makes each tunable available through one or more resource controls, which provide an upper bound on the size of a given resource.

A diagram of recommended parameter values for Oracle on the Sun Solaris 10 OS appeared here. [diagram missing]
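As a sketch of the resource-control style this describes (the project setup and values below are commonly cited recommendations, not taken from this post; check Oracle's install guide for your release):

```shell
# Sketch only: create a project for the oracle user and attach
# resource controls that replace the old /etc/system IPC tunables.
# project.max-shm-memory supersedes shmsys:shminfo_shmmax.
projadd -c "Oracle" -U oracle \
  -K "project.max-shm-memory=(priv,4gb,deny)" \
  -K "project.max-sem-ids=(priv,100,deny)" \
  user.oracle

# Verify the values seen by processes running in that project:
prctl -n project.max-shm-memory -i project user.oracle
```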

Tuesday, July 21, 2009

Sun Firmware Upgrade for LOM/ALOM

I recently came across this task so thought of putting on the blog.

1. Use the telnet, rlogin or ssh command through the network to log into the
system as superuser.

Caution: Do not attempt this procedure while logged into the system
through the SERIAL MGT port.

2. Change directories as follows:

# cd /usr/platform/`uname -i`/lib

3. If there is no images subdirectory, create one:

# mkdir images

4. Change to the images directory:

# cd images

5. Place the firmware tar file (here, ALOM_1.6.9_fw_hw0.tar.gz) in the images directory.

6. Unpack the tar file:

# gzcat ALOM_1.6.9_fw_hw0.tar.gz | tar xf -

The following files will be created:
README (the firmware readme)
Legal/ (directory containing Licence, Entitlement and Third Party Readmes)
alombootfw (boot image file)
alommainfw (main image file)

7. Load the boot image file alombootfw into the System Controller hardware:
# /usr/platform/`uname -i`/sbin/scadm download boot alombootfw

8. When the scadm utility completes, wait 60 seconds.

9. Load the main image file alommainfw into the System Controller hardware:

# /usr/platform/`uname -i`/sbin/scadm download alommainfw

Approximately 120 seconds after the scadm utility completes, ALOM
is available for use.

10. Delete the tar file:

# rm ALOM_1.6.9_fw_hw0.tar.gz

Monday, July 20, 2009

Check Veritas Volume Manager Version.

I am pretty new to Veritas Volume Manager, and just a few minutes back I learned how to check its version. Sharing the info with you all -

pkginfo -l VRTSvxvm |grep VERSION |awk '{print $2}' |awk -F, '{print $1}'


# pkginfo -l VRTSvxvm |grep VERSION
VERSION: 4.0,REV=12.06.2003.01.35

# pkginfo -l VRTSvxvm |grep VERSION |awk '{print $2}'
4.0,REV=12.06.2003.01.35

# pkginfo -l VRTSvxvm |grep VERSION |awk '{print $2}' |awk -F, '{print $1}'
4.0
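If you want to see what each awk stage does without the package installed, you can replay the pipeline on a sample VERSION line (the version string here is just the one shown above):

```shell
# Feed a sample pkginfo VERSION line through the same awk stages.
line="VERSION:  4.0,REV=12.06.2003.01.35"
echo "$line" | awk '{print $2}'                        # -> 4.0,REV=12.06.2003.01.35
echo "$line" | awk '{print $2}' | awk -F, '{print $1}' # -> 4.0
```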

Thursday, July 16, 2009

Tuesday, July 14, 2009

Forwarding AIX Error Messages to a Central Host

System administrators should check AIX's error log daily to look for problems that might cause an outage. The command to check the error log is "errpt" or "smit errpt".

Checking the error logs can be time-consuming if you support a large number of hosts. Here's a procedure to automatically send error log entries to a central host. The procedure involves creating an ODM entry that runs the "logger" command whenever an error is logged. The "logger" command sends the error message to the local syslog daemon, which forwards it to a central host.

On each AIX host that you want to monitor the error log:

1. Create an ODM errnotify entry to run the "logger" command whenever an error is logged (the "syslog1" name below is arbitrary):

# vi /tmp/syslog.add
errnotify:
        en_name = "syslog1"
        en_persistenceflg = 1
        en_method = "logger -p notice Msg from Error Log: $(errpt -a -l $1 | grep -v 'ERROR_ID TIMESTAMP')"

2. Add the entry to ODM

# odmadd /tmp/syslog.add

3. Add a syslog entry to forward "notice" priority messages to remote host "centhost"

# vi /etc/syslog.conf
*.notice @centhost

4. Refresh the syslog daemon to pick up the new entry
# refresh -s syslogd

On the central host "centhost" where you want to collect error logs:

1. Add a line to the syslog.conf file that saves the messages to a file

# vi /etc/syslog.conf
*.notice /var/central_syslog.txt

2. Create an empty log file (file must exist for syslog to use it).

# touch /var/central_syslog.txt

3. Refresh the syslog daemon to pick up the new entry

# refresh -s syslogd

There are multiple variations on forwarding error messages. For example, you can email error notifications instead. To do so, skip the syslog steps and change the en_method in the errnotify stanza to something like (substitute a real recipient):

en_method = "errpt -a -l $1 | mail -s 'Error Log' root"

ZFS and Swap space

Hi there...

A few days back I came across a situation where I was told to create a swap device on a ZFS filesystem. Being new to ZFS, I did not know how to do this at all, so after some research on the internet I found some good material and thought I would share it with you all -

NOTE - To set up a swap area, create a ZFS volume of a specific size and then enable swap on that device. Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.

ZFS volumes are identified as devices in the /dev/zvol/{dsk,rdsk}/<pool>/<volume> directory.

E.g. - /dev/zvol/dsk/zfsnode1-zpool/swap

So here I wanted to add 32 GB of swap space -

# zfs create -V 32g -b 8K zfsnode1-zpool/swap
# zfs set refreservation=32g zfsnode1-zpool/swap
# swap -a /dev/zvol/dsk/zfsnode1-zpool/swap
# swap -l

>>> /etc/vfstab <<<
/dev/zvol/dsk/zfsnode1-zpool/swap    -       -       swap    -       no      -
NOTE: Setting the reservation is important, particularly if you plan to make the change permanent, e.g. by adding the new zvol as a swap entry in /etc/vfstab. Without it, ZFS does not reserve the space for swapping, so the swap system may believe there is space that isn't actually there.

NOTE: The -b option sets the volblocksize to improve swap performance by aligning the volume I/O units on disk to the size of the host architecture memory page size (4 KB on x86 systems and 8KB on SPARC, as reported by the pagesize command.)

It is also possible to grow the existing swap volume. To do so, set a new size and refreservation for the existing volume like this:

# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2g rpool/swap
# zfs set refreservation=2g rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap

That's it! Hope this helps...


1. Do you know how the PVID is generated by AIX?

ANS: The PVID is a combination of the machine's serial number (from the system EPROMs) and the date the PVID was generated. This combination makes the chance of duplicate PVIDs extremely low.
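Not in the original Q&A, but two stock AIX commands for actually looking at PVIDs (the hdisk name is an example):

```shell
# List every physical volume with its PVID and volume group:
lspv

# The PVID is stored on the disk itself at offset 0x80 of the first
# sector; dump 16 bytes from there in hex to see it raw:
lquerypv -h /dev/hdisk0 80 10
```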

2. How do you change the MTU (Maximum Transmission Unit) in AIX?

ANS: chdev -l en1 -a mtu=<new_size> - by default the MTU is 1500 bytes.


3. How do you recover a deleted file on AIX?

ANS: I tried this, however because of poor maths I failed to do it... A little maths is required here, since you have to convert between binary, hex and decimal numbers.

If a file is deleted from the system, the filesystem blocks composing that file still exist, but are no longer allocated. As long as no new files are created or existing files extended within the same filesystem, the blocks will remain untouched. It is possible to reallocate the blocks to the previous file using the "fsdb" command (filesystem debugger).

Steps to recover a deleted file:
1) "ls -id {dir}" (where dir is directory where file resided) Record INODE number for next step.
2) Unmount the filesystem.
3) "fsdb /{Mountpoint}" or "fsdb /dev/{LVname}" (where Mountpoint is the filesystem mount point, and LVname is the logical volume name of the filesystem)
4) "{INODE}i" (where INODE is the inode number recorded in step 1) This will display the inode information for the directory. The field a0 contains the block number of the directory. The following steps assume only field a0 is used. If a value appears in a1, etc, it may be necessary to repeat steps #5 and #6 for each block until the file to be recovered is found.
5) "a0b" (moves to block pointed to by field "a0" of this inode)
6) "p128c" (prints 128 bytes of the directory in character format) Look for the missing filename. If not seen, repeat this step until the filename is found. Record the address where the filename begins. Also record the address where the PRIOR filename begins. If the filename does not appear, return to step #5, selecting a1b, a2b, etc.
Note that the address of the first field is shown to the far left. Increment the address by one for each position to the right, counting in octal.
7) "a0b" (moves to block pointed to by field "a0" of this inode) If the filename was found in block 1, use a1b instead, etc.
8) "p128e" (prints first 128 bytes in decimal word format) Find the address of the file to recover (as recorded in step 6) in the far left column. If address is not shown, repeat until found.
9) Record the address of the file which appeared immediately PRIOR to the file you want to recover.
10) Find the ADDRESS of the record LENGTH field for the file in step #9 assuming the following format:
10) Find the ADDRESS of the record LENGTH field for the file in step #9, assuming the following directory entry layout (each x is a 2-byte word):

    {ADDRESS}:  x  x  x  x  x  x  x  x ...
                +--+  |  |  +-- filename ...
             inode #  |  +-- filename length
                      +-- record LENGTH
Note that the inode number may begin at any position on the line. Note also that each number represents two bytes, so the address of the LENGTH field will be `{ADDRESS} + (#hops * 2) + 1'
11) Starting with the first word of the inode number, count in OCTAL until you reach the inode number of the file to be restored, assuming each word is 2 bytes.
12) "0{ADDRESS}B={BYTES}" (where ADDRESS is the address of the record LENGTH field found in step #10, and BYTES is the number of bytes [octal] counted in step #11)
13) If the value found in the LENGTH field in step #10 is greater than 255, also type the following:
"0{ADDRESS-1}B=0" (where ADDRESS-1 is one less than the ADDRESS recorded in step #10) This is necessary to clear out the first byte of the word.
14) "q" (quit fsdb)
15) "fsck {Mountpoint}" or "fsck /dev/{LVname}" This command will return errors for each recovered file asking if you wish to REMOVE the file. Answer "n" to all questions. For each file that is listed, record the associated INODE number.
16) "fsdb /{Mountpoint}" or "fsdb /dev/{LVname}"
17) "{INODE}i.ln=1" (where INODE is an inode number recorded in step #15) This will change the link count for the inode associated with the recovered file. Repeat this step for each file listed in step #15.
18) "q" (quit fsdb)
19) "fsck {Mountpoint}" or "fsck /dev/{LVname}" The REMOVE prompts should no longer appear. Answer "y" to all questions pertaining to fixing the block map, inode map, and/or superblock.

If someone succeeds with this procedure, please do let me know - we all know how difficult it is to recover deleted files on UNIX!!!

Simple and very basic, but useful for beginners - zone creation with LOFS

bash-3.00# zonecfg -z New_Prod_Test
New_Prod_Test: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:New_Prod_Test> create
zonecfg:New_Prod_Test> set zonepath=/Zones/NEW_PROD_TEST
zonecfg:New_Prod_Test> set autoboot=true
zonecfg:New_Prod_Test> add net
zonecfg:New_Prod_Test:net> set address=
zonecfg:New_Prod_Test:net> set physical=e1000g0
zonecfg:New_Prod_Test:net> end
zonecfg:New_Prod_Test> add fs
zonecfg:New_Prod_Test:fs> set dir=/usr/local
zonecfg:New_Prod_Test:fs> set special=/Zones/NEW_PROD_TEST/local
zonecfg:New_Prod_Test:fs> set type=lofs
zonecfg:New_Prod_Test:fs> set options=[rw,nodevices]
zonecfg:New_Prod_Test:fs> end
zonecfg:New_Prod_Test> verify
zonecfg:New_Prod_Test> commit
zonecfg:New_Prod_Test> exit

ZFS - most frequently used commands.

Below are the commands that we use on a daily basis for ZFS filesystem administration.

# Some reminders on command syntax
root@server:# zpool create oradb-1 c0d0s0
root@server:# zpool create oradb-1 mirror c0d0s3 c1d0s0
root@server:# zfs create oradb-1/oracle-10g
root@server:# zfs set mountpoint=/u01/oracle-10g oradb-1/oracle-10g
root@server:# zfs create oradb-1/home
root@server:# zfs set mountpoint=/export/home oradb-1/home
root@server:# zfs create oradb-1/home/oracle
root@server:# zfs set compression=on oradb-1/home
root@server:# zfs set quota=1g oradb-1/home/oracle
root@server:# zfs set reservation=2g oradb-1/home/oracle
root@server:# zfs set sharenfs=rw oradb-1/home

# to set the filesystem block size to 16k
root@server:# zfs set recordsize=16k oradb-1/home

zpool list, zpool status and zfs list are also useful commands for filesystem monitoring.

I have lots of things to add to this; as and when I get hands-on experience I will add them to the blog.