Adding a LUN/External Disk to HP-UX, and Expanding the VG and LV.
#ioscan -fnCdisk > /tmp/ioscan1.txt
#insf -eCdisk
#ioscan -fnCdisk > /tmp/ioscan2.txt
#diff /tmp/ioscan1.txt /tmp/ioscan2.txt
#ioscan -fnC disk
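The scan, rescan, and diff steps above follow a general pattern that can be wrapped in a small helper. A POSIX-shell sketch (on HP-UX you would pass the `ioscan`/`insf` command strings shown above; the generic wrapper itself is an illustration, not an HP-UX tool):

```shell
#!/bin/sh
# Snapshot a scan command's output, run a rescan, snapshot again, and
# print only the lines that appeared. On HP-UX, call it as:
#   snapshot_diff "ioscan -fnCdisk" "insf -eCdisk"
snapshot_diff() {
    scan_cmd=$1
    rescan_cmd=$2
    before=$(mktemp)
    after=$(mktemp)
    eval "$scan_cmd" > "$before"
    eval "$rescan_cmd" > /dev/null 2>&1
    eval "$scan_cmd" > "$after"
    # Lines present only in the second snapshot are the new devices.
    diff "$before" "$after" | sed -n 's/^> //p'
    rm -f "$before" "$after"
}
```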
Now run SAM and check for an "Unused" hardware path; you will see something like the following:
Hardware Path Number of Paths Use Volume Group Total MB DES
1/10/0/0.115.10.19.98.1.3 2 Unused -- 8192 IBM
# diskinfo /dev/rdsk/c33t1d3
SCSI describe of /dev/rdsk/c33t1d3:
vendor: IBM
product id: 2107900
type: direct access
size: 8388608 Kbytes
bytes per sector: 512
# vgdisplay -v vg01
--- Volume groups ---
VG Name /dev/vg01
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 10
Open LV 10
Max PV 32
Cur PV 4
Act PV 4
Max PE per PV 4160
VGDA 8
PE Size (Mbytes) 32
Total PE 1020
Alloc PE 941
Free PE 79
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
--- Logical volumes ---
..... Logical Volumes Details
--- Physical volumes ---
....... Physical Volumes Details
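Free space in the VG is Free PE x PE Size, which here is 79 x 32 = 2528 MB. A sketch that computes this from vgdisplay-style text (the here-document below stands in for real `vgdisplay -v vg01` output):

```shell
# Multiply "Free PE" by "PE Size (Mbytes)" to get the VG's free space in MB.
vg_free_mb() {
    awk '/PE Size \(Mbytes\)/ {pe = $NF}
         /Free PE/           {free = $NF}
         END                 {print free * pe}'
}

vg_free_mb <<'EOF'
PE Size (Mbytes)            32
Total PE                    1020
Alloc PE                    941
Free PE                     79
EOF
```

On a real host you would pipe the command in directly: `vgdisplay -v vg01 | vg_free_mb`.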
Okay, we have just added the new LUN/disk to the system, so the next task is to add the disk to the VG.
1. Open SAM.
2. Go to Disks and File Systems.
3. Select Volume Groups.
4. Arrow down to the volume group you want to extend (the one from bdf) and press the space bar to select it.
5. Tab once to reach the menu at the top, arrow over to "Actions", and press Enter.
6. Select "Extend" from the Actions menu.
7. Select "Select Disk(s)..." and press Enter.
8. Select the appropriate disk with the space bar and choose OK, then OK again to extend the volume group.
9. Exit SAM.
If you prefer not to use SAM:
#pvcreate -f /dev/rdsk/c0t4d0
#vgextend vg01 /dev/dsk/c0t4d0
Verify with # vgdisplay -v vg01
Here we go, the new disk is added to the existing VG, so the next task is to extend the LV.
NOTE: You can grow the file system on the fly provided that you have OnLineJFS support. To check, execute # swlist -l product '*JFS'
# bdf /oradata
Filesystem kbytes used avail %used Mounted on
/dev/vg01/lvol3 6815744 6377253 411091 94% /oradata
current kbytes / 1024 = current MB
i.e. 6815744 / 1024 = 6656 MB
Here is the trick: add the current size in MB to the MB you are adding to get the new size. Here I want to extend the LV by 5 GB, i.e. 5120 MB, so I need to pass 6656 + 5120 = 11776 MB.
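The same arithmetic as a tiny shell sketch (the kbytes value is the sample bdf figure above; on a real host you would take it from `bdf <mountpoint>`):

```shell
# Compute the new LV size in MB: current size (from bdf, in KB) plus a 5 GB bump.
current_kb=6815744   # "kbytes" column from bdf /oradata (sample value)
add_mb=5120          # extending by 5 GB
current_mb=$((current_kb / 1024))
new_mb=$((current_mb + add_mb))
echo "$new_mb"       # value to pass to lvextend -L and fsadm -b (with M suffix)
```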
# lvextend -L 11776 /dev/vg01/lvol3
# fsadm -F vxfs -b 11776M /oradata
# bdf /oradata
Filesystem kbytes used avail %used Mounted on
/dev/vg01/lvol3 12058624 6378538 5325086 55% /oradata
Here is a small tip for getting HP Fibre Channel card info:
#print_manifest | grep -i Fibre
fc 8/8/1/0 td HP Tachyon TL/TS Fibre Channel Mass Storage Adapter
fc 8/12/1/0 td HP Tachyon TL/TS Fibre Channel Mass Storage Adapter
FibrChanl-00 B.11.11.09 PCI/HSC FibreChannel;Supptd HW=A6684A,A6685A,A5158A,A6795A
Hello Friends, this is Nilesh Joshi from Pune, India. By profession I am a UNIX systems administrator with a proven career track in UNIX systems administration. This blog is written from both my research and my experience. The methods I describe herein are those that I have used and that have worked for me. It is highly recommended that you do further research on this subject. If you choose to use this document as a guide, you do so at your own risk. I wish you great success.
Tuesday, August 25, 2009
Sun Solaris MPxIO
In general, multipathing is a method for redundancy and automatic fail-over that provides at least two physical paths to a target resource. Multipathing allows for re-routing in the event of component failure, enabling higher availability for storage resources. Multipathing also allows for the parallel routing of data, which can result in faster throughput and increased scalability.
The Solaris I/O multipathing feature is a multipathing solution for storage devices that is part of the Solaris operating environment. This feature was formerly known as Sun StorEdge Traffic Manager (STMS) or MPxIO.
Solaris Fibre Channel and Storage Multipathing software enables FC connectivity for the Solaris hosts. The software resides on the server and identifies the storage and switch devices on your SAN. It allows you to attach either loop or fabric SAN storage devices while providing a standard interface with which to manage them.
Multipathing is disabled by default for FC devices on SPARC based systems, but is enabled by default on x86 based systems.
Note - The multipathing feature is not available for parallel SCSI devices but is available for FC disk devices. Multipathing is not supported on tape drives or libraries or on IP over FC.
Example device name with multipath disabled:
/dev/dsk/c1t1d0s0
Example device name with multipath enabled:
/dev/dsk/c3t2000002037CD9F72d0s0
Well, we have learned enough theory; now let's see how to enable it.
Enabling MPxIO -
MPxIO's configuration file is located at /kernel/drv/fp.conf. This file is used to enable MPxIO and, if needed, to exclude the internal disks from MPxIO.
The stmsboot command is also used to enable, disable, or update the MPxIO configuration.
Enable MPxIO in /kernel/drv/fp.conf
1. Edit the /kernel/drv/fp.conf file and uncomment the entry below:
mpxio-disable="no";
2. After editing fp.conf and activating the above entry, execute the command below:
#stmsboot -u <<<<< Caution: this asks for a reboot, so enabling MPxIO requires server downtime.
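The fp.conf edit can itself be scripted. A sketch that normalizes the entry on stdin (a sed filter is shown rather than an in-place edit so the change can be previewed safely; `mpxio-disable` is the standard fp.conf entry name):

```shell
# Uncomment a commented-out mpxio-disable entry and force its value to "no".
enable_mpxio_line() {
    sed -e 's/^#[# ]*mpxio-disable/mpxio-disable/' \
        -e 's/mpxio-disable="yes"/mpxio-disable="no"/'
}

printf '# mpxio-disable="yes";\n' | enable_mpxio_line
# prints: mpxio-disable="no";
```

To apply for real you would filter /kernel/drv/fp.conf through this into a temp file, inspect the result, and copy it back before running stmsboot.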
OK. After the reboot you will want to verify whether MPxIO is actually running.
A quick-and-dirty way to check is to simply run format and see whether the external disks show up with the long MPxIO-style names.
# format
Searching for disks...done
c2t60050768018A8023B80000000000013Ad0: configured with capacity of 12.00GB
c2t60050768018A8023B80000000000013Bd0: configured with capacity of 12.00GB
c2t60050768018A8023B80000000000013Cd0: configured with capacity of 12.00GB
c2t60050768018A8023B80000000000013Dd0: configured with capacity of 16.00GB
c2t60050768018A8023B80000000000013Ed0: configured with capacity of 16.00GB
c2t60050768018A8023B80000000000013Fd0: configured with capacity of 16.00GB
There are various commands for managing your storage disks; a few of them are listed below.
1. To Display Paths
# mpathadm list lu
/dev/rdsk/c2t60050768018A8023B80000000000013Fd0s2
Total Path Count: 8
Operational Path Count: 8
2. Show detailed information about a disk/LUN
#mpathadm show lu /dev/rdsk/c2t60050768018A8023B80000000000013Fd0s2
<<<<<<< Shows more details for the specific LUN >>>>>>>
3. Display World Wide Port Names / FC card firmware level:
# fcinfo hba-port
HBA Port WWN: 10000000c9446e11 <----------- WWPN
OS Device Name: /dev/cfg/c4
Manufacturer: Emulex
Model: LP9002L
Firmware Version: 3.90a7 (C2D3.90A7)
FCode/BIOS Version: Boot:3.20 Fcode:1.40a0
Serial Number: BG50103047
Driver Name: emlxs
Driver Version: 2.31p (2008.12.11.10.30)
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 1Gb
Node WWN: 20000000c9446e11
#fcinfo hba-port -l <<<<< Good command for debugging >>>>>>
#fcinfo remote-port -sl -p 10000000c9446e11
This lists all remote ports along with link statistics and SCSI target information. It is one of the best commands I have found for troubleshooting/debugging.
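If you only need the port WWNs (for example, to hand to the storage team for zoning), a one-line filter does it. The sample line piped in below is from the fcinfo output above; on Solaris you would run `fcinfo hba-port | hba_wwns`:

```shell
# Print just the port WWNs from `fcinfo hba-port`-style output.
hba_wwns() {
    awk -F': ' '/HBA Port WWN/ {print $2}'
}

printf 'HBA Port WWN: 10000000c9446e11\nOS Device Name: /dev/cfg/c4\n' | hba_wwns
# prints: 10000000c9446e11
```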
4. Display all LUNs visible to MPxIO
# luxadm probe
No Network Array enclosures found in /dev/es
Found Fibre Channel device(s):
Node WWN:5005076801000477 Device Type:Disk device
Logical Path:/dev/rdsk/c2t60050768018A8023B80000000000013Ad0s2
Node WWN:5005076801000477 Device Type:Disk device
Logical Path:/dev/rdsk/c2t60050768018A8023B80000000000013Bd0s2
Node WWN:5005076801000477 Device Type:Disk device
Logical Path:/dev/rdsk/c2t60050768018A8023B80000000000013Cd0s2
Node WWN:5005076801000477 Device Type:Disk device
Logical Path:/dev/rdsk/c2t60050768018A8023B80000000000013Dd0s2
Node WWN:5005076801000477 Device Type:Disk device
Logical Path:/dev/rdsk/c2t60050768018A8023B80000000000013Ed0s2
Node WWN:5005076801000477 Device Type:Disk device
Logical Path:/dev/rdsk/c2t60050768018A8023B80000000000013Fd0s2
# luxadm -e port
/devices/pci@1e,600000/lpfc@2/fp@0,0:devctl CONNECTED
>>>>> shows all hba ports and which ones are connected. >>>>>>>
Run the cfgadm command to verify the paths to the LUNs:
# cfgadm -al -o show_FCP_dev
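To pull just the device paths out of `luxadm probe`-style output (the shortened sample path below is illustrative; real paths look like the long ones above):

```shell
# Extract the logical device paths from luxadm probe output.
lun_paths() {
    sed -n 's/^ *Logical Path:\(.*\)$/\1/p'
}

printf 'Node WWN:5005076801000477 Device Type:Disk device\n  Logical Path:/dev/rdsk/c2t6005d0s2\n' | lun_paths
# prints: /dev/rdsk/c2t6005d0s2
```

Piping through `wc -l` afterwards gives a quick LUN count to compare against what storage says they presented.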
There are several other utilities for managing and viewing your storage disks; the above are a few well-known ones. I hope someone benefits from this article.
=================================================================================
Find HBA's WWN
#prtpicl -v -c scsi-fcp | grep wwn (On all versions)
# prtconf -vp | grep -i ww
One small shell script I found handy:
#!/bin/sh
# Print the WWN for each fabric-connected FC controller.
for i in `cfgadm | grep fc-fabric | awk '{print $1}'`; do
    dev="`cfgadm -lv $i | grep devices | awk '{print $NF}'`"
    wwn="`luxadm -e dump_map $dev | grep 'Host Bus' | awk '{print $4}'`"
    echo "$i: $wwn"
done
Monday, August 24, 2009
Serial Number in EEPROM - Sun Solaris
sneep (Serial Number in EEPROM) is a nice utility for Solaris that can retrieve the Chassis Serial Number (CSN) or the Product Serial Number (PSN). This comes in really handy when taking inventory or when working with Sun Support. sneep can also store useful information, such as a system asset tag or location, in the EEPROM for later retrieval.
NOTE:
The sneep command inquires about the system serial number.
On newer Sun models, it reads the EEPROM to find the serial number. On older models, you load sneep once with the box serial number, and from then on programs and users can retrieve it by running sneep at the command line; sneep also writes the serial number to the EEPROM.
This tool is part of the Support Toolkit Bundle, which you install with install_stb.sh, a file you can download from several Sun support sites. The installer also downloads and installs the Sun Explorer utility and several others, so it is important for getting Sun support.
You should also install the utilities on the Supplemental Software CD. These give you the SUNWvts testing utilities, the RSC admin utilities, and other support utilities that WILL be asked for on a Sun contract support call.
Download the sneep utility from http://www.sun.com/sneep
Install sneep
# uncompress SUNWsneep2.7.tar.Z
# tar -xvf SUNWsneep2.7.tar
# pkgadd -d . SUNWsneep
# pkginfo -l SUNWsneep
PKGINST: SUNWsneep
NAME: Serial Number in EEPROM
CATEGORY: service
ARCH: sparc,i386
VERSION: 2.7
BASEDIR: /opt/SUNWsneep
VENDOR: Sun Microsystems, Inc.
DESC: Persistent, software-accesible storage of Chassis Serial Number (CSN) across OS and application changes. Works on all Sun platforms. Can also store and retrieve arbitrary other values in EEPROM.
PSTAMP: sustain-1920090626002544
INSTDATE: Aug 24 2009 07:09
HOTLINE: Support provided through normal Sun support channels
EMAIL: sneep-support@sun.com
STATUS: completely installed
FILES: 25 installed pathnames
5 directories
3 executables
1370 blocks used (approx)
How to use it?
Display Serial Number
#sneep
XXXXXXXXXX
# sneep -T
"ChassisSerialNumber" "XXXXXXXX"
Store Information in EEPROM
To store information such as the asset tag in the EEPROM, use the "-t" option to set the tag name and the "-s" option to set its value, as below:
# sneep -t "AssetTag" -s 001234
To display all information
# sneep -T
"AssetTag" "001234"
"ChassisSerialNumber" "XXXXXXXX"
Friday, August 21, 2009
SCSI transport failed: reason 'timeout': retrying command
Error captured from dmesg and /var/adm/messages:
#tail /var/adm/messages
Aug 20 01:17:53 XXXXX scsi: [ID 107833 kern.warning] WARNING: /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf6f64ca,0 (ssd0):
Aug 20 01:17:53 XXXXX SCSI transport failed: reason 'timeout': retrying command
Aug 20 01:17:53 XXXXX md_stripe: [ID 641072 kern.warning] WARNING: md: d23: write error on /dev/dsk/c1t0d0s3
Aug 20 01:17:53 XXXXX scsi: [ID 107833 kern.warning] WARNING: /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf6f64ca,0 (ssd0):
Aug 20 01:17:53 XXXXX SCSI transport failed: reason 'timeout': giving up
Aug 20 01:17:53 XXXXX md_stripe: [ID 641072 kern.warning] WARNING: md: d20: write error on /dev/dsk/c1t0d0s0
Aug 20 01:17:53 XXXXX md_mirror: [ID 104909 kern.warning] WARNING: md: d23: /dev/dsk/c1t0d0s3 needs maintenance
Aug 20 01:17:53 XXXXX md_mirror: [ID 104909 kern.warning] WARNING: md: d20: /dev/dsk/c1t0d0s0 needs maintenance
Checks performed:
#iostat -En
c1t0d0 Soft Errors: 13 Hard Errors: 0 Transport Errors: 4 <<< No hard errors.
# metastat | more
d0: Mirror
Submirror 0: d20
State: Needs maintenance <<< The meta device is in “Needs Maintenance” state for d0 & d3 that is / and /var
Submirror 1: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 16779312 blocks (8.0 GB)
d20: Submirror of d0
State: Needs maintenance
Invoke: metareplace d0 c1t0d0s0
Size: 16779312 blocks (8.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Maintenance Yes
d10: Submirror of d0
State: Okay
Size: 16779312 blocks (8.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes
d3: Mirror
Submirror 0: d23
State: Needs maintenance
Submirror 1: d13
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 12584484 blocks (6.0 GB)
d23: Submirror of d3
State: Needs maintenance
Invoke: metareplace d3 c1t0d0s3
Size: 12584484 blocks (6.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s3 0 No Maintenance Yes
# metadb -i
flags first blk block count
a m p luo 16 1034 /dev/dsk/c1t1d0s7
a p luo 1050 1034 /dev/dsk/c1t1d0s7
a p luo 2084 1034 /dev/dsk/c1t1d0s7
a p luo 16 1034 /dev/dsk/c1t0d0s7
a p luo 1050 1034 /dev/dsk/c1t0d0s7
a p luo 2084 1034 /dev/dsk/c1t0d0s7
<<<< The metadb replicas seem to be in good shape >>>>>
Action taken to address the issue:
# format c1t0d0
format> analyze
analyze> read
Ready to analyze (won't harm SunOS). This takes a long time,
but is interruptable with CTRL-C. Continue? yes
pass 0
24619/26/53
pass 1
24619/26/53
Total of 0 defective blocks repaired.
#metasync d0 ; metasync d3
# metareplace -e d0 c1t0d0s0
d0: device c1t0d0s0 is enabled
# metareplace -e d3 c1t0d0s3
d3: device c1t0d0s3 is enabled
Here the "-e" option transitions the state of the component to the available state and resyncs the failed component.
# metastat | grep %
Resync in progress: 72 % done
Resync in progress: 72 % done
After the sync completes, check the metastat output; everything should be fine.
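Rather than re-running metastat by hand, the resync can be polled in a loop. A sketch with the status command parameterized so the logic can be exercised anywhere (on Solaris you would use the default, `metastat`; the fake command in the test is purely illustrative):

```shell
# Poll a status command until no "Resync in progress" lines remain.
wait_for_resync() {
    status_cmd=${1:-metastat}
    interval=${2:-60}
    while eval "$status_cmd" | grep -q 'Resync in progress'; do
        sleep "$interval"
    done
}
```

On a real host, `wait_for_resync` with no arguments polls metastat once a minute and returns when both mirrors finish.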
NOTE:
The error was "SCSI transport failed: reason 'timeout': retrying command", and in my view the root cause is that the host could not send data to or receive data from the drive. This could be the cable or the SCSI controller. I would get the data off and try a new drive before this one fails completely.
Tuesday, August 18, 2009
How to tell if server is global or non-global zone/How to Tell If the Solaris Zone is a Whole Root or Sparse Zone
On a day-to-day basis we often need to know whether the server we are working on is a global zone (GZ) or a non-global zone (NGZ), for example when writing automation that depends on it. The pkgcond command is very helpful in such cases.
# pkgcond is_nonglobal_zone
# echo $?
1
# pkgcond is_global_zone
# echo $?
0
Here 1 is false and 0 is true. Since pkgcond is_nonglobal_zone returned 1, this is a global zone!
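Because pkgcond communicates only through its exit status, scripts can branch on it directly. A sketch with the checker command parameterized so the logic itself can be exercised anywhere (on Solaris you would pass "pkgcond is_global_zone"):

```shell
# Map an exit-status check onto a readable label.
zone_type() {
    if eval "$1"; then
        echo "global"
    else
        echo "non-global"
    fi
}
```

For example, `zone_type "pkgcond is_global_zone"` would print "global" when run in a global zone.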
-----------------------------------------------------------------------------------
# pkgcond
no condition to check specified; usage is:
pkgcond [-nv]
Sunday, August 16, 2009
NIM Cheat Sheet
Here is the cheat sheet for AIX NIM procedures.
Packages required -
bos.sysmgt.nim.master
bos.sysmgt.nim.spot
bos.sysmgt.nim.client (For client)
1. To configure and initialize the NIM master.
#nimconfig -a netname=net_10_77_64 -a pif_name=en0 -a netboot_kernel=mp -a cable_type=tp -a client_reg=no
2. To define LPP source on NIM master
#nim -o define -t lpp_source -a server=master -a location=/export/nim/lpp_source/lpp5307 -a source=/mnt lpp5307
Where -
o - Specifies what operation to perform on a NIM object.
t - Specifies the type of the NIM object for define operations.
a - Assigns the specified value to the specified attribute.
--> To view the Attributes of a newly created lpp_source
#lsnim -l lpp5307
--> To remove lpp_source
#nim -o remove lpp5307
--> To check the lpp_source integrity after any additions or modifications to existing lpp_source
#nim -Fo check lpp5307
Where -
F- Overrides some safety checks.
3. To define SPOT (Shared product object tree) from lpp_source
#nim -o define -t spot -a server=master -a location=/export/nim/spot/ -a source=lpp5307 -a installp_flags=-aQg spot5307
--> To recreate the SPOT definition
#/usr/lpp/bos.sysmgt/nim/methods/m_mkspot -o -a server=master -a location=/export/nim/spot -a source=no spot5307
--> To remove NIM SPOT
#nim -o remove spot5307
--> To check the SPOT
#nim -o check spot5307
4. To define a NIM client
#nim -o define -t standalone -a platform=chrp -a if1="net_10_77_64 lpar2 0 ent0" -a cable_type1=tp -a netboot_kernel=mp LPAR2
--> To view the attributes of the NIM client
#lsnim -l LPAR2
--> To remove the NIM client
#nim -o remove LPAR2
5. To install NIM client
#nim -o allocate -a spot=spot5307 -a lpp_source=lpp5307 LPAR2
6. To initiate the install for the NIM client
#nim -o bos_inst -a source=rte -a installp_flags=agX -a accept_licenses=yes LPAR2
NOTE - If the installation is unsuccessful, you need to re-allocate the resources. However, first you will need to reset and deallocate NIM resources.
#nim -Fo reset LPAR2
#nim -Fo deallocate -a subclass=all LPAR2
NOTE - To view the progress during installation and first boot, you can use the showlog operation to the nim command:
#nim -o showlog -a log_type=boot LPAR2
To unconfigure NIM master
#nim -o unconfig master
Keeping Processor Busy - AIX
There are times that you would like to create some "load" on the system. A very, very easy way of keeping a processor very busy is:
#yes > /dev/null
The "yes" command will continuously echo "yes" to /dev/null. This is a single-threaded process, so it will put load on a single processor. If you wish to put load on multiple processors, why not run yes a couple of times?
Hope it makes sense; if someone has a better way to put load on a system, do let me know...
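Running several copies can be wrapped up in a small function; a sketch (the copy count and duration are illustrative parameters, not anything AIX-specific):

```shell
# Spawn one single-threaded busy loop per requested CPU, wait, then kill them.
load() {
    copies=$1
    seconds=$2
    i=0
    while [ "$i" -lt "$copies" ]; do
        yes > /dev/null &
        i=$((i + 1))
    done
    sleep "$seconds"
    kill $(jobs -p) 2> /dev/null
    wait   # reap the killed background jobs
}
```

For example, `load 4 300` would keep four processors busy for five minutes.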
Tuesday, August 11, 2009
Adding External Disk to Sun Solaris.
As part of system administration we often deal with adding an external disk to a server.
In my current environment we use three types of storage driver utilities: IBM SDD, PowerPath for Clariion, and, for Solaris 10, MPxIO along with IBMsdd.
After the storage team provides a LUN, we need to configure it to make it usable.
Here is the general procedure to configure a LUN/external disk on Solaris.
1. for Clariion -
#devfsadm -v -C -c disk
#/etc/powermt config
#format
format command should show the newly added disk.
2. for IBM -
#devfsadm -v -C -c disk
#/opt/IBMsdd/bin/cfgvpath -r
#/opt/IBMsdd/bin/vpathmkdev
#format
format command should show the newly added disk.
3. MPxIO -
MPxIO has auto-configuration capability. The disk should automatically be visible when you run format, mpathadm list lu, inq (EMC's inquiry utility), etc.
To make the disk usable, first label it and then partition the appropriate disk slice(s).
Now you are set to create a new file system!
Friday, August 7, 2009
Renaming zpool.
A few days back I learned how to rename a zpool; one of my UNIX gurus, Alex, educated me on this.
Alex, a million thanks for your valuable guidance.
All right, so here I am creating a zpool named “oracle-db-zp00”
# zpool create oracle-db-zp00 c8t60050768018A8023B800000000000013d0
# zpool status -v
pool: oracle-db-zp00
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
oracle-db-zp00 ONLINE 0 0 0
c8t60050768018A8023B800000000000013d0 ONLINE 0 0 0
errors: No known data errors
Here I messed up the zpool name; per our standards it should be "Oracle-Apps-zp00". To fix this mistake, I first exported the pool:
# zpool export oracle-db-zp00
What does zpool export do?
It exports a pool from the system so it can be imported on another system. At this step, "zpool status -v" should no longer show any output for the "oracle-db-zp00" pool.
Now I import it with the correct name:
# zpool import oracle-db-zp00 Oracle-Apps-zp00
# zpool status -v
pool: Oracle-Apps-zp00
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
Oracle-Apps-zp00 ONLINE 0 0 0
c8t60050768018A8023B800000000000013d0 ONLINE 0 0 0
errors: No known data errors
Cool... Isn’t it easy?
EFI (Extensible Firmware Interface) disk label & SMI (Sun Microsystems Inc.) or VTOC disk label.
I came across a problem with reusing ZFS disks for UFS. I couldn't
format the disk properly, but eventually got the answer!
The EFI disk label provides support for physical disks and virtual disk volumes. This release also includes updated disk utilities for managing disks greater than 1 terabyte. The UFS file system is compatible with the EFI disk label, and you can create a UFS file system greater than 1 terabyte.
There is also an unbundled Sun StorEdge QFS Shared File System available if you need to create file systems greater than 1 terabyte. (I have never worked on this FS.)
NOTE: Some important facts of EFI -
1. The size of the EFI label is usually 34 sectors, so partitions start at sector 34. This means no partition can start at sector zero (0).
2. No cylinder, head, or sector information is stored in the label. Sizes are reported in blocks.
3. Information that was stored in the alternate cylinders area, the last two cylinders of the disk, is now stored in slice 8.
4. The EFI disk label is not supported on IDE disks.
All right, now we have at least got some idea of what the EFI disk label is and what its advantages and disadvantages are. Now let us see how we can convert an EFI disk label to an SMI disk label.
So basically you see a disk layout like the one below -
#format -e c8t60050768018A8023B800000000000013d0
partition> p
Current partition table (original):
Total disk sectors available: 8821342 + 16384 (reserved sectors)
Part Tag Flag First Sector Size Last Sector
0 usr wm 256 4.21GB 8821342
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 8821343 8.00MB 8837726
Look at this - there is something called slice 8, which may be new to a few of you...
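You can sanity-check the sizes `format` reports from the sector numbers in the table above: an EFI slice spanning sectors first..last inclusive holds (last - first + 1) sectors of 512 bytes each. A quick sketch using the slice 0 and slice 8 figures from this very disk:

```python
SECTOR = 512  # bytes per sector, as reported by diskinfo/format

def slice_size(first, last):
    """Size in bytes of a slice spanning sectors first..last inclusive."""
    return (last - first + 1) * SECTOR

# Slice 0 (usr): sectors 256..8821342 -> format reports 4.21GB
print(round(slice_size(256, 8821342) / 2**30, 2))   # 4.21

# Slice 8 (reserved): sectors 8821343..8837726 -> format reports 8.00MB
print(slice_size(8821343, 8837726) / 2**20)         # 8.0
```

The 16384 reserved sectors match slice 8 exactly: 16384 x 512 bytes = 8 MB.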
So here we are converting EFI to SMI so that we can use it for UFS.
partition> l
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.
Continue? yes
Auto configuration via format.dat[no]?
Auto configuration via generic SCSI-2[no]?
partition> p
Current partition table (default):
Total disk cylinders available: 4313 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 127 128.00MB (128/0/0) 262144
1 swap wu 128 - 255 128.00MB (128/0/0) 262144
2 backup wu 0 - 4312 4.21GB (4313/0/0) 8833024
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 256 - 4312 3.96GB (4057/0/0) 8308736
7 unassigned wm 0 0 (0/0/0) 0
See... now I can see a normal SMI label disk layout.
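Notice that the SMI table is cylinder-addressed rather than sector-addressed. From the table you can infer the geometry: slice 0 covers 128 cylinders and 262144 blocks, so this disk has 2048 blocks per cylinder. That lets you verify the other rows, as a quick sketch:

```python
BLOCK = 512            # bytes per block
BLOCKS_PER_CYL = 2048  # inferred: 262144 blocks / 128 cylinders (slice 0)

def cyl_span_blocks(first_cyl, last_cyl):
    """Blocks in a slice spanning cylinders first..last inclusive."""
    return (last_cyl - first_cyl + 1) * BLOCKS_PER_CYL

# Slice 2 (backup): cylinders 0-4312 -> 8833024 blocks, 4.21GB
print(cyl_span_blocks(0, 4312))                            # 8833024
print(round(cyl_span_blocks(0, 4312) * BLOCK / 2**30, 2))  # 4.21

# Slice 6 (usr): cylinders 256-4312 -> 8308736 blocks, 3.96GB
print(cyl_span_blocks(256, 4312))                          # 8308736
```

The block counts match the `(cyl/head/sect)` column in the table above.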
This is the process to convert EFI label to SMI label. Hope it will help!
Tuesday, August 4, 2009
To get the Network Card speed on AIX -
There are several ways to get this info.
# netstat -v | grep Speed
Media Speed Selected: Auto negotiation
Media Speed Running: 1000 Mbps Full Duplex
Media Speed Selected: Auto negotiation
Media Speed Running: 1000 Mbps Full Duplex
# entstat -d en0
#for en in `netstat -i | grep en | awk '{print $1}' | sort -u | cut -c3-`
>do
> adapter="ent${en}"
> entstat -d ${adapter} | grep "Media Speed"
>done
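If you want to do more than just grep for the line, the "Media Speed Running" output is easy to parse into structured values. A small sketch, with the sample output hard-coded here for illustration (on a real AIX box you would feed it the captured `entstat -d entN` output instead):

```python
import re

# Sample entstat -d output (hard-coded for illustration)
SAMPLE = """\
Media Speed Selected: Auto negotiation
Media Speed Running: 1000 Mbps Full Duplex
"""

def media_speed(text):
    """Return (speed_mbps, duplex) parsed from entstat output, or None."""
    m = re.search(r"Media Speed Running:\s*(\d+)\s*Mbps\s*(\w+) Duplex", text)
    if not m:
        return None
    return int(m.group(1)), m.group(2)

print(media_speed(SAMPLE))   # (1000, 'Full')
```

This makes it simple to flag adapters negotiated down to 100 Mbps or Half duplex across a fleet.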
To Change the Network Card speed on AIX -
#chdev -l en0 -a state=detach --> Detach the interface
#chdev -l ent0 -a media_speed=1000_Full_Duplex --> Make appropriate changes
#chdev -l en0 -a state=up --> Bring the interface back up
[NOTE: Don't do ifconfig enX up - this will put an IP address of 0.0.0.0]
#mkdev -l inet0 --> to activate all routes
Hope this will be helpful.