Steps to grow a filesystem by an additional 30 GB in an SVM soft partition (without data loss)


1. Attach the new 30 GB disk to the d370 concat/stripe metadevice.
2. Grow the d371 soft partition by 30 GB.
3. Grow the UFS filesystem on d371 with growfs.
4. Check the result.
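The transcript below uses the device names from this particular system; as a quick reference, the whole procedure can be sketched as a dry run (the leading echo prints each command instead of executing it; drop it on a real system):

```shell
# Dry-run sketch of the steps above, using the example-specific names from
# this walkthrough: d370 concat/stripe, d371 soft partition, /stag mount point.
CONCAT=d370
SOFTPART=d371
NEWDISK=c2t5006016041E0F15Bd2s2   # new 30 GB slice
MNT=/stag

echo metattach $CONCAT $NEWDISK              # 1. grow the underlying concat
echo metattach $SOFTPART 30g                 # 2. grow the soft partition by 30 GB
echo growfs -M $MNT /dev/md/rdsk/$SOFTPART   # 3. grow the mounted UFS filesystem
echo df -h $MNT                              # 4. check the result
```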


root:/ > metastat d371
d371: Soft Partition
Device: d370
State: Okay
Size: 71303168 blocks (34 GB)
Extent Start Block Block count
0 1952 71303168

d370: Concat/Stripe
Size: 72888960 blocks (34 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower32a 0 No Okay No
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower33g 1920 No Okay No
Stripe 2:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower34a 1920 No Okay No

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/emcpower32a No -
/dev/dsk/emcpower33g No -
/dev/dsk/emcpower34a No -


root:/ > metastat d370
d370: Concat/Stripe
Size: 72888960 blocks (34 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower32a 0 No Okay No
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower33g 1920 No Okay No
Stripe 2:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower34a 1920 No Okay No

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/emcpower32a No -
/dev/dsk/emcpower33g No -
/dev/dsk/emcpower34a No -



root:/ > metattach d370 c2t5006016041E0F15Bd2s2
d370: component is attached

root:/ > metastat d370
d370: Concat/Stripe
Size: 135799680 blocks (64 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower32a 0 No Okay No
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower33g 1920 No Okay No
Stripe 2:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower34a 1920 No Okay No
Stripe 3:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c2t5006016041E0F15Bd2s2 1280 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/emcpower32a No -
/dev/dsk/emcpower33g No -
/dev/dsk/emcpower34a No -
/dev/dsk/c2t5006016041E0F15Bd2 Yes id1,ssd@n60060160f8122300a2c36a1c88dae011


root:/ > df -h /stag
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d371 33G 17G 16G 51% /stag

root:/ > metattach d371 30g
d371: Soft Partition has been grown
root:/ > metastat d371
d371: Soft Partition
Device: d370
State: Okay
Size: 134217728 blocks (64 GB)
Extent Start Block Block count
0 1952 134217728

d370: Concat/Stripe
Size: 135799680 blocks (64 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower32a 0 No Okay No
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower33g 1920 No Okay No
Stripe 2:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/emcpower34a 1920 No Okay No
Stripe 3:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c2t5006016041E0F15Bd2s2 1280 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
/dev/dsk/emcpower32a No -
/dev/dsk/emcpower33g No -
/dev/dsk/emcpower34a No -
/dev/dsk/c2t5006016041E0F15Bd2 Yes id1,ssd@n60060160f8122300a2c36a1c88dae011
root:/ >


root:/ > growfs -M /stag /dev/md/rdsk/d371
/dev/md/rdsk/d371: Unable to find Media type. Proceeding with system determined parameters.
Warning: 4096 sector(s) in last cylinder unallocated
/dev/md/rdsk/d371: 134217728 sectors in 21846 cylinders of 48 tracks, 128 sectors
65536.0MB in 1366 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...........................
super-block backups for last 10 cylinder groups at:
133301792, 133400224, 133498656, 133597088, 133695520, 133793952, 133892384,
133990816, 134089248, 134187680

root:/ > df -k /stag
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d371 66092522 17594518 48146890 27% /stag


root:/ > df -h /stag
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d371 63G 17G 46G 27% /stag
root:/ >
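As a sanity check on the figures above: metastat reports sizes in 512-byte disk blocks (2097152 blocks per GiB), so the grown soft partition is exactly 64 GiB of raw space; df -h shows 63G because UFS keeps some of it for filesystem metadata:

```shell
# Sizes reported by metastat are in 512-byte disk blocks; 2097152 blocks = 1 GiB.
old_blocks=71303168     # d371 before the grow (34 GB)
new_blocks=134217728    # d371 after the grow (64 GB)

echo "before: $(( old_blocks / 2097152 )) GiB"             # 34 GiB
echo "after:  $(( new_blocks / 2097152 )) GiB"             # 64 GiB
echo "added:  $(( (new_blocks - old_blocks) / 2097152 )) GiB"   # 30 GiB
```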

Collecting a live core dump in Solaris

Introduction:

“Live core” is a crash dump taken of a running Solaris system, without rebooting or modifying the system in any way. Two commands are used to collect the dump: dumpadm and savecore.


dumpadm & savecore:

The dumpadm program is an administrative command that manages the configuration of the operating system crash dump facility. A crash dump is a disk copy of the physical memory of the computer at the time of a fatal system error. When a fatal operating system error occurs, a message describing the error is printed to the console. The operating system then generates a crash dump by writing the contents of physical memory to a predetermined dump device, which is typically a local disk partition. The dump device can be configured by way of dumpadm. Once the crash dump has been written to the dump device, the system will reboot.

Fatal operating system errors can be caused by bugs in the operating system, its associated device drivers and loadable modules, or by faulty hardware. Whatever the cause, the crash dump itself provides invaluable information to your support engineer to aid in diagnosing the problem. As such, it is vital that the crash dump be retrieved and given to your support provider.

Following an operating system crash, the savecore utility is executed automatically during boot to retrieve the crash dump from the dump device, and write it to a pair of files in your file system named unix.X and vmcore.X, where X is an integer identifying the dump. Together, these data files form the saved crash dump. The directory in which the crash dump is saved on reboot can also be configured using dumpadm.


How to check the physical memory size:

root@vaigai # prtconf | grep Memory
Memory size: 1024 Megabytes
root@vaigai #


Identify the current crash dump configuration:

root@vaigai # dumpadm
Dump content: kernel pages
Dump device: /dev/dsk/c0d0s1 (swap)  Current dump device is swap
Savecore directory: /var/crash/vaigai
Savecore enabled: yes
root@vaigai #
Configuring the core dump device (its size should ideally match the physical
memory size):

root@vaigai # mkfile -n 1g /livecore/lcfile
root@vaigai # cd livecore/
root@vaigai # ls -l
total 48
-rw------T 1 root root 1073741824 Jan 27 05:01 lcfile
root@vaigai #
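Matching the dump file size to physical memory can be scripted. This sketch parses the prtconf line shown earlier; it is fed a sample string here, whereas on a live system you would pipe prtconf itself:

```shell
# Parse the memory size out of prtconf output and build the mkfile command.
# Sample line used here; on a live system: prtconf | grep 'Memory size'
prtconf_line="Memory size: 1024 Megabytes"
mem_mb=$(echo "$prtconf_line" | awk '{print $3}')
echo "mkfile -n ${mem_mb}m /livecore/lcfile"
```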

Reconfiguring the dump device to a dedicated dump device (using a file):

root@vaigai # dumpadm -d /livecore/lcfile
Dump content: kernel pages
Dump device: /livecore/lcfile (dedicated)
Savecore directory: /var/crash/vaigai
Savecore enabled: yes
root@vaigai #
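dumpadm persists these settings in /etc/dumpadm.conf, so they survive a reboot. The sample below shows the file's key/value format (written to /tmp here rather than touching the real file):

```shell
# Sample /etc/dumpadm.conf contents matching the configuration above.
# Written under /tmp so the real file stays untouched.
cat > /tmp/dumpadm.conf.sample <<'EOF'
DUMPADM_DEVICE=/livecore/lcfile
DUMPADM_SAVDIR=/var/crash/vaigai
DUMPADM_CONTENT=kernel
DUMPADM_ENABLE=yes
EOF
grep '^DUMPADM_DEVICE=' /tmp/dumpadm.conf.sample
```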

Generate live core using savecore:

root@vaigai # savecore -L
dumping to /livecore/lcfile, offset 65536, content: kernel
100% done: 27895 pages dumped, compression ratio 3.11, dump succeeded
System dump time: Wed Jan 27 05:04:19 2010
Constructing namelist /var/crash/vaigai/unix.0
Constructing corefile /var/crash/vaigai/vmcore.0
100% done: 27895 of 27895 pages saved
root@vaigai #

The core dump generated two files, unix.0 and vmcore.0:

root@vaigai # ls -l /var/crash/vaigai/
total 231058
-rw-r--r-- 1 root root 2 Jan 27 05:04 bounds
-rw-r--r-- 1 root root 1396825 Jan 27 05:04 unix.0
-rw-r--r-- 1 root root 116826112 Jan 27 05:04 vmcore.0
root@vaigai #
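The small bounds file in the listing is how savecore numbers successive dumps: it holds the integer X that the next unix.X/vmcore.X pair will use. A quick illustration with a simulated savecore directory:

```shell
# Simulate a savecore directory: after dump 0 has been saved,
# bounds contains the number for the next dump.
mkdir -p /tmp/crashdemo
printf '1\n' > /tmp/crashdemo/bounds
next=$(cat /tmp/crashdemo/bounds)
echo "next dump will be saved as unix.${next} and vmcore.${next}"
```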


Generating a live core using savecore in a different directory:

root@vaigai # mkdir /nividya
root@vaigai # savecore -L /nividya
dumping to /livecore/lcfile, offset 65536, content: kernel
100% done: 29126 pages dumped, compression ratio 2.94, dump succeeded
System dump time: Wed Jan 27 05:08:18 2010
Constructing namelist /nividya/unix.0
Constructing corefile /nividya/vmcore.0
100% done: 29126 of 29126 pages saved
root@vaigai #

root@vaigai # ls -l /nividya
total 240930
-rw-r--r-- 1 root root 2 Jan 27 05:08 bounds
-rw-r--r-- 1 root root 1396825 Jan 27 05:08 unix.0
-rw-r--r-- 1 root root 121872384 Jan 27 05:08 vmcore.0
root@vaigai #


Using dumpadm to set the dump device back to swap:

root@vaigai # dumpadm -d swap
Dump content: kernel pages
Dump device: /dev/dsk/c0d0s1 (swap)
Savecore directory: /var/crash/vaigai
Savecore enabled: yes
root@vaigai #

Reconfiguring the dump device to a dedicated dump device (using a partition):

root@vaigai # dumpadm -d /dev/dsk/c2t1d0s0
Dump content: kernel pages
Dump device: /dev/dsk/c2t1d0s0 (dedicated)
Savecore directory: /var/crash/vaigai
Savecore enabled: yes
root@vaigai #

Modify the dump configuration so that savecore runs automatically on reboot.
This is the default setting.

root@vaigai # dumpadm -y
Dump content: kernel pages
Dump device: /dev/dsk/c2t1d0s0 (dedicated)
Savecore directory: /var/crash/vaigai
Savecore enabled: yes
root@vaigai #

Modify the dump configuration so that savecore does not run automatically on
reboot. This is not the recommended configuration.

root@vaigai # dumpadm -n
Dump content: kernel pages
Dump device: /dev/dsk/c2t1d0s0 (dedicated)
Savecore directory: /var/crash/vaigai
Savecore enabled: no
root@vaigai #