More and Vi

To edit a file you are viewing with "more",
press the "v" key and you will be dropped
into a vi session.

Quitting the vi session brings you back
to viewing the file with the "more" command.

This does not work when the file is piped into "more",
since there is no file name to hand to the editor.

This works:

$ more /etc/hosts

This does not:

$ cat /etc/hosts | more
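On most implementations, "v" launches whatever editor is named in the VISUAL or EDITOR environment variable, with vi as the default, so the editor need not be vi (check your more(1) man page to confirm):

$ EDITOR=vim more /etc/hosts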

CLEANUP DOS FILES

If you work with DOS text files, a "^M" (carriage
return) often appears at the end of every line.
Here are two ways to get rid of it.

If you are editing the DOS text file with the "vi"
editor in UNIX, use the following from the
"vi" command line:

:%s/^V^M//g

From a Unix shell use the command:

% sed 's/^V^M//g' pwlist.txt > pwlist.txt_new

NOTE:  ^V is Ctrl-V and ^M is Ctrl-M (the Enter key); pressing Ctrl-V first makes the next control character be inserted literally.
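A third option, if your system has "tr", deletes the carriage returns without the control-key gymnastics (pwlist.txt is the same example file as above):

% tr -d '\r' < pwlist.txt > pwlist.txt_new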

BUILD MOUNT COMMANDS WITH AWK

If you have a server with a lot of file systems on it, this little "awk" one-liner can come in very useful. It can obviously be adapted to many other situations as well.

In this example, the disk group name is "homedg".

bash-3.00$ grep homedg /etc/vfstab
/dev/vx/dsk/homedg/sanlogs      /dev/vx/rdsk/homedg/sanlogs     /var/log/sanlogs vxfs    1       yes     -
/dev/vx/dsk/homedg/common      /dev/vx/rdsk/homedg/common     /usr/local/common vxfs    2       yes     -
bash-3.00$

bash-3.00$ awk '/homedg/{print "mount -F "$4,$1,$3}' /etc/vfstab
mount -F vxfs /dev/vx/dsk/homedg/sanlogs /var/log/sanlogs
mount -F vxfs /dev/vx/dsk/homedg/common /usr/local/common
bash-3.00$


If you pipe the output to 'sh', the generated commands are executed, as shown below. You can replace 'mount' with whatever command you like.
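For example, building on the same vfstab lines shown above:

bash-3.00$ awk '/homedg/{print "mount -F "$4,$1,$3}' /etc/vfstab | sh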

FORGET THE CRONTAB MAN

For some reason many admins forget the field order of the crontab file and reach for the man page over and over.
Make your life easy: put the field definitions at the top of your crontab and comment the lines out (#) so cron ignores them.

#minute (0-59),
#|      hour (0-23),
#|      |      day of the month (1-31),
#|      |      |      month of the year (1-12),
#|      |      |      |      day of the week (0-6 with 0=Sunday).
#|      |      |      |      |      commands
  0    2     *      *      0,4    /etc/WorkSmart.sh
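Read against the header, the sample line runs /etc/WorkSmart.sh at 2:00 AM every Sunday and Thursday. To put the template into your own crontab, edit it in place:

$ crontab -e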


REGULAR EXPRESSION MATCHING IN AWK


If you ever find yourself typing:
# command | grep pattern | awk '{print $3}'

you can shorten it by using awk's built-in regexp matching:
# command | awk '/pattern/{print $3}'
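For example, to pull the PID column for sshd from "ps -ef" output (the PID is field 2 with SysV-style ps; note that, just like grep, the awk may match its own process in the listing):

# ps -ef | awk '/sshd/{print $2}'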

Check the VMware Tools time synchronization configuration


Command to check status:
# vmware-toolbox-cmd timesync status
Disabled
#
If it is enabled, disable it with:
# vmware-toolbox-cmd timesync disable
Disabled
#
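To turn it back on later, the matching subcommand is:

# vmware-toolbox-cmd timesync enable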

Check for read-only file systems on Linux

To list file systems that are currently mounted read-only:

$ cat /proc/mounts | grep ro,
/dev/root / ext3 ro,data=ordered 0 0
/dev/VolGroup00/LogVol06 /tmp ext3 ro,data=ordered 0 0
/dev/VolGroup00/LogVol02 /usr ext3 ro,data=ordered 0 0
/dev/VolGroup00/LogVol03 /usr/local ext3 ro,data=ordered 0 0
/dev/VolGroup00/LogVol07 /home ext3 ro,data=ordered 0 0
/dev/VolGroup00/LogVol05 /opt ext3 ro,data=ordered 0 0
/dev/sapwebvg/usrsapP35lv /usr/sap/P35 ext3 ro,data=ordered 0 0
$

On a healthy system, the only matches should be file systems that are read-only by design, such as loop-mounted ISO images:

$ cat /proc/mounts | grep ro,
/dev/loop1 /RHEL6.6_64 iso9660 ro,relatime 0 0
/dev/loop2 /RHEL7.1_64 iso9660 ro,relatime 0 0

$
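Note that "grep ro," can also match substrings such as "errors=remount-ro,". A slightly stricter check (a sketch, assuming the standard /proc/mounts field order of device, mount point, type, options) matches "ro" only as a whole mount option:

$ awk '$4 ~ /(^|,)ro(,|$)/ {print $1, $2}' /proc/mounts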

Unable to get a virtual console on VMware at runlevel 3 (Red Hat Linux)

Issue:
After rebooting the Red Hat Linux server, the console hangs with the message below.
INIT: no more processes left in this runlevel

Solution:
Make sure the mingetty lines are un-commented in the "/etc/inittab" file.

$ cat /etc/inittab | grep -v ^#

id:3:initdefault:
si::sysinit:/etc/rc.d/rc.sysinit
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"
pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"

1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

x:5:respawn:/etc/X11/prefdm -nodaemon

$
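After un-commenting the mingetty lines, SysV init can be told to re-read /etc/inittab without a reboot:

# telinit q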

Solaris 8 Container Build

1.      Identify the OS of the server that will be used inside the zone (container). Create a flar image and keep it on an NFS mount that is accessible from the global zone.
#flarcreate -n mathiyam-flar-image -c -S -X /var/tmp/flar-exclude-list.out /lhotse2/mathiyam/mathiyam-flar-image.flar

An example exclude list (exclude /var/tmp, /tmp, /home, any NFS mounts, and all other non-root file systems):
/tmp
/var/tmp
/form
/Application
...
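Before carrying the archive to the target, its contents can be sanity-checked with the standard Solaris flash tool:

# flar info /lhotse2/mathiyam/mathiyam-flar-image.flar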

2.      Install the Solaris Legacy Containers package that matches the OS version, from
paris:/export/install/SOL_10_0811_SPARC/solarislegacycontainers/1.0.1/Product, in the global zone. Note that this legacy package is required for branded zones only.

SUNWs9brandk ->    Solaris 9 Containers: solaris9 brand support RTU
SUNWs8brandk ->    Solaris 8 Containers: solaris8 brand support RTU

Log in to the global zone and install the package.
#pkgadd -d . SUNWs8brandk

Once the legacy brand package installs successfully, these packages will be listed on the system:

root@roja-global # pkginfo |grep SUNWs8brand
system      SUNWs8brandk         Solaris 8 Containers: solaris8 brand support RTU
system      SUNWs8brandr         Solaris 8 Containers: solaris8 brand support (Root)
system      SUNWs8brandu         Solaris 8 Containers: solaris8 brand support (Usr)
root@roja-global #

The Root and Usr packages above are installed in the global zone by default with Solaris 10.

3.      Create a disk group "zonedg" using any unused free disk, and create a file system at least twice the size of the flar image; if more space is available, add it for the zone installation. Note that the zone's root (/) and /var will come out of this file system only. A sketch of the VxVM commands follows the example output below.

/dev/vx/dsk/zonedg/f4tuna_s8vol
                     104857600 15672919 83612809    16%    /s8_f4tuna_zone_volume
/dev/vx/dsk/zonedg/lpdb9_s8vol
                     104857600 12616796 86478102    13%    /s8_lpdb9_zone_volume
root@pagubali-global
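A minimal VxVM sketch for this step (the disk name c1t1d0 and the 100g size are assumptions; substitute your own hardware):

# vxdg init zonedg zonedg01=c1t1d0                <- create the disk group
# vxassist -g zonedg make lpdb9_s8vol 100g        <- create the volume
# mkfs -F vxfs /dev/vx/rdsk/zonedg/lpdb9_s8vol    <- lay down a VxFS file system
# mkdir /s8_lpdb9_zone_volume
# mount -F vxfs /dev/vx/dsk/zonedg/lpdb9_s8vol /s8_lpdb9_zone_volume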

4.      Under the file system above, create a directory for the zone path, named after the zone; it will be used at zone creation time.
Ex. #mkdir /s8_lpdb9_zone_volume/maari ; chmod 700 /s8_lpdb9_zone_volume/maari

5.      Create the zone now

root@pagubali-global # zonecfg -z maari
maari: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:maari> create -b
zonecfg:maari> set zonepath=/s8_lpdb9_zone_volume/maari
zonecfg:maari> set brand=solaris8
zonecfg:maari> set autoboot=false
zonecfg:maari> set bootargs="-m verbose"
zonecfg:maari> set limitpriv=default,dtrace_user,graphics_access,graphics_map,net_rawaccess,proc_priocntl,proc_lock_memory,sys_ipc_config
zonecfg:maari> set scheduling-class=""                   <- empty means the default Time Sharing (TS) scheduler
zonecfg:maari> set ip-type=shared                        <- the NIC is shared between the two containers
zonecfg:maari> add net
zonecfg:maari:net>
zonecfg:maari:net> set physical=nxge4                   <- the physical NIC that will be shared
zonecfg:maari:net> set address=10.41.138.41
zonecfg:maari:net> end
zonecfg:maari> add capped-memory
zonecfg:maari:capped-memory> set physical=44G           <- allocate 44 GB of memory to maari
zonecfg:maari:capped-memory> end
zonecfg:maari> add rctl
zonecfg:maari:rctl> set name=zone.max-swap
zonecfg:maari:rctl> add value (priv=privileged,limit=4294967296,action=deny)
zonecfg:maari:rctl> end
zonecfg:maari> add rctl
zonecfg:maari:rctl> set name=zone.max-locked-memory
zonecfg:maari:rctl> add value (priv=privileged,limit=4294967296,action=deny)
zonecfg:maari:rctl> end
zonecfg:maari> verify
zonecfg:maari> exit

Verify that the maari container has been created and its status shows "configured".

root@pagubali-global # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   4 pagubali          running    /s8_f4tuna_zone_volume/pagubali solaris8 shared
   - maari           configured /s8_lpdb9_zone_volume/maari solaris8 shared

6.      Install the flar image now into the designated zone path location.

root@pagubali-global # zoneadm -z maari install -p -a /lhotse2/lpdb9/lpdb9-flar-image.flar
      Log File: /var/tmp/maari.install.853.log
        Source: /lhotse2/lpdb9/lpdb9-flar-image.flar
    Installing: This may take several minutes...
Postprocessing: This may take several minutes...

        Result: Installation completed successfully.
      Log File: /s8_lpdb9_zone_volume/maari/root/var/log/maari.install.853.log
root@pagubali-global #

After the flar image is installed, verify that the status shows "installed".

root@pagubali-global # zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   4 pagubali          running    /s8_f4tuna_zone_volume/pagubali solaris8 shared
   - maari           installed  /s8_lpdb9_zone_volume/maari solaris8 shared
root@pagubali-global #

7.      Create a processor set (pset) and a pool to dedicate CPUs to an individual container. This example creates the pset and pool for the pagubali container.

# pooladm -s                               <- save/create the default pools configuration file (/etc/pooladm.conf)
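If the pools facility has never been enabled on this host, switch it on once first:

# pooladm -e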

root@pagubali-global # poolcfg -c 'create pset pset_pagubali (uint pset.min=32; uint pset.max=32)'
root@pagubali-global # poolcfg -c 'create pool pool_pagubali'
root@pagubali-global # poolcfg -c 'associate pool pool_pagubali (pset pset_pagubali)'
root@pagubali-global # pooladm -c          <- commit the configuration

8.      Add the pool name to the zone configuration, as shown in this example for pagubali.

root@pagubali-global # zonecfg -z pagubali
zonecfg:pagubali> set pool=pool_pagubali
zonecfg:pagubali> verify
zonecfg:pagubali> commit
zonecfg:pagubali> exit
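For a zone that is already running, the pool can also be applied immediately with poolbind instead of waiting for a zone reboot:

# poolbind -p pool_pagubali -i zoneid pagubali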

9.      Boot the zone now
#zoneadm -z maari boot               <- from the global zone
#zoneadm list -cv                    <- verify the zone status shows "running"

#zlogin maari                        <- log in to the container from the global zone
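If the zone's networking is not up yet, the zone console is also reachable from the global zone:

#zlogin -C maari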