An example of how to use it:
cat adressbook.csv | csv2fritzbox.bash -n 3 -h 9 -o 8 -m 12 -t -d , > fritzboxphonebook
The file fritzboxphonebook can then be imported using the web admin interface of the fritzbox.
The usage of the script is:
Usage: csv2fritzbox.bash -n namecol -h homenumbercol -m mobilenumbercol -o officenumbercol -t -d delimiter -l localcountrypredial -L localareapredial
namecol: The number of the column where the name of the contact is stored
homenumbercol: The number of the column where the home phone number of the contact is stored
mobilenumbercol: The number of the column where the mobile phone number of the contact is stored
officenumbercol: The number of the column where the office phone number of the contact is stored
delimiter: the delimiter character used to separate columns
localcountrypredial: The local country predial code, for example 0041 in Switzerland, which is the default
localareapredial: The predial code used to substitute the local country predial, for example 0 in Switzerland, which is the default
-t: Indicates that there is a title (header) line at the top of the file
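For illustration, here is a made-up two-line address book matching the example command above (name in column 3, office number in column 8, home number in column 9, mobile number in column 12, comma as delimiter, and a title line, hence -t). The column layout, field names and phone numbers are assumptions for this sketch only:
c1,c2,Name,c4,c5,c6,c7,Office,Home,c10,c11,Mobile
x,x,John Doe,x,x,x,x,0041441234501,0041441234502,x,x,0041791234503
Saving these two lines as adressbook.csv and running the command above would produce one phonebook entry for John Doe with an office, a home and a mobile number.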





Starting with Veritas Cluster Server 5.0 MP3 there is an official tool "vxfenswap" to change the coordinator disks online. Before that there was no such tool and no official statement on how to change the coordinator disks while applications stay online, although there is a simple procedure which normally works without any problems.
The steps are:
1) Check the cluster state (LLT, GAB)
root@node1 # lltstat -n
LLT node information:
Node State Links
0 node1 OPEN 3
1 node2 OPEN 3
2 node3 OPEN 3
root@node1 # gabconfig -a
GAB Port Memberships
===============================================================
Port a gen f3b50f membership 012
Port b gen f3b51a membership 012
Port h gen f3b51a membership 012
2) Freeze all service groups and systems persistently.
root@node1 # haconf -makerw
root@node1 # hagrp -list | awk '{print $1}' | sort -u | while read g ; do hagrp -freeze $g -persistent ; done
VCS WARNING V-16-1-50894 Command (hagrp -freeze -persistent ClusterService ) failed. The Group ClusterService cannot be frozen
root@node1 # hasys -list | while read s ; do hasys -freeze -persistent $s ; done
root@node1 # haconf -dump -makero
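Before stopping the cluster it is worth double checking that everything really is frozen. A minimal sketch using the standard Frozen attribute; each persistently frozen group and system should report 1, while the ClusterService group cannot be frozen (see the warning above) and stays at 0:
root@node1 # hagrp -list | awk '{print $1}' | sort -u | while read g ; do echo -n "$g: " ; hagrp -value $g Frozen ; done
root@node1 # hasys -list | while read s ; do echo -n "$s: " ; hasys -value $s Frozen ; done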
3) Stop the cluster monitoring
root@node1 # hastop -all -force
4) Stop the Fencing driver on all cluster nodes
root@nodeXXX # /etc/init.d/vxfen stop
Stopping VxFEN:
5) Stop the GAB driver on all cluster nodes
root@nodeXXX # /etc/init.d/gab stop
Stopping GAB:
6) Stop the LLT driver on all cluster nodes
root@nodeXXX # /etc/init.d/llt stop
Stopping LLT:
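Steps 4 to 6 have to be executed on every cluster node. One way to do this in a single pass from node1, assuming password-less root ssh to all nodes (node names taken from the lltstat output in step 1), is:
root@node1 # for n in node1 node2 node3 ; do ssh $n '/etc/init.d/vxfen stop ; /etc/init.d/gab stop ; /etc/init.d/llt stop' ; done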
7) Import the coordinator diskgroup on one node
root@node1 # vxdg -ftC import `cat /etc/vxfendg`
8) Set the coordinator flag off
root@node1 # vxdg -g `cat /etc/vxfendg` set coordinator=off
9) Remove unwanted coordinator disks
root@node1 # vxdg -g `cat /etc/vxfendg` rmdisk <unwanteddiskname>
where <unwanteddiskname> is the disk name from the output of the command "vxprint -g `cat /etc/vxfendg`"
10) Add new coordinator disks
root@node1 # vxdctl enable
root@node1 # vxdisksetup -i <newdevicename>
root@node1 # vxdg -g `cat /etc/vxfendg` adddisk <newdiskname>=<newdevicename>
where <newdevicename> is the "DEVICE" column from the output of the command "vxdisk list" and <newdiskname> is a name you choose for the disk in the diskgroup.
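To double check the result, list the disks of the coordinator diskgroup again; with classic I/O fencing there should be an odd number of coordinator disks, usually three. A quick sanity check using the same commands as elsewhere in this procedure:
root@node1 # vxprint -g `cat /etc/vxfendg`
root@node1 # vxdisk -o alldgs list | grep `cat /etc/vxfendg`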
11) Rescan the partitions of the new coordinator disk on all systems
As the partition table is changed when a new disk is initialized by Volume Manager and the other nodes do not know about it, we have to rescan the partition table on the other cluster nodes:
First get the diskid for the new coordinator disk on node1:
root@node1 # vxdisk list <newdevicename> |grep '^disk:'
disk: name= id=1327668260.266.node1
Then create the script rescan_partitions.sh on all other nodes with the following content:
#!/bin/bash
# Rescan the partition table of the new coordinator disk.
# $1 is the Volume Manager disk id of the new disk (taken from node1).
vxdctl enable
vxdisk -o alldgs list | grep `cat /etc/vxfendg` | while read disk rest ; do
    # only act on the disk whose id matches the one given as parameter
    if vxdisk list $disk | grep $1 >/dev/null ; then
        if [ "`uname`" = "Linux" ] ; then
            # on Linux force the kernel to reread the partition table
            vxdisk list $disk | grep state=enabled | while read dev re ; do
                grep $dev /proc/partitions
                blockdev --rereadpt /dev/$dev
            done
        fi
        # remove the device from the Volume Manager view and rediscover it
        vxdisk rm $disk
        vxdctl enable
    fi
done
Then run it with the disk id as parameter on the other nodes:
root@nodexxx # ./rescan_partitions.sh 1327668260.266.node1
12) Set the coordinator flag on
root@node1 # vxdg -g `cat /etc/vxfendg` set coordinator=on
13) Deport the coordinator diskgroup
root@node1 # vxdg -g `cat /etc/vxfendg` deport
14) Start LLT on all nodes:
root@nodex # /etc/init.d/llt start
Starting LLT:
LLT: loading module...
WARNING: No modules found for 2.6.9-55.ELsmp, using compatible modules for 2.6.9-34.ELsmp.
LLT: configuring module...
15) Start GAB on all nodes:
root@nodex # /etc/init.d/gab start
Starting GAB:
WARNING: No modules found for 2.6.9-55.ELsmp, using compatible modules for 2.6.9-34.ELsmp.
16) Start Fencing Driver on all nodes
root@nodex # /etc/init.d/vxfen start
Starting VxFEN:
WARNING: No modules found for 2.6.9-55.ELsmp, using compatible modules for 2.6.9-34.ELsmp.
Starting vxfen.. Done
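At this point it can be double checked that fencing really came up with the new coordinator disks. A possible check using vxfenadm, which is part of the fencing package (the exact output varies by version):
root@nodex # vxfenadm -d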
17) Start the cluster monitoring on all nodes
root@nodex # hastart
18) Unfreeze all service groups and systems persistently.
Wait for the cluster startup to complete. Check it with the command below; there should be no output:
root@node1 # hastatus -summ|grep '^D'
D BWP Proxy LAN-PBWP node1
D MIP Proxy LAN-PMIP node2
Here some resources were still not probed. Just wait another minute and recheck...
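Instead of rechecking by hand, a small wait loop can do the polling; this is just a sketch built around the same hastatus check as above:
root@node1 # while hastatus -summ | grep '^D' ; do sleep 10 ; done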
As soon as all resources are probed you can safely unfreeze the service groups and systems:
root@node1 # haconf -makerw
root@node1 # hagrp -list | awk '{print $1}' | sort -u | while read g ; do hagrp -unfreeze $g -persistent ; done
VCS WARNING V-16-1-40202 Group is not persistently frozen
root@node1 # hasys -list | while read s ; do hasys -unfreeze -persistent $s ; done
root@node1 # haconf -dump -makero





The system call nanosleep uses the kernel function cv_timedwait_sig to wait. Starting with Solaris 10 update 8, cv_timedwait_sig uses cv_timedwait_sig_hires, which in turn uses gethrtime() to schedule the wake-up time. As this high resolution timer can jump on Solaris 10 running on top of a VMware ESX 3.5, VMware vSphere 4.x or VirtualBox virtual machine with more than 1 CPU assigned, it can happen, if the sleep starts exactly when such a jump occurs, that the computed end time lies so far in the future that the system call seems to hang forever. As soon as one uses truss, pstack, jstack, ... to analyze the live process, the nanosleep call wakes up on a signal and continues.
To see if the high resolution timer is making jumps you can use the following small program gethrtime_test.c to test:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(int argc, char** argv)
{
    hrtime_t start, end;
    hrtime_t delta = 10000000;      /* report jumps larger than 10 ms */
    long iters = 1000000000;        /* number of gethrtime() calls */
    if (argc > 1) iters = atol(argv[1]);
    if (argc > 2) delta = atol(argv[2]);
    printf("%ld iterations\n", iters);
    start = gethrtime();
    long i;
    for (i = 0; i < iters; i++)
    {
        end = gethrtime();
        /* a jump backwards or a jump forward larger than delta is suspicious */
        if (start > end || start + delta < end)
            printf("%ld:\n start %lld\n end %lld\n diff %lld\n",
                   i, start, end, (end - start));
        start = end;
    }
    exit(0);
}
Create a file called gethrtime_test.c with this content and compile it:
root@hostname# gcc -o gethrtime_test gethrtime_test.c
Then let it run.
root@hostname# ./gethrtime_test
1000000000 iterations
You should not see any messages like:
5750624:
start 415551665195
end 415150888326
diff -400776869
6387810:
start 416494513021
end 416509397658
diff 14884637
In particular, negative jumps should never happen.
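The number of iterations and the jump tolerance in nanoseconds can be passed as arguments; for example, a shorter run that only reports jumps larger than 50 ms:
root@hostname# ./gethrtime_test 100000000 50000000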
If you see negative jumps, you can set the following parameters in a VMware virtual machine's vmx file and reboot to avoid this:
VMware ESX 3.5
monitor_control.disable_tsc_offsetting=TRUE
monitor_control.disable_rdtscopt_bt=TRUE
VMware vSphere 4.x
timeTracker.forceMonotonicTTAT=TRUE
On VirtualBox I do not know any solution yet.





Oracle Database corruption after crashing a VMware Instance running Solaris on a UFS or ZFS Filesystem
When running an Oracle Database on a UFS or ZFS filesystem on Solaris 10 x86 in a VMware virtual machine, the default values of all the stacks involved (Oracle Database, Solaris, VMware) can cause a database corruption each time the virtual machine is powered off without cleanly shutting down the operating system. This happens if the power off button in Virtual Center is used to forcibly power off the virtual machine or if there is a basic hardware failure causing the physical node to crash. This is true for Solaris 10 up to u10, VMware ESX 3.5 through vSphere 4.1, UFS and ZFS, and Oracle 10 through Oracle 11.
To avoid this there are two solutions:
1) Use the forcedirectio option in /etc/vfstab for all filesystems where database files (redo log members, data files, control files) are stored:
/dev/md/dsk/d20 /dev/md/rdsk/d20 /u01 ufs 3 yes forcedirectio
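The new mount option only takes effect after the filesystem is remounted, so the database has to be shut down once. A sketch, assuming the /u01 filesystem from the vfstab line above, with a check of the active mount options afterwards:
root@hostname# umount /u01
root@hostname# mount /u01
root@hostname# mount -v | grep /u01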
2) Use the Oracle init parameter filesystemio_options to force direct I/O on all database instances:
oracle@hostname> sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 13 13:32:08 2012
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> alter system set filesystemio_options=setall scope=spfile;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 267825152 bytes
Fixed Size 1335924 bytes
Variable Size 130026892 bytes
Database Buffers 130023424 bytes
Redo Buffers 6438912 bytes
Database mounted.
Database opened.
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
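After the restart the setting can be verified from sqlplus; SETALL is the value configured with the alter system command above (output format may vary slightly by version):
SQL> show parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      SETALL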




