<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:ref="http://purl.org/rss/1.0/modules/reference/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/">
	<channel rdf:about="https://www.doerzbach.com/rdf.php/rss.rdf">
		<title>Doerzbach Engineering GmbH</title>
		<link>https://www.doerzbach.com/rdf.php/index.php</link>
		<description><![CDATA[Doerzbach Engineering GmbH, Himmelrichstrasse 14, 6340 Baar, Switzerland]]></description>
		<items>
			<rdf:Seq>
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry200329-194818" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry180713-051522" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry161006-151159" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry151205-190527" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry150722-203255" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry140411-201542" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry131027-074618" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry120408-120235" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry120205-112643" />
				<rdf:li resource="https://www.doerzbach.com/rdf.php/index.php?entry=entry120113-124240" />
			</rdf:Seq>
		</items>
	</channel>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry200329-194818">
		<title>COVID-19 simulation</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry200329-194818</link>
		<description><![CDATA[<h1>COVID-19 simulation</h1><br /><br />As I had not yet found an open source COVID-19 simulation, I tried to write one myself.<br />The result can be found on GitHub: <a href="https://github.com/adoerzbach/covid19simulation" >https://github.com/adoerzbach/covid19simulation</a>. I know that no simulation is really able to predict the future of the pandemic exactly, but with a simulation you can at least try to predict the outcome of some scenarios in a scientific manner. I checked the model against the data available from the epidemic outbreak in Wuhan; it is quite close to the data I was able to find on the internet. For Switzerland I was not able to get well-validated data for some parameters. The published data is missing key figures: How many infected people are there? What are the age structure and health condition of the people who were hospitalized, died, or only had mild symptoms?<br />If this data were available, a much better prediction of the outcome would be possible. But I tried to get some realistic estimates from the data available on the cases on cruise ships (<a href="https://en.wikipedia.org/wiki/2020_coronavirus_pandemic_on_cruise_ships" >2020 coronavirus pandemic on cruise ships</a>) and some other resources.<br />If you have better data and would like to check the model, feel free.]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry180713-051522">
		<title>Solaris 10 LDAP authentication with a Windows AD server</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry180713-051522</link>
		<description><![CDATA[<h1>Introduction</h1><br />User management on Solaris using a directory server is well documented in the Solaris manuals. <br />How to do SSO using a Windows AD server is also documented on various sites.<br />What is less obvious and not documented is how to just authenticate (check passwords) using LDAP. That is what this post documents.<br /><br /><h1>Setup</h1><br /><br /><h1>Bind User in Windows AD Server</h1><br /><br />First you need to create a bind user on the Windows AD server that has the rights to look up all user names which should be able to authenticate.<br /><br />Set up a user:<br /><pre><br />cn=unixbind,ou=USER system,ou=yyy,dc=xxx,dc=com<br /></pre><br />This user only needs read-only rights on the user objects. It is used to look up the user's cn via the sAMAccountName field in order to do the bind.<br /><br /><h1>Import the CA-Cert into the certdb</h1><br /><br />If you are going to use ldaps you need the certificate of the certification authority (CA). This can be the root CA of your own organisation or a CA from an official authority like Verisign, whichever was used to sign your certificate.<br /><br />To find the authority cert use:<br /><br /><pre><br />Armins-Air:Desktop ado$ openssl x509 -noout -text -in google.crt <br />Certificate:<br />    Data:<br />        Version: 3 (0x2)<br />        Serial Number: 8057627504494046370 (0x6fd2718631c8eca2)<br />    Signature Algorithm: sha256WithRSAEncryption<br />        Issuer: C=US, O=Google Trust Services, CN=Google Internet Authority G3<br />        ...<br />        ...<br /></pre><br />Here it would be &quot;Google Internet Authority G3&quot;.<br /><br />As this is an official trust authority you can find the cert in your web browser and export it. <br /><br />For Firefox: Preferences -&gt; Privacy &amp; Security -&gt; Certificates -&gt; View Certificates -&gt; Authorities.<br /><br />The cert found for Google Internet Authority G3 was:<br /><pre><br />Armins-Air:Desktop ado$ openssl x509 -noout -in GoogleInternetAuthorityG3.crt -text<br />Certificate:<br />    Data:<br />        Version: 3 (0x2)<br />        Serial Number:<br />            01:e3:a9:30:1c:fc:72:06:38:3f:9a:53:1d<br />    Signature Algorithm: sha256WithRSAEncryption<br />        Issuer: OU=GlobalSign Root CA - R2, O=GlobalSign, CN=GlobalSign<br /></pre><br />This cert was issued by GlobalSign Root CA - R2. So we get this cert from the browser too, and then we check the trust chain using openssl.<br /><br />We put the two certs in one file:<br /><pre><br />Armins-Air:Desktop ado$ cat GoogleInternetAuthorityG3.crt GlobalSignRootCA-R2.crt &gt; ca.crt<br /></pre><br />and then check the google cert:<br /><pre><br />Armins-Air:Desktop ado$ openssl verify -CAfile ca.crt google.crt <br />google.crt: OK<br /></pre><br /><br />Then create an empty certdb:<br /><pre><br />/usr/sfw/bin/certutil -N -d /var/ldap<br /></pre><br />Then import the CA cert into the db:<br /><pre><br />/usr/sfw/bin/certutil -A -d /var/ldap -i ca.crt -n xxxx.com -t &#039;C,C,C&#039;<br /></pre><br />The binary /usr/sfw/bin/certutil is installed by the package SUNWtlsu.<br /><br />
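At this point it can be worth verifying that the bind user can actually search the directory over LDAPS before touching the PAM configuration. A minimal check could look like the sketch below; it assumes the OpenLDAP command line tools are available somewhere (the ldapsearch bundled with Solaris uses different options) and uses a hypothetical account &quot;jdoe&quot;:<br /><pre><br /># bind as the proxy user and look up one account by sAMAccountName (jdoe is a placeholder)<br />ldapsearch -H ldaps://10.10.1.1 \<br />  -x -D &quot;cn=unixbind,ou=USER system,ou=yyy,dc=xxx,dc=com&quot; -W \<br />  -b &quot;dc=xxx,dc=com&quot; &quot;(sAMAccountName=jdoe)&quot; cn<br /></pre><br />If this returns the cn of the account, the bind DN, password and certificate setup are working.<br /><br />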
<h1>Change the pam.conf</h1><br /><br />You can use ldap authentication for all logins. This is done using this &quot;other&quot; section:<br /><pre><br />other   auth requisite        pam_authtok_get.so.1<br />other   auth required         pam_dhkeys.so.1<br />other   auth required         pam_unix_cred.so.1<br />other   auth sufficient       pam_ldap.so.1<br />other   auth binding          pam_unix_auth.so.1 server_policy<br /></pre><br />Or you can use it only for ssh:<br /><pre><br />sshd-kbdint      auth requisite          pam_authtok_get.so.1<br />sshd-kbdint      auth required           pam_dhkeys.so.1<br />sshd-kbdint      auth required           pam_unix_cred.so.1<br />sshd-kbdint      auth binding            pam_ldap.so.1<br />sshd-password    auth requisite          pam_authtok_get.so.1<br />sshd-password    auth required           pam_dhkeys.so.1<br />sshd-password    auth required           pam_unix_cred.so.1<br />sshd-password    auth binding            pam_ldap.so.1<br /></pre><br />Then no ldap authentication is used on the console.<br /><br /><h1>Setup ldap client</h1><br /><pre><br />ldapclient manual \<br />-a domainName=xxx.com \<br />-a credentialLevel=proxy \<br />-a authenticationMethod=tls:simple \<br />-a defaultSearchBase=dc=xxx,dc=com \<br />-a &quot;proxyDN=cn=unixbind,ou=USER system,ou=LI,dc=xxx,dc=com&quot; \<br />-a proxyPassword=wassimmer \<br />-a objectClassMap=passwd:posixaccount=user \<br />-a attributeMap=passwd:uid=sAMAccountName \<br />-a serviceSearchDescriptor=passwd:dc=xxx,dc=com?sub \<br />-a followReferrals=false \<br />-a preferredServerList=&quot;10.10.1.1&quot;<br /></pre><br />This will create two files in /var/ldap:<br /><br />/var/ldap/ldap_client_cred: the bind user with its encrypted password<br />/var/ldap/ldap_client_file: the configuration of the ldap service<br /><br /><br /><h1>Start ldap Service</h1><br />Now we just have to enable the ldap service:<br /><br /><pre><br />svcadm enable ldap/client<br /></pre><br />Check the ldap service now:<br /><pre><br />xxx@yyy&gt; /usr/lib/ldap/ldap_cachemgr -g<br /><br />cachemgr configuration:<br />server debug level          0<br />server log file &quot;/var/ldap/cachemgr.log&quot;<br />number of calls to ldapcachemgr         63<br /><br />cachemgr cache data statistics:<br />Configuration refresh information: <br />  Previous refresh time: 2018/07/06 19:46:58<br />  Next refresh time:     2018/07/12 19:47:11<br />Server information: <br />  Previous refresh time: 2018/07/12 07:47:11<br />  Next refresh time:     2018/07/12 10:57:02<br />  server: xx.yy.zz.66, xyz, status: UP<br />Cache data information: <br />  Maximum cache entries:          256<br />  Number of cache entries:          0<br /></pre><br />]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry161006-151159">
		<title>Clone Pluggable Database using zfs clones</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry161006-151159</link>
		<description><![CDATA[Fast cloning using zfs snapshots and clones is very simple for pluggable databases. <br /><br />In this example a pluggable database PDB1 is cloned to a pluggable database PDB4. All database files are located in /u01/PDB1, which is a zfs filesystem.<br /><br />First you need to create an XML description of the pluggable database:<br /><pre><br />SQL&gt; alter pluggable database pdb1 close;<br />SQL&gt; alter pluggable database pdb1 open read only;<br />SQL&gt; alter session set container=PDB1;<br />SQL&gt; exec dbms_pdb.describe(pdb_descr_file=&gt;&#039;/u00/pdb1.xml&#039;);<br /></pre><br /><br />Then you have to create the snapshot and clone filesystems:<br /><pre><br />oracle@devx02e:CDB1:/u00# /usr/sbin/zfs snapshot devx02e_data/u01/PDB1@PDB4<br />oracle@devx02e:CDB1:/u00# /usr/sbin/zfs clone devx02e_data/u01/PDB1@PDB4 devx02e_data/u01/PDB4<br /></pre><br /><br />Then create the database using the following command:<br /><br /><pre><br />SQL&gt; create pluggable database pdb4 AS CLONE using &#039;/u00/pdb1.xml&#039; source_file_name_convert=(&#039;/u01/PDB1&#039;,&#039;/u01/PDB4&#039;) nocopy tempfile reuse; <br /></pre><br /><br />Then you just need to open the pluggable databases:<br /><pre><br />SQL&gt; alter pluggable database pdb1 open force;<br />SQL&gt; alter pluggable database pdb4 open;<br /></pre>]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry151205-190527">
		<title>ZPOOL hangs during rollback of a zfs snapshot</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry151205-190527</link>
		<description><![CDATA[Starting with Oracle Kernel Patch 150401-09 we experienced hangs of the whole zpool when we did a rollback of a snapshot. <br />Up to now (2015/12/05) there is no fix available for this problem. The last tests with Kernel Patch 150401-28 were not successful.<br />The hangs occur when we have to roll back a snapshot of a cloned filesystem.<br />Here is the setup of our filesystems:<br /><pre><br />testsystem_data                                              11.5T  2.22T    31K  legacy<br />testsystem_data/u01                              9.65T  2.22T    46K  /zones/testsystem/root/u01<br />testsystem_data/u01/DB1                      8.77T  2.22T  2.19T  /zones/testsystem/root/u01/DB1<br />testsystem_data/u01/DB1@DB5              2.18T      -  2.18T  -<br />testsystem_data/u01/DB1@DB8              2.19T      -  2.19T  -<br />testsystem_data/u01/DB1@DB4              1.72G      -  2.19T  -<br />testsystem_data/u01/DB1@DB7               223M      -  2.19T  -<br />testsystem_data/u01/DB1@DB6               216M      -  2.19T  -<br />testsystem_data/u01/DB1@DB2               242M      -  2.19T  -<br />testsystem_data/u01/DB1@DB3               268M      -  2.19T  -<br />testsystem_data/u01/DB1@DB9              1.72G      -  2.19T  -<br />testsystem_data/u01/DB2                      95.3G  2.22T  2.26T  /zones/testsystem/root/u01/DB2<br />testsystem_data/u01/DB2@db2_after_clone  8.81G      -  2.19T  -<br />testsystem_data/u01/DB3                      95.5G  2.22T  2.26T  /zones/testsystem/root/u01/DB3<br />testsystem_data/u01/DB3@db3_after_clone  8.87G      -  2.19T  -<br />testsystem_data/u01/DB4                       209G  2.22T  2.28T  /zones/testsystem/root/u01/DB4<br />testsystem_data/u01/DB5                      30.8G  2.22T  2.18T  /zones/testsystem/root/u01/DB5<br />testsystem_data/u01/DB5@db5_after_clone  7.53G      -  2.18T  -<br />testsystem_data/u01/DB6                      91.9G  2.22T  2.26T  /zones/testsystem/root/u01/DB6<br />testsystem_data/u01/DB6@db6_after_clone  8.75G      -  2.19T  -<br />testsystem_data/u01/DB7                       128G  2.22T  2.26T  /zones/testsystem/root/u01/DB7<br />testsystem_data/u01/DB7@db7_after_clone  9.14G      -  2.19T  -<br />testsystem_data/u01/DB8                       163G  2.22T  2.26T  /zones/testsystem/root/u01/DB8<br />testsystem_data/u01/DB8@db8_after_clone  9.27G      -  2.19T  -<br />testsystem_data/u01/DB9                      92.0G  2.22T  2.26T  /zones/testsystem/root/u01/DB9<br />testsystem_data/u01/DB9@db9_after_clone  8.78G      -  2.19T  -<br /></pre><br />When we had to roll back the snapshot testsystem_data/u01/DB7@db7_after_clone we experienced long hang times. Sometimes the whole pool was blocked for several minutes.<br />The filesystem testsystem_data/u01/DB7 is a clone of the snapshot testsystem_data/u01/DB1@DB7.<br /><br />
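This clone relationship can be verified with the zfs &quot;origin&quot; property. A quick check, using the datasets from the listing above (the exact output formatting may differ slightly):<br /><pre><br /># origin shows which snapshot a filesystem was cloned from<br />zfs get origin testsystem_data/u01/DB7<br />NAME                     PROPERTY  VALUE                        SOURCE<br />testsystem_data/u01/DB7  origin    testsystem_data/u01/DB1@DB7  -<br /></pre><br /><br />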
After one year of testing all possible IDR patches and kernel patches we found a simple workaround:<br /><br />First delete all files in the filesystem which you want to roll back. For example, if you want to roll back testsystem_data/u01/DB7@db7_after_clone, first delete all files in /zones/testsystem/root/u01/DB7 and then run the rollback:<br /><pre><br />rm -r /zones/testsystem/root/u01/DB7<br />zfs rollback testsystem_data/u01/DB7@db7_after_clone<br /></pre><br />The &quot;rm -r&quot; command will take a while, depending on the size of the filesystem (about 40 minutes for 2 TB), but the &quot;zfs rollback&quot; will then only take a few seconds, and the zpool will not hang during the whole procedure.]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry150722-203255">
		<title>Solaris rescan SCSI device on VMware</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry150722-203255</link>
		<description><![CDATA[If you want to add a SCSI device to a VMware virtual machine which is running Solaris, you normally just have to use the following command to see the new device:<br /><pre>devfsadm</pre><br />If you still do not see the disk in Solaris, use the following commands: <br /><pre>root@xxxxx # cfgadm -al<br />Ap_Id                          Type         Receptacle   Occupant     Condition<br />c1                             scsi-bus     connected    configured   unknown<br />c1::dsk/c1t0d0                 disk         connected    configured   unknown<br />c1::dsk/c1t1d0                 disk         connected    configured   unknown<br />c1::dsk/c1t2d0                 disk         connected    configured   unknown<br />c1::dsk/c1t3d0                 disk         connected    configured   unknown<br />pcie160                        etherne/hp   connected    configured   ok<br />..<br />..<br />pcie263                        unknown      empty        unconfigured unknown<br />root@xxxxx # cfgadm -x reset_all c1<br />root@xxxxx # devfsadm<br />root@xxxxx # cfgadm -al<br />Ap_Id                          Type         Receptacle   Occupant     Condition<br />c1                             scsi-bus     connected    configured   unknown<br />c1::dsk/c1t0d0                 disk         connected    configured   unknown<br />c1::dsk/c1t1d0                 disk         connected    configured   unknown<br />c1::dsk/c1t2d0                 disk         connected    configured   unknown<br />c1::dsk/c1t3d0                 disk         connected    configured   unknown<br />c1::dsk/c1t4d0                 disk         connected    configured   unknown<br />pcie160                        etherne/hp   connected    configured   ok<br />...<br /><br /></pre>]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry140411-201542">
		<title>Solaris rescan SAN devices</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry140411-201542</link>
		<description><![CDATA[A rescan for new SAN devices can be done using the command<br /><br /><pre>cfgadm -al</pre><br /><br />If a completely new storage system (e.g. a Hitachi storage system) is added to the SAN, the storage may not be accessible even after using the command above. A relogin of each host bus adapter of the server into the SAN is needed. Either you reboot the system or you use<br /><br /><pre>luxadm -e forcelip /dev/cfg/cX; sleep 10 ; cfgadm -al</pre><br /><br />for each SAN adapter listed in the cfgadm -al output. Wait a few seconds after each host bus adapter.<br /><br />Or just do a relogin on each HBA port in use with the following commands:<br /><pre><br />luxadm -e port | while read port rest ; do <br />  luxadm -e forcelip $port <br />  sleep 30 <br />  cfgadm -al <br />  sleep 30<br />done<br />devfsadm<br /></pre>]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry131027-074618">
		<title>Backup to the disaster site using ZFS Replication</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry131027-074618</link>
		<description><![CDATA[ZFS replication can be used to create a backup at the disaster site over a low bandwidth link. The replication is based on ZFS snapshots: after an initial transfer of all the data, only the blocks changed between two snapshots of the filesystem need to be transferred to the disaster site.<br />It can be used to replicate nearly all application data in a consistent manner, as creating a snapshot is a quick, atomic operation. Therefore you can either shut down the application or just put it into a backup mode for a few seconds. If the data is stored in only one zfs filesystem, the data should even be consistent without stopping the application or putting it into a special mode, provided the application is able to recover automatically after a system crash.<br />A simple solution is the tool <a href="http://www.doerzbach.com/static.php?page=static130330-073527" >ZFS Replication</a> on this site.<br />]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry120408-120235">
		<title>From a CSV-Addresslist to a Fritzbox Phonebook (CSV2Fritzbox)</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry120408-120235</link>
		<description><![CDATA[If you want to convert a CSV address book into a phonebook for importing into your Fritzbox you can use the following <a href="http://www.doerzbach.com/downloads/csv2fritzbox.bash" >script</a>.<br /><br />An example of how to use it:<br /><pre><br />cat adressbook.csv |csv2fritzbox.bash -n 3 -h 9 -o 8 -m 12 -t -d , &gt; fritzboxphonebook<br /></pre><br />The file fritzboxphonebook can then be imported using the web admin interface of the Fritzbox.<br /><br />The usage of the script is:<br /><pre><br />Usage: csv2fritzbox.bash -n namecol -h homenumbercol -m mobilenumbercol -o officenumbercol -t -d delimiter -l localcountrypredial -L localareapredial<br />namecol: The number of the column where the name of the contact is stored<br />homenumbercol: The number of the column where the home phone number of the contact is stored<br />mobilenumbercol: The number of the column where the mobile phone number of the contact is stored<br />officenumbercol: The number of the column where the office phone number of the contact is stored<br />delimiter: The delimiter character used to separate columns<br />localcountrypredial: The local country predial, for example 0041 in Switzerland, which is the default<br />localareapredial: The predial code used to substitute the local country predial, for example 0 in Switzerland, which is the default<br />-t: There is a title line at the top<br /></pre><br />]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry120205-112643">
		<title>Changing Coordinator Disks online in Veritas Cluster Server (VCS) without vxfenswap</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry120205-112643</link>
		<description><![CDATA[Starting with Veritas Cluster Server 5.0 MP3 there is an official tool &quot;vxfenswap&quot; to change the coordinator disks online. Before that, there was no such tool and no official statement on how to change the coordinator disks while the applications stay online, although there is a simple procedure which normally works without any problems.<br /><br />The steps are:<br /><br />1) Check the cluster state (LLT, GAB)<br /><pre><br />root@node1 # lltstat -n<br />LLT node information:<br />    Node                 State    Links<br />     0 node1              OPEN        3<br />     1 node2              OPEN        3<br />     2 node3              OPEN        3<br />root@node1 # gabconfig -a<br />GAB Port Memberships<br />===============================================================<br />Port a gen   f3b50f membership 012<br />Port b gen   f3b51a membership 012<br />Port h gen   f3b51a membership 012<br /></pre><br />2) Freeze all service groups and systems persistently.<br /><pre><br />root@node1 # haconf -makerw<br />root@node1 # hagrp -list |awk &#039;{print $1}&#039; | sort -u | while read g ; do hagrp -freeze $g -persistent ; done<br />VCS WARNING V-16-1-50894 Command (hagrp -freeze -persistent ClusterService ) failed. The Group ClusterService cannot be frozen<br /><br />root@node1 # hasys -list | while read s ; do hasys -freeze -persistent $s ; done<br />root@node1 # haconf -dump -makero<br /></pre><br /><br />3) Stop the cluster monitoring<br /><pre><br />root@node1 # hastop -all -force<br /></pre><br />4) Stop the Fencing driver on all cluster nodes<br /><pre><br />root@nodeXXX # /etc/init.d/vxfen stop<br />Stopping VxFEN:<br /></pre><br />5) Stop the GAB driver on all cluster nodes<br /><pre><br />root@nodeXXX # /etc/init.d/gab stop<br />Stopping GAB:<br /></pre><br />6) Stop the LLT driver on all cluster nodes<br /><pre><br />root@nodeXXX # /etc/init.d/llt stop<br />Stopping LLT:<br /></pre><br />7) Import the coordinator diskgroup on one node<br /><pre><br />root@node1 # vxdg -ftC import `cat /etc/vxfendg`<br /></pre><br />8) Set the coordinator flag off<br /><pre><br />root@node1 # vxdg -g `cat /etc/vxfendg` set coordinator=off<br /></pre><br />9) Remove unwanted coordinator disks<br /><pre><br />root@node1 # vxdg -g `cat /etc/vxfendg` rmdisk &lt;unwanteddiskname&gt;<br /></pre><br />where &lt;unwanteddiskname&gt; is the disk name from the output of the command &quot;vxprint -g `cat /etc/vxfendg`&quot;.<br />10) Add new coordinator disks<br /><br /><pre><br />root@node1 # vxdctl enable<br />root@node1 # vxdisksetup -i &lt;newdevicename&gt;<br />root@node1 # vxdg -g `cat /etc/vxfendg` adddisk &lt;newdiskname&gt;=&lt;newdevicename&gt;<br /></pre><br />where &lt;newdevicename&gt; is the &quot;DEVICE&quot; column from the output of the command &quot;vxdisk list&quot; and &lt;newdiskname&gt; is a name you choose for the disk in the diskgroup.<br />11) Rescan the partitions of the new coordinator disk on all systems<br />As the partition table is changed when a new disk is initialized by Volume Manager and the other nodes do not know about it, we have to rescan the partition table on the other cluster nodes:<br /><br />First get the disk id for the new coordinator disk on node1:<br /><pre><br />root@node1 # vxdisk list &lt;newdevicename&gt; |grep &#039;^disk:&#039;<br />disk:      name= id=1327668260.266.node1<br /></pre><br />Then create the script rescan_partitions.sh on all other nodes with the following content:<br />
<pre><br />#!/bin/bash<br /># usage: ./rescan_partitions.sh &lt;diskid of the new coordinator disk&gt;<br /><br />vxdctl enable<br />vxdisk -o alldgs list|grep `cat /etc/vxfendg`|while read disk rest; do<br />  if vxdisk list $disk|grep $1 &gt;/dev/null ; then<br />    if [ &quot;`uname`&quot; = &quot;Linux&quot; ] ; then<br />      vxdisk list $disk | grep state=enabled |while read dev re ; do<br />        grep $dev /proc/partitions<br />        blockdev --rereadpt /dev/$dev<br />      done<br />    fi<br />    vxdisk rm $disk<br />    vxdctl enable<br />  fi<br />done<br /></pre><br />Then run it with the disk id as parameter on the other nodes:<br /><pre><br />root@nodexxx # ./rescan_partitions.sh 1327668260.266.node1<br /></pre><br />12) Set the coordinator flag on<br /><pre><br />root@node1 # vxdg -g `cat /etc/vxfendg` set coordinator=on <br /></pre><br />13) Deport the coordinator diskgroup<br /><pre><br />root@node1 # vxdg -g `cat /etc/vxfendg` deport<br /></pre><br />14) Start LLT on all nodes:<br /><pre><br />root@nodex # /etc/init.d/llt start<br />Starting LLT: <br />LLT: loading module...<br />WARNING:  No modules found for 2.6.9-55.ELsmp, using compatible modules for 2.6.9-34.ELsmp.<br />LLT: configuring module...<br /></pre><br />15) Start GAB on all nodes:<br /><pre><br />root@nodex # /etc/init.d/gab start<br />Starting GAB: <br />WARNING:  No modules found for 2.6.9-55.ELsmp, using compatible modules for 2.6.9-34.ELsmp.<br /></pre><br /><br />16) Start the Fencing driver on all nodes<br /><pre><br />root@nodex # /etc/init.d/vxfen start<br />Starting VxFEN: <br />WARNING:  No modules found for 2.6.9-55.ELsmp, using compatible modules for 2.6.9-34.ELsmp.<br />Starting vxfen.. Done<br /></pre><br />17) Start the cluster monitoring on all nodes<br /><pre><br />root@nodex # hastart<br /></pre><br />18) Unfreeze all service groups and systems persistently.<br />Wait for the cluster startup to complete. Check it with the command below. There should be no output:<br /><br /><pre><br />root@node1 # hastatus -summ|grep &#039;^D&#039;<br />D  BWP          Proxy                LAN-PBWP           node1                <br />D  MIP          Proxy                LAN-PMIP           node2                <br /></pre><br />Here some resources were still not probed. Just wait another minute and recheck...<br />As soon as all resources are probed you can safely unfreeze the service groups and systems:<br /><pre><br />root@node1 # haconf -makerw<br />root@node1 # hagrp -list |awk &#039;{print $1}&#039; | sort -u | while read g ; do hagrp -unfreeze $g -persistent ; done<br />VCS WARNING V-16-1-40202 Group is not persistently frozen<br />root@node1 # hasys -list | while read s ; do hasys -unfreeze -persistent $s ; done<br />root@node1 # haconf -dump -makero<br /></pre>]]></description>
	</item>
	<item rdf:about="https://www.doerzbach.com/rdf.php/index.php?entry=entry120113-124240">
		<title>VMware and VirtualBox timer causes nanosleep to hang forever on Solaris 10 update 8 and later</title>
		<link>https://www.doerzbach.com/rdf.php/index.php?entry=entry120113-124240</link>
		<description><![CDATA[The system call nanosleep uses the kernel function cv_timedwait_sig to wait. Starting with Solaris 10 update 8, cv_timedwait_sig uses cv_timedwait_sig_hires to wait, which in turn uses gethrtime() to schedule the wake-up time. This high resolution timer can jump on Solaris 10 running on top of a VMware ESX 3.5, VMware vSphere 4.x or VirtualBox virtual machine with more than one CPU assigned. If the sleep starts exactly when such a jump occurs, the computed end time can be so far in the future that the system call appears to hang forever. As soon as one uses truss, pstack, jstack, ... to analyze the live process, the nanosleep call will wake up on a signal and continue.<br />To see if the high resolution timer is making jumps you can use the following small program gethrtime_test.c to test:<br /><pre><br />#include &lt;stdio.h&gt;<br />#include &lt;stdlib.h&gt;<br />#include &lt;sys/time.h&gt;<br /><br />int main(int argc, char** argv)<br />{<br />     hrtime_t start, end;<br />     long iters = 1000000000;<br />     hrtime_t delta;<br />     delta=10000000;<br /> <br />     if (argc &gt; 1) iters = atol(argv[1]);<br />     if (argc &gt; 2) delta = atol(argv[2]);<br />     printf(&quot;%ld iterations\n&quot;, iters);<br /> <br />     start = gethrtime();<br /> <br />     long i;<br />     for (i = 0; i &lt; iters; i++)<br />     {<br />         end = gethrtime();<br /> <br />         if ( start &gt; end || start+delta &lt; end)<br />             printf(&quot;%ld:\n  start %lld\n  end   %lld\n  diff  %lld\n&quot;,<br />                 i, start, end, (end-start));<br /> <br />         start=end;<br />     }<br />     exit(0);<br />}<br /></pre><br />Create the file called gethrtime_test.c and compile it:<br /><pre><br />root@hostname# gcc -o gethrtime_test gethrtime_test.c<br /></pre><br />Then let it run:<br /><pre><br />root@hostname# ./gethrtime_test<br />1000000000 iterations<br /></pre><br />You should not see any messages like<br /><pre><br />5750624:<br />  start 415551665195<br />  end   415150888326<br />  diff  -400776869<br />6387810:<br />  start 416494513021<br />  end   416509397658<br />  diff  14884637<br /></pre><br />Especially negative jumps should never happen.<br /><br />If you see negative jumps, you can set the following parameters in the VMware virtual machine&#039;s vmx file and reboot to avoid this:<br /><br />VMware ESX 3.5<br /><pre><br />monitor_control.disable_tsc_offsetting=TRUE<br />monitor_control.disable_rdtscopt_bt=TRUE<br /></pre><br /><br />VMware vSphere 4.x<br /><pre><br />timeTracker.forceMonotonicTTAT=TRUE<br /></pre><br /><br />On VirtualBox I do not know of any solution yet.]]></description>
	</item>
</rdf:RDF>
