Solaris 10 LDAP authentication with a Windows AD server

Introduction


User management in Solaris with a directory server is well covered in the Solaris manuals.
How to do SSO with a Windows AD server is also documented on various sites.
What is less obvious, and not documented, is how to use LDAP only to authenticate (check passwords). That is what this post describes.

Setup



Bind User in Windows AD Server



First you need to create a bind user on the Windows AD server who has the rights to look up all usernames that should be able to authenticate.

Set up a user:

cn=unixbind,ou=USER system,ou=yyy,dc=xxx,dc=com

This user only needs read-only rights on the user objects. It is used to look up a user's cn via the sAMAccountName attribute in order to do the bind.
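
To verify that the bind user can resolve accounts, you can run a test search. A minimal sketch, using the AD server 10.10.1.1 from the ldapclient example below; the account name jdoe and the bind password are hypothetical placeholders:

# test lookup of a user's cn with the bind user:
ldapsearch -h 10.10.1.1 \
 -D "cn=unixbind,ou=USER system,ou=yyy,dc=xxx,dc=com" -w 'bindpassword' \
 -b "dc=xxx,dc=com" "(sAMAccountName=jdoe)" cn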

Import the CA-Cert into the certdb



If you are going to use ldaps you need the certificate of the certification authority (CA) that signed your server certificate. This can be a root CA of your own organisation or an official authority like VeriSign.

To find the issuing authority, inspect the server certificate:


Armins-Air:Desktop ado$ openssl x509 -noout -text -in google.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8057627504494046370 (0x6fd2718631c8eca2)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Google Trust Services, CN=Google Internet Authority G3
...
...

Here it would be "Google Internet Authority G3".

As this is an official trust authority, you can find the cert in your web browser and export it.

For Firefox: Preferences -> Privacy & Security -> Certificates -> View Certificates -> Authorities.
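
Alternatively, you can let the server present its chain directly. A quick sketch with openssl (assuming outbound access to port 443):

# print all certificates the server sends during the TLS handshake:
openssl s_client -showcerts -connect www.google.com:443 </dev/null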

The cert found for Google Internet Authority G3 was:

Armins-Air:Desktop ado$ openssl x509 -noout -in GoogleInternetAuthorityG3.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            01:e3:a9:30:1c:fc:72:06:38:3f:9a:53:1d
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU=GlobalSign Root CA - R2, O=GlobalSign, CN=GlobalSign


This cert was issued by GlobalSign Root CA - R2. So we export this cert from the browser too and then check the trust chain using openssl.

We put the two certs in one file:

Armins-Air:Desktop ado$ cat GoogleInternetAuthorityG3.crt GlobalSignRootCA-R2.crt > ca.crt

and then check Google's certificate:

Armins-Air:Desktop ado$ openssl verify -CAfile ca.crt google.crt
google.crt: OK



Then create an empty certdb:

/usr/sfw/bin/certutil -N -d /var/ldap

Then import the ca cert into the db:

/usr/sfw/bin/certutil -A -d /var/ldap -i ca.crt -n xxxx.com -t 'C,C,C'

The binary /usr/sfw/bin/certutil is installed by the package SUNWtlsu.
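
To check that the import worked, list the contents of the certdb:

# show the certificates and their trust flags:
/usr/sfw/bin/certutil -L -d /var/ldap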

Change the pam.conf



You can use LDAP authentication for all logins. This is done with the following "other" section in /etc/pam.conf:

other auth requisite pam_authtok_get.so.1
other auth required pam_dhkeys.so.1
other auth required pam_unix_cred.so.1
other auth sufficient pam_ldap.so.1
other auth binding pam_unix_auth.so.1 server_policy

Or you can use it only for ssh:

sshd-kbdint auth requisite pam_authtok_get.so.1
sshd-kbdint auth required pam_dhkeys.so.1
sshd-kbdint auth required pam_unix_cred.so.1
sshd-kbdint auth binding pam_ldap.so.1
sshd-password auth requisite pam_authtok_get.so.1
sshd-password auth required pam_dhkeys.so.1
sshd-password auth required pam_unix_cred.so.1
sshd-password auth binding pam_ldap.so.1

With this setup, no LDAP authentication is used on the console.
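
If authentication does not work, most Solaris PAM modules accept a debug option that logs to syslog. A sketch, assuming you add "debug" to the pam_ldap.so.1 lines above and want the messages in /var/log/authlog:

# on Solaris, /usr/bin/echo expands \t to a tab (syslog.conf requires tabs):
touch /var/log/authlog
echo "auth.debug\t/var/log/authlog" >> /etc/syslog.conf
svcadm refresh system-log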

Setup ldap client



ldapclient manual \
-a domainName=xxx.com \
-a credentialLevel=proxy \
-a authenticationMethod=tls:simple \
-a defaultSearchBase=dc=xxx,dc=com \
-a "proxyDN=cn=unixbind,ou=USER system,ou=LI,dc=xxx,dc=com" \
-a proxyPassword=wassimmer \
-a objectClassMap=passwd:posixaccount=user \
-a attributeMap=passwd:uid=sAMAccountName \
-a serviceSearchDescriptor=passwd:dc=xxx,dc=com?sub \
-a followReferrals=false \
-a preferredServerList="10.10.1.1"


This will create two files in /var/ldap:

/var/ldap/ldap_client_cred: the bind user with its encrypted password
/var/ldap/ldap_client_file: the configuration of the LDAP service
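
You can display the resulting configuration at any time:

# show the current LDAP client configuration:
ldapclient list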


Start ldap Service


Now we just have to enable the ldap service:


svcadm enable ldap/client

Check the LDAP service:

xxx@yyy> /usr/lib/ldap/ldap_cachemgr -g

cachemgr configuration:
server debug level                0
server log file                   "/var/ldap/cachemgr.log"
number of calls to ldapcachemgr   63

cachemgr cache data statistics:
Configuration refresh information:
  Previous refresh time:  2018/07/06 19:46:58
  Next refresh time:      2018/07/12 19:47:11
Server information:
  Previous refresh time:  2018/07/12 07:47:11
  Next refresh time:      2018/07/12 10:57:02
  server: xx.yy.zz.66, xyz, status: UP
Cache data information:
  Maximum cache entries:    256
  Number of cache entries:  0
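
Finally, test that a user can be resolved through the client configuration (jdoe is again a hypothetical account name):

# look up a user via the configured LDAP client:
ldaplist -l passwd jdoe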



Clone Pluggable Database using zfs clones 
Fast cloning using zfs snapshots and clones is very simple for pluggable databases.

In this example a pluggable database PDB1 is cloned to a pluggable database PDB4. All database files are located in /u01/PDB1 which is a zfs filesystem.

First you need to create an XML description of the pluggable database:

SQL> alter pluggable database pdb1 close;
SQL> alter pluggable database pdb1 open read only;
SQL> alter session set container=PDB1;
SQL> exec dbms_pdb.describe(pdb_descr_file=>'/u00/pdb1.xml');


Then you have to create the snapshot and clone filesystems:

oracle@devx02e:CDB1:/u00# /usr/sbin/zfs snapshot devx02e_data/u01/PDB1@PDB4
oracle@devx02e:CDB1:/u00# /usr/sbin/zfs clone devx02e_data/u01/PDB1@PDB4 devx02e_data/u01/PDB4
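
A quick check that the clone exists and points back at the snapshot (dataset names as in this example):

# show the clone together with its origin snapshot:
/usr/sbin/zfs list -o name,origin -r devx02e_data/u01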


Then create the database using the following command:


SQL> create pluggable database pdb4 AS CLONE using '/u00/pdb1.xml' source_file_name_convert=('/u01/PDB1','/u01/PDB4') nocopy tempfile reuse;


Then you just need to open the pluggable databases:

SQL> alter pluggable database pdb1 open force;
SQL> alter pluggable database pdb4 open;
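
To confirm that both PDBs are open, a quick sanity check (a sketch, run as the oracle user):

# list all PDBs and their open mode:
echo 'select name, open_mode from v$pdbs;' | sqlplus -s / as sysdba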


ZPOOL hangs during rollback of a zfs snapshot 
Starting with Oracle Kernel Patch 150401-09 we experienced hangs of the whole zpool when we did a rollback of a snapshot.
Up to now (2015/12/05) there is no fix available for this problem. Last tests with Kernel Patch 150401-28 were not successful.
The hangs occur when we have to roll back a snapshot of a cloned filesystem.
Here is the layout of our filesystems:

NAME USED AVAIL REFER MOUNTPOINT
testsystem_data 11.5T 2.22T 31K legacy
testsystem_data/u01 9.65T 2.22T 46K /zones/testsystem/root/u01
testsystem_data/u01/DB1 8.77T 2.22T 2.19T /zones/testsystem/root/u01/DB1
testsystem_data/u01/DB1@DB5 2.18T - 2.18T -
testsystem_data/u01/DB1@DB8 2.19T - 2.19T -
testsystem_data/u01/DB1@DB4 1.72G - 2.19T -
testsystem_data/u01/DB1@DB7 223M - 2.19T -
testsystem_data/u01/DB1@DB6 216M - 2.19T -
testsystem_data/u01/DB1@DB2 242M - 2.19T -
testsystem_data/u01/DB1@DB3 268M - 2.19T -
testsystem_data/u01/DB1@DB9 1.72G - 2.19T -
testsystem_data/u01/DB2 95.3G 2.22T 2.26T /zones/testsystem/root/u01/DB2
testsystem_data/u01/DB2@db2_after_clone 8.81G - 2.19T -
testsystem_data/u01/DB3 95.5G 2.22T 2.26T /zones/testsystem/root/u01/DB3
testsystem_data/u01/DB3@db3_after_clone 8.87G - 2.19T -
testsystem_data/u01/DB4 209G 2.22T 2.28T /zones/testsystem/root/u01/DB4
testsystem_data/u01/DB5 30.8G 2.22T 2.18T /zones/testsystem/root/u01/DB5
testsystem_data/u01/DB5@db5_after_clone 7.53G - 2.18T -
testsystem_data/u01/DB6 91.9G 2.22T 2.26T /zones/testsystem/root/u01/DB6
testsystem_data/u01/DB6@db6_after_clone 8.75G - 2.19T -
testsystem_data/u01/DB7 128G 2.22T 2.26T /zones/testsystem/root/u01/DB7
testsystem_data/u01/DB7@db7_after_clone 9.14G - 2.19T -
testsystem_data/u01/DB8 163G 2.22T 2.26T /zones/testsystem/root/u01/DB8
testsystem_data/u01/DB8@db8_after_clone 9.27G - 2.19T -
testsystem_data/u01/DB9 92.0G 2.22T 2.26T /zones/testsystem/root/u01/DB9
testsystem_data/u01/DB9@db9_after_clone 8.78G - 2.19T -

When we had to roll back the snapshot testsystem_data/u01/DB7@db7_after_clone we experienced long hangs. Sometimes the whole pool was blocked for several minutes.
The filesystem testsystem_data/u01/DB7 is a clone of the snapshot testsystem_data/u01/DB1@DB7.

After one year of testing all possible IDR Patches and Kernel Patches we found a simple workaround:

First delete all files in the filesystem which you want to roll back. E.g. if you want to roll back testsystem_data/u01/DB7@db7_after_clone, first delete all files in /zones/testsystem/root/u01/DB7 and then run the rollback:

rm -r /zones/testsystem/root/u01/DB7
zfs rollback testsystem_data/u01/DB7@db7_after_clone

The "rm -r" command will take a while, depending on the size of the Filesystem (for 2TB about 40 Minutes), but then the "zfs rollback" will only take a few seconds, and the zpool will never hang during the whole procedure.

Solaris rescan SCSI device on VMware 
If you want to add a SCSI device to a VMware virtual machine running Solaris, you normally just have to use the following command to see the new device:
devfsadm

If you still do not see the disk in Solaris, use the following commands:
root@xxxxx # cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c1 scsi-bus connected configured unknown
c1::dsk/c1t0d0 disk connected configured unknown
c1::dsk/c1t1d0 disk connected configured unknown
c1::dsk/c1t2d0 disk connected configured unknown
c1::dsk/c1t3d0 disk connected configured unknown
pcie160 etherne/hp connected configured ok
..
..
pcie263 unknown empty unconfigured unknown
root@xxxxx # cfgadm -x reset_all c1
root@xxxxx # devfsadm
root@xxxxx # cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c1 scsi-bus connected configured unknown
c1::dsk/c1t0d0 disk connected configured unknown
c1::dsk/c1t1d0 disk connected configured unknown
c1::dsk/c1t2d0 disk connected configured unknown
c1::dsk/c1t3d0 disk connected configured unknown
c1::dsk/c1t4d0 disk connected configured unknown
pcie160 etherne/hp connected configured ok
...
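
To double-check that the new disk is visible, you can list all disks (the piped echo just makes format exit immediately):

# list all disks known to the system and exit:
echo | format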



Solaris rescan SAN devices 
A rescan for new SAN devices can be done using the command

cfgadm -al

If a completely new storage system (e.g. a Hitachi storage system) is added to the SAN, the storage may not be accessible even after using the command above. A relogin of each host bus adapter of the server into the SAN is needed. Either you reboot the system or you use

luxadm -e forcelip /dev/cfg/cX; sleep 10 ; cfgadm -al

for each SAN adapter listed in the cfgadm -al output. Wait a few seconds after each host bus adapter; a concrete loop is sketched below.
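
A sketch that iterates over the FC controllers found by cfgadm (assuming they are listed with type fc-fabric):

# force a LIP on every FC fabric controller, one at a time:
cfgadm -al | awk '$2 == "fc-fabric" {print $1}' | while read c ; do
        luxadm -e forcelip /dev/cfg/$c
        sleep 10
        cfgadm -al
done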

Or just do a relogin on each HBA port in use with the following commands:

luxadm -e port | while read port state ; do
        # only touch ports that are actually connected to the fabric
        [ "$state" = "CONNECTED" ] || continue
        luxadm -e forcelip $port
        sleep 30
        cfgadm -al
        sleep 30
done
devfsadm

