Verify RHP Server, IO Server and MGMTDB status on our Domain Services Cluster
[grid@dsctw21 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is running on node dsctw21
[grid@dsctw21 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node dsctw21
[grid@dsctw21 ~]$ srvctl status ioserver
ASM I/O Server is running on dsctw21
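The three status checks above can also be run in one short loop; a minimal sketch (reusing the same srvctl resource names, run as the grid user):

# check RHP Server, GIMR database and IO Server state in one pass
for res in rhpserver mgmtdb ioserver; do
    echo "--- srvctl status $res ---"
    srvctl status $res
done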
Prepare RHP Server
DNS requirements for the HAVIP IP address:

[grid@dsctw21 ~]$ nslookup rhpserver
Server:         192.168.5.50
Address:        192.168.5.50#53
Name:   rhpserver.example.com
Address: 192.168.5.51

[grid@dsctw21 ~]$ nslookup 192.168.5.51
Server:         192.168.5.50
Address:        192.168.5.50#53
51.5.168.192.in-addr.arpa       name = rhpserver.example.com.

[grid@dsctw21 ~]$ ping nslookup rhpserver
ping: nslookup: Name or service not known
[grid@dsctw21 ~]$ ping rhpserver
PING rhpserver.example.com (192.168.5.51) 56(84) bytes of data.
From dsctw21.example.com (192.168.5.151) icmp_seq=1 Destination Host Unreachable
From dsctw21.example.com (192.168.5.151) icmp_seq=2 Destination Host Unreachable
-> nslookup works - nobody should respond to our ping request as the HAVIP is not active YET

As user root create a HAVIP:

[root@dsctw21 ~]# srvctl add havip -id rhphavip -address rhpserver

***** Cluster Resources: *****
Resource NAME               INST TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ---- ------------ ------------ --------------- -------------
ora.rhphavip.havip          1    OFFLINE      OFFLINE      -               STABLE
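Once the HAVIP has been added it can be brought online and re-checked; a minimal sketch, assuming the rhphavip id created above:

# bring the new HAVIP online and verify it (grid user or root)
srvctl start havip -id rhphavip
srvctl status havip -id rhphavip
srvctl config havip
# the ping that got 'Destination Host Unreachable' above should now be answered
ping -c 2 rhpserver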
Create a Member Cluster Configuration Manifest
[grid@dsctw21 ~]$ crsctl create -h
Usage:
crsctl create policyset -file <filePath>
where
filePath Policy set file to create.
crsctl create member_cluster_configuration <member_cluster_name> -file <cluster_manifest_file>
-member_type <database|application>
        [-version <member_cluster_version>] [-domain_services [asm_storage <local|direct|indirect>][<rhp>]]
 where
      member_cluster_name   name of the new Member Cluster
      -file                 path of the Cluster Manifest File (including the '.xml' extension) to be created
      -member_type          type of member cluster to be created
      -version              5 digit version of GI (example: 12.2.0.2.0) on the new Member Cluster,
                            if different from the Domain Services Cluster
      -domain_services      services to be initially configured for this member cluster (asm_storage with
                            local, direct, or indirect access paths, and rhp) -- note that if the
                            "-domain_services" option is not specified, then only the GIMR and TFA
                            services will be configured
      asm_storage           indicates the storage access path for the database member clusters
                            local : storage is local to the cluster
                            direct or indirect : direct or indirect access to storage provided on the
                            Domain Services Cluster
      rhp                   generate credentials and configuration for an RHP client cluster

Provide access to the DSC DATA disk group - even though we use asm_storage local:

[grid@dsctw21 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'true';
Diskgroup altered.

Create the Member Cluster Configuration File (the command below uses indirect ASM storage):

[grid@dsctw21 ~]$ crsctl create member_cluster_configuration mclu2 -file mclu2.xml -member_type database -domain_services asm_storage indirect
--------------------------------------------------------------------------------
ASM   GIMR  TFA   ACFS  RHP   GNS
================================================================================
YES   YES   NO    NO    NO    YES
================================================================================

If you get ORA-15365 during crsctl create member_cluster_configuration, delete the configuration first:

Error ORA-15365: member cluster 'mclu2' already configured
[grid@dsctw21 ~]$ crsctl delete member_cluster_configuration mclu2

[grid@dsctw21 ~]$ crsctl query member_cluster_configuration mclu2
mclu2     12.2.0.1.0     a6ab259d51ea6f91ffa7984299059208     ASM,GIMR

Copy the Cluster Manifest File to the Member Cluster host where you plan to start the installation:

[grid@dsctw21 ~]$ sum mclu2.xml
54062    22
[grid@dsctw21 ~]$ scp mclu2.xml mclu21:
mclu2.xml                                     100%   25KB  24.7KB/s   00:00
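To see which member clusters are already registered on the DSC, for example before re-running the create step, the same crsctl verb can be queried; a short sketch (the assumption here is that the cluster name argument is optional):

# list all member cluster configurations known to this DSC
crsctl query member_cluster_configuration
# show only the configuration created above
crsctl query member_cluster_configuration mclu2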
Verify DSC SCAN Address from our Member Cluster Hosts
[grid@mclu21 grid]$ ping dsctw-scan.dsctw.dscgrid.example.com
PING dsctw-scan.dsctw.dscgrid.example.com (192.168.5.232) 56(84) bytes of data.
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=1 ttl=64 time=0.570 ms
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=2 ttl=64 time=0.324 ms
64 bytes from 192.168.5.232 (192.168.5.232): icmp_seq=3 ttl=64 time=0.654 ms
^C
--- dsctw-scan.dsctw.dscgrid.example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.324/0.516/0.654/0.140 ms

[root@mclu21 ~]# nslookup dsctw-scan.dsctw.dscgrid.example.com
Server:         192.168.5.50
Address:        192.168.5.50#53
Non-authoritative answer:
Name:   dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.230
Name:   dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.226
Name:   dsctw-scan.dsctw.dscgrid.example.com
Address: 192.168.5.227
Start Member Cluster installation
Unset the ORACLE_BASE environment variable and extract the GRID software:

[grid@dsctw21 grid]$ unset ORACLE_BASE
[grid@dsctw21 ~]$ cd $GRID_HOME
[grid@dsctw21 grid]$ pwd
/u01/app/122/grid
[grid@dsctw21 grid]$ unzip -q /media/sf_kits/Oracle/122/linuxx64_12201_grid_home.zip

[grid@mclu21 grid]$ gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...
-> Configure an Oracle Member Cluster for Oracle Database
-> Member Cluster Manifest File : /home/grid/FILES/mclu2.xml

While parsing the Member Cluster Manifest File the following error pops up:
[INS-30211] An unexpected exception occurred while extracting details from ASM client data
PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
KFOD-00321: failed to read the credential file /home/grid/FILES/mclu2.xml
- At your DSC: Add GNS client Data to Member Cluster Configuration File
[grid@dsctw21 ~]$ srvctl export gns -clientdata mclu2.xml -role CLIENT
[grid@dsctw21 ~]$ scp mclu2.xml mclu21:
mclu2.xml                                     100%   25KB  24.7KB/s   00:00
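A quick, if crude, way to confirm that the export really changed the manifest before copying it again is to compare checksums; a minimal sketch reusing the sum command from above (the values shown earlier are just examples):

# on the DSC: checksum before and after the GNS client data export
sum mclu2.xml          # note the output before the export
srvctl export gns -clientdata mclu2.xml -role CLIENT
sum mclu2.xml          # checksum/size should have changed
scp mclu2.xml mclu21: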
- Restart the Member Cluster Installation – should work NOW !
- Our Windows 7 host is busy and shows high memory consumption
- The GIMR is the most challenging part of the installation
Verify Member Cluster
Verify the Member Cluster resources:

[root@mclu22 ~]# crs
***** Local Resources: *****
Resource NAME             TARGET     STATE      SERVER       STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.LISTENER.lsnr         ONLINE     ONLINE     mclu21       STABLE
ora.LISTENER.lsnr         ONLINE     ONLINE     mclu22       STABLE
ora.net1.network          ONLINE     ONLINE     mclu21       STABLE
ora.net1.network          ONLINE     ONLINE     mclu22       STABLE
ora.ons                   ONLINE     ONLINE     mclu21       STABLE
ora.ons                   ONLINE     ONLINE     mclu22       STABLE
***** Cluster Resources: *****
Resource NAME               INST TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ---- ------------ ------------ --------------- --------------
ora.LISTENER_SCAN1.lsnr     1    ONLINE       ONLINE       mclu22          STABLE
ora.LISTENER_SCAN2.lsnr     1    ONLINE       ONLINE       mclu21          STABLE
ora.LISTENER_SCAN3.lsnr     1    ONLINE       ONLINE       mclu21          STABLE
ora.cvu                     1    ONLINE       ONLINE       mclu21          STABLE
ora.mclu21.vip              1    ONLINE       ONLINE       mclu21          STABLE
ora.mclu22.vip              1    ONLINE       ONLINE       mclu22          STABLE
ora.qosmserver              1    ONLINE       ONLINE       mclu21          STABLE
ora.scan1.vip               1    ONLINE       ONLINE       mclu22          STABLE
ora.scan2.vip               1    ONLINE       ONLINE       mclu21          STABLE
ora.scan3.vip               1    ONLINE       ONLINE       mclu21          STABLE

[root@mclu22 ~]# srvctl config scan
SCAN name: mclu2-scan.mclu2.dscgrid.example.com, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/enp0s8, dhcp
Subnet IPv6:
SCAN 1 IPv4 VIP: -/scan1-vip/192.168.5.202
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: -/scan2-vip/192.168.5.231
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: -/scan3-vip/192.168.5.232
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

[root@mclu22 ~]# nslookup mclu2-scan.mclu2.dscgrid.example.com
Server:         192.168.5.50
Address:        192.168.5.50#53
Non-authoritative answer:
Name:   mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.232
Name:   mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.202
Name:   mclu2-scan.mclu2.dscgrid.example.com
Address: 192.168.5.231

[root@mclu22 ~]# ping mclu2-scan.mclu2.dscgrid.example.com
PING mclu2-scan.mclu2.dscgrid.example.com (192.168.5.202) 56(84) bytes of data.
64 bytes from mclu22.example.com (192.168.5.202): icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from mclu22.example.com (192.168.5.202): icmp_seq=2 ttl=64 time=0.037 ms
^C
--- mclu2-scan.mclu2.dscgrid.example.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.037/0.052/0.067/0.015 ms

[grid@mclu21 ~]$ oclumon manage -get MASTER
Master = mclu21

[grid@mclu21 ~]$ oclumon manage -get reppath
CHM Repository Path = +MGMT/_MGMTDB/50472078CF4019AEE0539705A8C0D652/DATAFILE/sysmgmtdata.292.944846507

[grid@mclu21 ~]$ oclumon dumpnodeview -allnodes
----------------------------------------
Node: mclu21 Clock: '2017-05-24 17.51.50+0200' SerialNo:445
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 1 #vcpus: 1 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 46.68 cpusystem: 5.80 cpuuser: 40.87 cpunice: 0.00
cpuiowait: 0.00 cpusteal: 0.00 cpuq: 1 physmemfree: 1047400 physmemtotal: 7910784 mcache: 4806576 swapfree: 8257532 swaptotal: 8257532 hugepagetotal: 0
hugepagefree: 0 hugepagesize: 2048 ior: 0 iow: 41 ios: 10 swpin: 0 swpout: 0 pgin: 0 pgout: 20 netr: 81.940 netw: 85.211 procs: 248 procsoncpu: 1
#procs_blocked: 0 rtprocs: 7 rtprocsoncpu: N/A #fds: 10400 #sysfdlimit: 6815744 #disks: 5
#nics: 3 loadavg1: 6.92 loadavg5: 7.16 loadavg15: 5.56 nicErrors: 0
TOP CONSUMERS:
topcpu: 'gdb(20156) 31.19' topprivmem: 'gdb(20159) 353188' topshm: 'gdb(20159) 151624' topfd: 'crsd(21898) 274'
topthread: 'crsd(21898) 52'
....
[root@mclu22 ~]# tfactl print status
.-----------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+--------+---------------+------+------+------------+----------------------+--------------------+
| mclu22 | RUNNING | 2437 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE |
| mclu21 | RUNNING | 1209 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE |
'--------+---------------+------+------+------------+----------------------+--------------------'
Verify DSC status after Member Cluster Setup
SQL> @pdb_info.sql
SQL> /*
SQL> To connect to the GIMR database set ORACLE_SID : export ORACLE_SID=-MGMTDB
SQL> */
SQL>
SQL> set linesize 132
SQL> COLUMN NAME FORMAT A18
SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
NAME CON_ID DBID CON_UID GUID
------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT 1 1149111082 1 4700AA69A9553E5FE05387E5E50AC8DA
PDB$SEED 2 949396570 949396570 50458CC0190428B2E0539705A8C047D8
GIMR_DSCREP_10 3 3606966590 3606966590 504599D57F9148C0E0539705A8C0AD8D
GIMR_CLUREP_20 4 2292678490 2292678490 50472078CF4019AEE0539705A8C0D652
--> Management Database hosts a new PDB named GIMR_CLUREP_20
SQL>
SQL> !asmcmd find /DATA/mclu2 *
+DATA/mclu2/OCRFILE/
+DATA/mclu2/OCRFILE/REGISTRY.257.944845929
+DATA/mclu2/VOTINGFILE/
+DATA/mclu2/VOTINGFILE/vfile.258.944845949
SQL> !asmcmd find --type VOTINGFILE / *
+DATA/mclu2/VOTINGFILE/vfile.258.944845949
SQL> !asmcmd find --type OCRFILE / *
+DATA/dsctw/OCRFILE/REGISTRY.255.944835699
+DATA/mclu2/OCRFILE/REGISTRY.257.944845929
SQL> ! crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 6e59072e99f34f66bf750a5c8daf616f (AFD:DATA1) [DATA]
2. ONLINE ef0d610cb44d4f2cbf9d977090b88c2c (AFD:DATA2) [DATA]
3. ONLINE db3f3572250c4f74bf969c7dbaadfd00 (AFD:DATA3) [DATA]
Located 3 voting disk(s).
SQL> ! crsctl get cluster mode status
Cluster is running in "flex" mode
SQL> ! crsctl get cluster class
CRS-41008: Cluster class is 'Domain Services Cluster'
SQL> ! crsctl get cluster name
CRS-6724: Current cluster name is 'dsctw'
Potential Errors during Member Cluster Setup
1. Reading the Member Cluster Configuration File fails with:
   [INS-30211] An unexpected exception occurred while extracting details from ASM client data
   PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
   PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
   KFOD-00319: No ASM instance available for OCI connection
   Fix: Add GNS client data to the Member Cluster Configuration File:
   $ srvctl export gns -clientdata mclu2.xml -role CLIENT
   -> Fix confirmed

2. Reading the Member Cluster Configuration File fails with:
   [INS-30211] An unexpected exception occurred while extracting details from ASM client data
   PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml"
   PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml
   KFOD-00321: failed to read the credential file /home/grid/FILES/mclu2.xml
   -> Double check that the DSC ASM configuration is working.
   This error may be related to running
   [grid@dsctw21 grid]$ /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_dsctw2.rsp
   without setting the passwords in the related rsp file:
   # Password for SYS user of Oracle ASM
   oracle.install.asm.SYSASMPassword=sys
   # Password for ASMSNMP account
   oracle.install.asm.monitorPassword=sys
   Fix: Add the passwords before running the -executeConfigTools step
   -> Fix NOT confirmed

3. Crashes due to limited memory in my VirtualBox env (32 GByte)
   3.1 Crash of the DSC [ VirtualBox host freezes - could not track the VM via top ]
       - A failed Member Cluster setup due to memory shortage can kill your DSC GNS
       Note: This is a very dangerous situation as it kills your DSC env. As said, always back up the OCR and export GNS (see the sketch below)!
   3.2 Crash of any or all Member Clusters [ VirtualBox host freezes - could not track the VMs via top ]
       - The GIMR database setup is partially installed but not working
       - The member cluster itself is working fine
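As point 3.1 above stresses, back up the OCR and export the GNS data on the DSC before experimenting with Member Cluster setups; a minimal sketch (the target file name is an arbitrary example):

# as root on a DSC node: force a manual OCR backup and list the backups
/u01/app/122/grid/bin/ocrconfig -manualbackup
/u01/app/122/grid/bin/ocrconfig -showbackup
# as root: export the GNS instance data to a file
srvctl export gns -instance /root/gns_instance_backup.dat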
Member Cluster Deinstall
On all Member Cluster nodes but NOT the last one:
[root@mclu21 grid]# $GRID_HOME/crs/install/rootcrs.sh -deconfig -force
On the last Member Cluster node:
[root@mclu21 grid]# $GRID_HOME/crs/install/rootcrs.sh -deconfig -force -lastnode
..
2017/05/25 14:37:18 CLSRSC-559: Ensure that the GPnP profile data under the 'gpnp' directory in /u01/app/122/grid is deleted on each node before using the software in the current Grid Infrastructure home for reconfiguration.
2017/05/25 14:37:18 CLSRSC-590: Ensure that the configuration for this Storage Client (mclu2) is deleted by running the command 'crsctl delete member_cluster_configuration <member_cluster_name>' on the Storage Server.

Delete Member Cluster mclu2 - commands running on the DSC:

[grid@dsctw21 ~]$ crsctl delete member_cluster_configuration mclu2
ASMCMD-9477: delete member cluster 'mclu2' failed
KFOD-00327: failed to delete member cluster 'mclu2'
ORA-15366: unable to delete configuration for member cluster 'mclu2' because the directory '+DATA/mclu2/VOTINGFILE' was not empty
ORA-06512: at line 4
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 724
ORA-06512: at line 2

ASMCMD> find mclu2/ *
+DATA/mclu2/VOTINGFILE/
+DATA/mclu2/VOTINGFILE/vfile.258.944845949
ASMCMD> rm +DATA/mclu2/VOTINGFILE/vfile.258.94484594

SQL> @pdb_info
NAME                   CON_ID       DBID    CON_UID GUID
------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT                    1 1149111082          1 4700AA69A9553E5FE05387E5E50AC8DA
PDB$SEED                    2  949396570  949396570 50458CC0190428B2E0539705A8C047D8
GIMR_DSCREP_10              3 3606966590 3606966590 504599D57F9148C0E0539705A8C0AD8D
-> GIMR_CLUREP_20 PDB was deleted !

[grid@dsctw21 ~]$ srvctl config gns -list
dsctw21.CLSFRAMEdsctw SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 40020 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.CLSFRAMEdsctw SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 58466 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
mclu21.CLSFRAMEmclu2 SRV Target: 192.168.2.155.mclu2 Protocol: tcp Port: 14064 Weight: 0 Priority: 0 Flags: 0x101
mclu21.CLSFRAMEmclu2 TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dscgrid.example.com DLV 20682 10 18 ( XoH6wdB6FkuM3qxr/ofncb0kpYVCa+hTubyn5B4PNgJzWF4kmbvPdN2CkEcCRBxt10x/YV8MLXEe0emM26OCAw== ) Unique Flags: 0x314
dscgrid.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAvu/8JsrxQAVTEPjq4+JfqPwewH/dc7Y/QbJfMp9wgIwRQMZyJSBSZSPdlqhw8fSGfNUmWJW8v+mJ4JsPmtFZRsUW4iB7XvO2SwnEuDnk/8W3vN6sooTmH82x8QxkOVjzWfhqJPLkGs9NP4791JEs0wI/HnXBoR4Xv56mzaPhFZ6vM2aJGWG0N/1i67cMOKIDpw90JV4HZKcaWeMsr57tOWqEec5+dhIKf07DJlCqa4UU/oSHH865DBzpqqEhfbGaUAiUeeJVVYVJrWFPhSttbxsdPdCcR9ulBLuR6PhekMj75wxiC8KUgAL7PUJjxkvyk3ugv5K73qkbPesNZf6pEQIDAQAB ) Unique Flags: 0x314
dscgrid.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw-scan.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan.dsctw A 192.168.5.238 Unique Flags: 0x81
dsctw-scan1-vip.dsctw A 192.168.5.238 Unique Flags: 0x81
dsctw-scan2-vip.dsctw A 192.168.5.235 Unique Flags: 0x81
dsctw-scan3-vip.dsctw A 192.168.5.226 Unique Flags: 0x81
dsctw21-vip.dsctw A 192.168.5.225 Unique Flags: 0x81
dsctw22-vip.dsctw A 192.168.5.241 Unique Flags: 0x81
dsctw-scan1-vip A 192.168.5.238 Unique Flags: 0x81
dsctw-scan2-vip A 192.168.5.235 Unique Flags: 0x81
dsctw-scan3-vip A 192.168.5.226 Unique Flags: 0x81
dsctw21-vip A 192.168.5.225 Unique Flags: 0x81
dsctw22-vip A 192.168.5.241 Unique Flags: 0x81
dsctw21.gipcdhaname SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 41795 Weight: 0 Priority: 0 Flags: 0x101
dsctw21.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
dsctw22.gipcdhaname SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 61595 Weight: 0 Priority: 0 Flags: 0x101
dsctw22.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
mclu21.gipcdhaname SRV Target: 192.168.2.155.mclu2 Protocol: tcp Port: 31416 Weight: 0 Priority: 0 Flags: 0x101
mclu21.gipcdhaname TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101
gpnpd h:dsctw21 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 SRV Target: dsctw21.dsctw Protocol: tcp Port: 21099 Weight: 0 Priority: 0 Flags: 0x101
gpnpd h:dsctw21 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 TXT agent="gpnpd", cname="dsctw", guid="c5323627b2484f8fbf20e67a2c4624e1", host="dsctw21", pid="12420" Flags: 0x101
gpnpd h:dsctw22 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 SRV Target: dsctw22.dsctw Protocol: tcp Port: 60348 Weight: 0 Priority: 0 Flags: 0x101
gpnpd h:dsctw22 c:dsctw u:c5323627b2484f8fbf20e67a2c4624e1.gpnpa2c4624e1 TXT agent="gpnpd", cname="dsctw", guid="c5323627b2484f8fbf20e67a2c4624e1", host="dsctw22", pid="13141" Flags: 0x101
CSSHub1.hubCSS SRV Target: dsctw21.dsctw Protocol: gipc Port: 0 Weight: 0 Priority: 0 Flags: 0x101
CSSHub1.hubCSS TXT HOSTQUAL="dsctw" Flags: 0x101
Net-X-1.oraAsm SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Net-X-2.oraAsm SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 1526 Weight: 0 Priority: 0 Flags: 0x101
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 14123 Weight: 0 Priority: 0 Flags: 0x315
dsctw.Oracle-GNS TXT CLUSTER_NAME="dsctw", CLUSTER_GUID="c5323627b2484f8fbf20e67a2c4624e1", NODE_NAME="dsctw21", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0", PROTOCOL_VERSION="0xc200000", DOMAIN="dscgrid.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 39923 Weight: 0 Priority: 0 Flags: 0x315
--> Most GNS entries for our Member cluster were deleted
Re-Executing GRID setup fails with [FATAL] [INS-30024]
After an unclean deinstallation gridSetup.sh fails with [FATAL] [INS-30024]: instead of offering the option to install a NEW cluster, the installer offers the GRID upgrade option.
Debugging with strace
[grid@dsctw21 grid]$ gridSetup.sh -silent -skipPrereqs -responseFile /home/grid/grid_dsctw2.rsp oracle.install.asm.SYSASMPassword=sys oracle.install.asm.monitorPassword=sys 2>llog2
Launching Oracle Grid Infrastructure Setup Wizard...
[FATAL] [INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home (/u01/app/122/grid), is not a valid Oracle home.
ACTION: Ensure that either there are no environment variables pointing to this invalid location or register the location as an Oracle home in the central inventory.

Using strace to trace system calls:

[grid@dsctw21 grid]$ strace -f gridSetup.sh -silent -skipPrereqs -responseFile /home/grid/grid_dsctw2.rsp oracle.install.asm.SYSASMPassword=sys oracle.install.asm.monitorPassword=sys 2>llog
Launching Oracle Grid Infrastructure Setup Wizard...
[FATAL] [INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home (/u01/app/122/grid), is not a valid Oracle home.
ACTION: Ensure that either there are no environment variables pointing to this invalid location or register the location as an Oracle home in the central inventory.

Check the log file for failed open calls, or for open calls that should fail in a CLEAN installation env:

[grid@dsctw21 grid]$ grep open llog
..
[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93
[pid 11525] open("/etc/oracle/ocr.loc", O_RDONLY) = 93

-> It seems the installer tests for the files /etc/oracle/ocr.loc and /etc/oracle/olr.loc to decide whether this is an upgrade or a new installation.

Fix: Rename ocr.loc and olr.loc:

[root@dsctw21 ~]# mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc_tbd
[root@dsctw21 ~]# mv /etc/oracle/olr.loc /etc/oracle/olr.loc_tbd

Now gridSetup.sh should start the installation process.
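When scanning such a trace it helps to separate open() calls that failed from those that succeeded; a small sketch against the same llog file (the patterns assume strace's usual '= -1 ENOENT' style for failed calls):

# open() calls that failed - expected for ocr.loc/olr.loc in a CLEAN env
grep -E 'open(at)?\(' llog | grep ' = -1 '
# open() calls on the clusterware config files that succeeded
grep -E 'open\("/etc/oracle/(ocr|olr)\.loc"' llog | grep -v ' = -1 '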
Install 12.2 Oracle Domain Service Cluster in a VirtualBox env
Overview Domain Service Cluster
-> From Cluster Domains ORACLE WHITE PAPER
Domain Services Cluster Key Facts
DSC:
The Domain Services Cluster is the heart of the Cluster Domain, as it is configured to provide the services that will be utilized by the various Member Clusters within the Cluster Domain. As per the name, it is a cluster itself, thus providing the required high availability and scalability for the provisioned services.
GIMR :
The centralized GIMR is host to cluster health and diagnostic information for all the clusters in the Cluster Domain. As such, it is accessed by the client applications of the Autonomous Health Framework (AHF), the Trace File Analyzer (TFA) facility and the Rapid Home Provisioning (RHP) Server across the Cluster Domain. Thus, it acts in support of the DSC's role as the management hub.
IOServer [ promised with 12.1 - finally implemented with 12.2 ]
Configuring the Database Member Cluster to use an indirect I/O path to storage is simpler still, requiring no locally configured shared storage, thus dramatically improving the ease of deploying new clusters and of changing the shared storage for those clusters (adding disks to the storage is done at the DSC, an operation invisible to the Database Member Cluster). Instead, all database I/O operations are channeled through the IOServer processes on the DSC. From the database instances on the Member Cluster, the database's data files are fully accessible and seen as individual files, exactly as they would be with locally attached shared storage.

The real difference is that the actual I/O operation is handed off to the IOServers on the DSC instead of being processed locally on the nodes of the Member Cluster. The major benefit of this approach is that new Database Member Clusters don't need to be configured with locally attached shared storage, making deployment simpler and easier.
Rapid Home Provisioning Server
The Domain Services Cluster may also be configured to host a Rapid Home Provisioning (RHP) server. RHP is used to manage the provisioning, patching and upgrading of the Oracle Database and GI software stacks and any other critical software across the Member Clusters in the Cluster Domain. Through this service, the RHP server would be used to maintain the currency of the installations on the Member Clusters as RHP clients, thus simplifying and standardizing the deployments across the Cluster Domain.
The services available consist of:
- Grid Infrastructure Management Repository (GIMR)
- ASM Storage service
- IOServer service
- Rapid Home Provisioning Server
Domain Service Cluster Resources
- If you think that a 12.1.0.2 RAC installation is a resource monster, then you are completely wrong
- A 12.2 Domain Service Cluster installation will eat up even much more resources
Memory resource calculation when trying to set up a Domain Service Cluster with 16 GByte memory:
  VM DSC System1 [ running GIMR Database ]     : 7 GByte
  VM DSC System2 [ NOT running GIMR Database ] : 6 GByte
  VM NameServer                                : 1 GByte
  Windows 7 Host                               : 2 GByte

I really think we need 32 GByte memory for running a Domain Service Cluster ... But as I'm waiting on a 16 GByte memory upgrade I will try to run the setup with 16 GByte memory.

The major problem is the GIMR database memory requirements [ see DomainServicesCluster_GIMR.dbc ]:
  - sga_target           : 4 GByte
  - pga_aggregate_target : 2 GByte

This will kill my above 16 GByte setup, so I need to change DomainServicesCluster_GIMR.dbc.
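The DBCA template is an XML file, so the two memory parameters can be located and lowered before starting the installer; this is only a sketch - the template path and the way the parameters are stored inside the file are assumptions, and the template should be backed up first:

# locate the GIMR template shipped with the GRID home (path is an assumption)
find /u01/app/122/grid -name DomainServicesCluster_GIMR.dbc
# inspect the memory settings inside the template
grep -iE 'sga_target|pga_aggregate_target' /u01/app/122/grid/assistants/dbca/templates/DomainServicesCluster_GIMR.dbc
# after taking a copy, lower the values with an editor to fit a 16 GByte host,
# e.g. sga_target to 2G and pga_aggregate_target to 1G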
Disk Requirements

Shared disks:
  03.05.2017 19:09    21.476.933.632 asm1_dsc_20G.vdi
  03.05.2017 19:09    21.476.933.632 asm2_dsc_20G.vdi
  03.05.2017 19:09    21.476.933.632 asm3_dsc_20G.vdi
  03.05.2017 19:09    21.476.933.632 asm4_dsc_20G.vdi
  03.05.2017 19:09   107.376.279.552 asm5_GIMR_100G.vdi
  03.05.2017 19:09   107.376.279.552 asm6_GIMR_100G.vdi
  03.05.2017 19:09   107.376.279.552 asm7_GIMR_100G.vdi

  Disk Group +DATA : 4 x 20 GByte  : Mode : Normal
  Disk Group +GIMR : 3 x 100 GByte : Mode : External
    : Space required during installation : 289 GByte
    : Space provided                     : 300 GByte

Local disks:
  04.05.2017 08:48    22.338.863.104 dsctw21.vdi
  03.05.2017 21:44                 0 dsctw21_OBASE_120G.vdi
  03.05.2017 18:03   <DIR>           dsctw22
  04.05.2017 08:48    15.861.809.152 dsctw22.vdi
  03.05.2017 21:43                 0 dsctw22_OBASE_120G.vdi

  Per RAC VM : 50 GByte for OS, swap and GRID software installation
             : 120 GByte for ORACLE_BASE
    : Space required for ORACLE_BASE during installation : 102 GByte
    : Space provided                                      : 120 GByte

This translates to about 450 GByte of disk space for installing a Domain Service Cluster.

Note:
-> Disk space requirements are quite large for this type of installation
-> For the GIMR disk group we need 300 GByte space with EXTERNAL redundancy
Network Requirements
GNS Entry
Name Server Entry for GNS
$ORIGIN ggrid.example.com.
@ IN NS ggns.ggrid.example.com. ; NS grid.example.com
IN NS ns1.example.com. ; NS example.com
ggns IN A 192.168.5.60 ; glue record
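Before starting the GRID installation it is worth checking that the glue record resolves from the cluster nodes; a short sketch against the name server used throughout this post (192.168.5.50; the host name 'test' below is made up):

# the GNS VIP glue record must resolve via the corporate DNS
nslookup ggns.ggrid.example.com 192.168.5.50
# names below the delegated sub-domain only resolve once GNS is up,
# so a failure here is expected before the installation
nslookup test.ggrid.example.com 192.168.5.50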
NOTE : For the GIMR disk group we need 300 GByte space with EXTERNAL redundancy
Cluvfy commands to verify our RAC VMs
- For OS setup please read
Install Oracle RAC 12.2 ( 12cR2 ) PM – Policy Managed
[grid@dsctw21 linuxx64_12201_grid_home]$ cd /media/sf_kits/Oracle/122/linuxx64_12201_grid_home
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp admprv -n "ractw21,ractw22" -o user_equiv -verbose -fixup
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh stage -pre crsinst -fixup -n dsctw21
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp gns -precrsinst -domain dsctw2.example.com -vip 192.168.5.60 -verbose
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp gns -precrsinst -domain dsctww2.example.com -vip 192.168.5.60 -verbose
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp dns -server -domain ggrid.example.com -vipaddress 192.168.5.60/255.255.255.0/enp0s8 -verbose -method root
-> The server command should block here
[grid@dsctw21 linuxx64_12201_grid_home]$ ./runcluvfy.sh comp dns -client -domain dsctw2.example.com -vip 192.168.5.60 -method root -verbose -last
-> The client command with -last should terminate the server too

Only memory related errors like PRVF-7530 and DNS configuration check errors can be ignored if you run your VM with less than 8 GByte memory:
Verifying Physical Memory ...FAILED
dsctw21: PRVF-7530 : Sufficient physical memory is not available on node "dsctw21" [Required physical memory = 8GB (8388608.0KB)]

Task DNS configuration check - This task verifies if GNS subdomain delegation has been implemented in the DNS.
This warning can be ignored too, as GNS is not running YET.
Create ASM Disks
Create the ASM disks for the +DATA disk group holding OCR and Voting Disks:

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm1_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8c914ad2-30c0-4c4d-88e0-ff94aef761c8
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm2_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 72791d07-9b21-41dd-8630-483902343e22
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm3_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 7f5684e6-e4d2-47ab-8166-b259e3e626e5
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\RACTW2\asm4_dsc_20G.vdi --size 20480 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 2c564704-46ad-4f37-921b-e56f0812c0bf

M:\VM\DSCRACTW2>VBoxManage modifyhd asm1_dsc_20G.vdi --type shareable
M:\VM\DSCRACTW2>VBoxManage modifyhd asm2_dsc_20G.vdi --type shareable
M:\VM\DSCRACTW2>VBoxManage modifyhd asm3_dsc_20G.vdi --type shareable
M:\VM\DSCRACTW2>VBoxManage modifyhd asm4_dsc_20G.vdi --type shareable

M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3_dsc_20G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4_dsc_20G.vdi --mtype shareable

Create and attach the GIMR disks:

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\asm5_GIMR_100G.vdi --size 102400 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8604878c-8c73-421a-b758-4ef5bf0a3d61
M:\VM\DSCRACTW2>VBoxManage modifyhd asm5_GIMR_100G.vdi --type shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 5 --device 0 --type hdd --medium asm5_GIMR_100G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 5 --device 0 --type hdd --medium asm5_GIMR_100G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\asm6_GIMR_100G.vdi --size 102400 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8604878c-8c73-421a-b758-4ef5bf0a3d61
M:\VM\DSCRACTW2>VBoxManage modifyhd asm6_GIMR_100G.vdi --type shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 6 --device 0 --type hdd --medium asm6_GIMR_100G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 6 --device 0 --type hdd --medium asm6_GIMR_100G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\asm7_GIMR_100G.vdi --size 102400 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8604878c-8c73-421a-b758-4ef5bf0a3d61
M:\VM\DSCRACTW2>VBoxManage modifyhd asm7_GIMR_100G.vdi --type shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 7 --device 0 --type hdd --medium asm7_GIMR_100G.vdi --mtype shareable
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 7 --device 0 --type hdd --medium asm7_GIMR_100G.vdi --mtype shareable

Create and attach the ORACLE_BASE disks - each VM gets its own ORACLE_BASE disk:

M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\dsctw21_OBASE_120G.vdi --size 122800 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 35ab9546-2967-4f43-9a52-305906ff24e1
M:\VM\DSCRACTW2>VBoxManage createhd --filename M:\VM\DSCRACTW2\dsctw22_OBASE_120G.vdi --size 122800 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 32e1fcaa-9609-4027-968e-2d35d33584a8
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw21 --storagectl "SATA" --port 8 --device 0 --type hdd --medium dsctw21_OBASE_120G.vdi
M:\VM\DSCRACTW2>VBoxManage storageattach dsctw22 --storagectl "SATA" --port 8 --device 0 --type hdd --medium dsctw22_OBASE_120G.vdi

You may use parted to configure and mount the disk space. The Linux XFS file systems should NOW look like the following:

[root@dsctw21 app]# df / /u01 /u01/app/grid
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/ol_ractw21-root   15718400 9085996   6632404  58% /
/dev/mapper/ol_ractw21-u01    15718400 7409732   8308668  48% /u01
/dev/sdi1                    125683756   32928 125650828   1% /u01/app/grid
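The per-disk VBoxManage calls above are quite repetitive; on a Linux or macOS VirtualBox host the same work can be scripted in a small loop. A minimal sketch (VM names, the "SATA" controller name, ports and sizes follow the layout above; the directory path is an example - adjust to your environment):

#!/bin/bash
# create four shareable 20G ASM disks and attach them to both DSC VMs
DIR=/vm/DSCRACTW2
for i in 1 2 3 4; do
    disk=$DIR/asm${i}_dsc_20G.vdi
    VBoxManage createhd --filename $disk --size 20480 --format VDI --variant Fixed
    VBoxManage modifyhd $disk --type shareable
    for vm in dsctw21 dsctw22; do
        VBoxManage storageattach $vm --storagectl "SATA" --port $i --device 0 \
            --type hdd --medium $disk --mtype shareable
    done
done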
- See Chapter : Using parted to create a new ORACLE_BASE partition for a Domain Service Cluster in the following article
Disk protections for our ASM disks
- Disk label should be msdos
- To allow the installation process to pick up the disks, set the following protections (a udev sketch to make these settings persistent follows the listing below)
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
..
brw-rw----. 1 grid asmadmin 8, 16 May 5 08:21 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 32 May 5 08:21 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 48 May 5 08:21 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 64 May 5 08:21 /dev/sde
brw-rw----. 1 grid asmadmin 8, 80 May 5 08:21 /dev/sdf
brw-rw----. 1 grid asmadmin 8, 96 May 5 08:21 /dev/sdg
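To make the grid:asmadmin ownership and 0660 mode shown above survive a reboot, a udev rule is the usual approach; a minimal sketch (the rule file name and the simple KERNEL match are assumptions - in a real setup match on stable identifiers such as ID_SERIAL instead of plain device names):

# /etc/udev/rules.d/99-oracle-asmdevices.rules (example content)
KERNEL=="sd[b-g]", OWNER="grid", GROUP="asmadmin", MODE="0660"

# reload and re-trigger udev, then re-check the permissions
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
ls -l /dev/sd[b-g]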
- If you need to recover from a failed installation and the disks are already labeled by AFD, please read:
Start the installation process
Unset the ORACLE_BASE environment variable and extract the GRID software:

[grid@dsctw21 grid]$ unset ORACLE_BASE
[grid@dsctw21 ~]$ cd $GRID_HOME
[grid@dsctw21 grid]$ pwd
/u01/app/122/grid
[grid@dsctw21 grid]$ unzip -q /media/sf_kits/Oracle/122/linuxx64_12201_grid_home.zip

As root, allow X-Windows applications to run on this node from any host:

[root@dsctw21 ~]# xhost +
access control disabled, clients can connect from any host
[grid@dsctw21 grid]$ export DISPLAY=:0.0

If you are running a test env with low memory resources [ <= 16 GByte ], don't forget to limit the GIMR memory requirements by reading: Start of GIMR database fails during 12.2 installation

Now start the Oracle Grid Infrastructure installer by running the following command:

[grid@dsctw21 grid]$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...
Initial Installation Steps
Run the required server root scripts:

[root@dsctw22 app]# /u01/app/oraInventory/orainstRoot.sh

Run root.sh on the first RAC node:

[root@dsctw21 ~]# /u01/app/122/grid/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/122/grid
...
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/122/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/dsctw21/crsconfig/rootcrs_dsctw21_2017-05-04_12-22-04AM.log
2017/05/04 12:22:07 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/05/04 12:22:07 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/05/04 12:22:07 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/05/04 12:22:07 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/05/04 12:22:09 CLSRSC-363: User ignored prerequisites during installation
2017/05/04 12:22:09 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/05/04 12:22:11 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/05/04 12:22:12 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/05/04 12:22:13 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/05/04 12:22:16 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/05/04 12:22:16 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/05/04 12:22:18 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/05/04 12:22:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/05/04 12:22:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/05/04 12:22:20 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/05/04 12:22:21 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/05/04 12:22:23 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2017/05/04 12:22:24 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/05/04 12:22:28 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dsctw21'
CRS-2673: Attempting to stop 'ora.ctssd' on 'dsctw21'
....
CRS-2676: Start of 'ora.diskmon' on 'dsctw21' succeeded
CRS-2676: Start of 'ora.cssd' on 'dsctw21' succeeded
Disk label(s) created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170504PM122337.log for details.
Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170504PM122337.log for details.
2017/05/04 12:24:28 CLSRSC-482: Running command: '/u01/app/122/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'dsctw21'
CRS-2672: Attempting to start 'ora.storage' on 'dsctw21'
CRS-2676: Start of 'ora.storage' on 'dsctw21' succeeded
CRS-2676: Start of 'ora.crf' on 'dsctw21' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dsctw21'
CRS-2676: Start of 'ora.crsd' on 'dsctw21' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk c397468902ba4f76bf99287b7e8b1e91.
Successful addition of voting disk fbb3600816064f02bf3066783b703f6d.
Successful addition of voting disk f5dec135cf474f56bf3a69bdba629daf.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   c397468902ba4f76bf99287b7e8b1e91 (AFD:DATA1) [DATA]
 2. ONLINE   fbb3600816064f02bf3066783b703f6d (AFD:DATA2) [DATA]
 3. ONLINE   f5dec135cf474f56bf3a69bdba629daf (AFD:DATA3) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dsctw21'
CRS-2673: Attempting to stop 'ora.crsd' on 'dsctw21'
..
CRS-2677: Stop of 'ora.driver.afd' on 'dsctw21' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'dsctw21' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dsctw21' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/05/04 12:25:58 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
..
CRS-2676: Start of 'ora.crsd' on 'dsctw21' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: dsctw21
CRS-6016: Resource auto-start has completed for server dsctw21
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/05/04 12:28:36 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/05/04 12:28:36 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.net1.network' on 'dsctw21'
CRS-2676: Start of 'ora.net1.network' on 'dsctw21' succeeded
..
CRS-2676: Start of 'ora.DATA.dg' on 'dsctw21' succeeded
2017/05/04 12:31:44 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
Disk label(s) created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170504PM123151.log for details.
2017/05/04 12:38:07 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run root.sh on the second node:

[root@dsctw22 app]# /u01/app/122/grid/root.sh
Performing root user operation.
..
2017/05/04 12:47:44 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2017/05/04 12:47:54 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/05/04 12:48:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After all root scripts have finished, continue the installation process!
- After the GIMR database has been created and the installation process has run a final (hopefully successful) cluvfy check, verify your installation logs:
Install Logs Location
/u01/app/oraInventory/logs/GridSetupActions2017-05-05_02-24-23PM/gridSetupActions2017-05-05_02-24-23PM.log
Verify Domain Service Cluster setup using cluvfy
[grid@dsctw21 ~]$ cluvfy stage -post crsinst -n dsctw21,dsctw22
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast check ...PASSED
Verifying ASM filter driver configuration consistency ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Cluster Manager Integrity ...PASSED
Verifying User Mask ...PASSED
Verifying Cluster Integrity ...PASSED
Verifying OCR Integrity ...PASSED
Verifying CRS Integrity ...
  Verifying Clusterware Version Consistency ...PASSED
Verifying CRS Integrity ...PASSED
Verifying Node Application Existence ...PASSED
Verifying Single Client Access Name (SCAN) ...
  Verifying DNS/NIS name service 'dsctw2-scan.dsctw2.dsctw2.example.com' ...
    Verifying Name Service Switch Configuration File Integrity ...PASSED
  Verifying DNS/NIS name service 'dsctw2-scan.dsctw2.dsctw2.example.com' ...PASSED
Verifying Single Client Access Name (SCAN) ...PASSED
Verifying OLR Integrity ...PASSED
Verifying GNS Integrity ...
  Verifying subdomain is a valid name ...PASSED
  Verifying GNS VIP belongs to the public network ...PASSED
  Verifying GNS VIP is a valid address ...PASSED
  Verifying name resolution for GNS sub domain qualified names ...PASSED
  Verifying GNS resource ...PASSED
  Verifying GNS VIP resource ...PASSED
Verifying GNS Integrity ...PASSED
Verifying Voting Disk ...PASSED
Verifying ASM Integrity ...
  Verifying Node Connectivity ...
    Verifying Hosts File ...PASSED
    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
    Verifying subnet mask consistency for subnet "192.168.2.0" ...PASSED
    Verifying subnet mask consistency for subnet "192.168.5.0" ...PASSED
  Verifying Node Connectivity ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying ASM disk group free space ...PASSED
Verifying I/O scheduler ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Clock Synchronization ...
  CTSS is in Observer state. Switching over to clock synchronization checks using NTP
  Verifying Network Time Protocol (NTP) ...
    Verifying '/etc/chrony.conf' ...PASSED
    Verifying '/var/run/chronyd.pid' ...PASSED
    Verifying Daemon 'chronyd' ...PASSED
    Verifying NTP daemon or service using UDP port 123 ...PASSED
    Verifying chrony daemon is synchronized with at least one external time source ...PASSED
  Verifying Network Time Protocol (NTP) ...PASSED
Verifying Clock Synchronization ...PASSED
Verifying Network configuration consistency checks ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         May 7, 2017 10:10:04 AM
CVU home:                     /u01/app/122/grid/
User:                         grid
Check cluster Resources used by DSC
[root@dsctw21 ~]# crs
***** Local Resources: *****
Resource NAME             TARGET     STATE      SERVER       STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.ASMNET1LSNR_ASM.lsnr  ONLINE     ONLINE     dsctw21      STABLE
ora.DATA.dg               ONLINE     ONLINE     dsctw21      STABLE
ora.LISTENER.lsnr         ONLINE     ONLINE     dsctw21      STABLE
ora.MGMT.GHCHKPT.advm     ONLINE     ONLINE     dsctw21      STABLE
ora.MGMT.dg               ONLINE     ONLINE     dsctw21      STABLE
ora.chad                  ONLINE     ONLINE     dsctw21      STABLE
ora.helper                ONLINE     ONLINE     dsctw21      IDLE,STABLE
ora.mgmt.ghchkpt.acfs     ONLINE     ONLINE     dsctw21      mounted on /mnt/oracle/rhpimages/chkbase,STABLE
ora.net1.network          ONLINE     ONLINE     dsctw21      STABLE
ora.ons                   ONLINE     ONLINE     dsctw21      STABLE
ora.proxy_advm            ONLINE     ONLINE     dsctw21      STABLE
***** Cluster Resources: *****
Resource NAME               INST TARGET       STATE        SERVER          STATE_DETAILS
--------------------------- ---- ------------ ------------ --------------- -----------------------------
ora.LISTENER_SCAN1.lsnr     1    ONLINE       ONLINE       dsctw21         STABLE
ora.LISTENER_SCAN2.lsnr     1    ONLINE       ONLINE       dsctw21         STABLE
ora.LISTENER_SCAN3.lsnr     1    ONLINE       ONLINE       dsctw21         STABLE
ora.MGMTLSNR                1    ONLINE       ONLINE       dsctw21         169.254.108.231 192.168.2.151,STABLE
ora.asm                     1    ONLINE       ONLINE       dsctw21         Started,STABLE
ora.asm                     2    ONLINE       OFFLINE      -               STABLE
ora.asm                     3    OFFLINE      OFFLINE      -               STABLE
ora.cvu                     1    ONLINE       ONLINE       dsctw21         STABLE
ora.dsctw21.vip             1    ONLINE       ONLINE       dsctw21         STABLE
ora.dsctw22.vip             1    ONLINE       INTERMEDIATE dsctw21         FAILED OVER,STABLE
ora.gns                     1    ONLINE       ONLINE       dsctw21         STABLE
ora.gns.vip                 1    ONLINE       ONLINE       dsctw21         STABLE
ora.ioserver                1    ONLINE       OFFLINE      -               STABLE
ora.ioserver                2    ONLINE       ONLINE       dsctw21         STABLE
ora.ioserver                3    ONLINE       OFFLINE      -               STABLE
ora.mgmtdb                  1    ONLINE       ONLINE       dsctw21         Open,STABLE
ora.qosmserver              1    ONLINE       ONLINE       dsctw21         STABLE
ora.rhpserver               1    ONLINE       ONLINE       dsctw21         STABLE
ora.scan1.vip               1    ONLINE       ONLINE       dsctw21         STABLE
ora.scan2.vip               1    ONLINE       ONLINE       dsctw21         STABLE
ora.scan3.vip               1    ONLINE       ONLINE       dsctw21         STABLE
The following resources should be ONLINE for a DSC cluster:
-> ioserver
-> mgmtdb
-> rhpserver
- If any of these resources is not ONLINE, try to start it with: $ srvctl start <resource>
Verify Domain Service Cluster setup using srvctl, rhpctl, asmcmd
[grid@dsctw21 peer]$ rhpctl query server
Rapid Home Provisioning Server (RHPS): dsctw2
Storage base path: /mnt/oracle/rhpimages
Disk Groups: MGMT
Port number: 23795

[grid@dsctw21 peer]$ rhpctl query workingcopy
No software home has been configured

[grid@dsctw21 peer]$ rhpctl query image
No image has been configured

Check ASM disk groups:

[grid@dsctw21 peer]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304     81920    81028            20480           30274              0             Y  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304    307200   265376                0          265376              0             N  MGMT/

Verify GNS:

[grid@dsctw21 peer]$ srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.5.60
Domain served by GNS: dsctw2.example.com

[grid@dsctw21 peer]$ srvctl config gns -list
dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314
dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.234 Unique Flags: 0x1
dsctw2-scan.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan2-vip.dsctw2 A 192.168.5.235 Unique Flags: 0x1
dsctw2-scan3-vip.dsctw2 A 192.168.5.234 Unique Flags: 0x1

[grid@dsctw21 peer]$ nslookup dsctw2-scan.dsctw2.example.com
Server:         192.168.5.50
Address:        192.168.5.50#53
Non-authoritative answer:
Name:   dsctw2-scan.dsctw2.example.com
Address: 192.168.5.234
Name:   dsctw2-scan.dsctw2.example.com
Address: 192.168.5.231
Name:   dsctw2-scan.dsctw2.example.com
Address: 192.168.5.235

Verify the Management Repository:

[grid@dsctw21 peer]$ oclumon manage -get MASTER
Master = dsctw21

[grid@dsctw21 peer]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node dsctw21

[grid@dsctw21 peer]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
Oracle user: grid
Spfile: +MGMT/_MGMTDB/PARAMETERFILE/spfile.272.943198901
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: dsctw2
Database instance: -MGMTDB
--> The PDB name and service name GIMR_DSCREP_10 are NEW with 12.2 - with lower versions you get the cluster name here!

[grid@dsctw21 peer]$ oclumon manage -get reppath
CHM Repository Path = +MGMT/_MGMTDB/4EC81829D5715AD0E0539705A8C084C6/DATAFILE/sysmgmtdata.280.943199159

[grid@dsctw21 peer]$ asmcmd ls -ls +MGMT/_MGMTDB/4EC81829D5715AD0E0539705A8C084C6/DATAFILE/sysmgmtdata.280.943199159
Type      Redund  Striped  Time             Sys  Block_Size  Blocks       Bytes       Space  Name
DATAFILE  UNPROT  COARSE   MAY 05 17:00:00  Y          8192  262145  2147491840  2155872256  sysmgmtdata.280.943199159

[grid@dsctw21 peer]$ oclumon dumpnodeview -allnodes
----------------------------------------
Node: dsctw21 Clock: '2017-05-06 09.29.55+0200' SerialNo:4469
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 4 #vcpus: 4 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 26.48 cpusystem: 2.78 cpuuser: 23.70 cpunice: 0.00 cpuiowait: 0.05
cpusteal: 0.00 cpuq: 0 physmemfree: 695636 physmemtotal: 6708204 mcache: 2800060 swapfree: 7202032 swaptotal: 8257532 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048
ior: 311 iow: 229 ios: 92 swpin: 0 swpout: 0 pgin: 3 pgout: 40 netr: 32.601 netw: 27.318 procs: 479 procsoncpu: 3 #procs_blocked: 0 rtprocs: 17 rtprocsoncpu: N/A #fds: 34496
#sysfdlimit: 6815744 #disks: 14 #nics: 3 loadavg1: 2.24 loadavg5: 1.99 loadavg15: 1.89 nicErrors: 0
TOP CONSUMERS:
topcpu: 'gnome-shell(6512) 5.00' topprivmem: 'java(660) 347292' topshm: 'mdb_dbw0_-MGMTDB(28946) 352344' topfd: 'ocssd.bin(6204) 370' topthread: 'crsd.bin(8615) 52'
----------------------------------------
Node: dsctw22 Clock: '2017-05-06 09.29.55+0200' SerialNo:3612
----------------------------------------
SYSTEM:
#pcpus: 1 #cores: 4 #vcpus: 4 cpuht: N chipname: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz cpuusage: 1.70 cpusystem: 0.77 cpuuser: 0.92 cpunice: 0.00 cpuiowait: 0.00
cpusteal: 0.00 cpuq: 0 physmemfree: 828740 physmemtotal: 5700592 mcache: 2588336 swapfree: 8244596 swaptotal: 8257532 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048
ior: 2 iow: 68 ios: 19 swpin: 0 swpout: 0 pgin: 0 pgout: 63 netr: 10.747 netw: 18.222 procs: 376 procsoncpu: 1 #procs_blocked: 0 rtprocs: 15 rtprocsoncpu: N/A #fds: 29120
#sysfdlimit: 6815744 #disks: 14 #nics: 3 loadavg1: 1.44 loadavg5: 1.39 loadavg15: 1.43 nicErrors: 0
TOP CONSUMERS:
topcpu: 'orarootagent.bi(7345) 1.20' topprivmem: 'java(8936) 270140' topshm: 'ocssd.bin(5833) 119060' topfd: 'gnsd.bin(9072) 1242' topthread: 'crsd.bin(7137) 49'

Verify TFA status:

[grid@dsctw21 peer]$ tfactl print status
TFA-00099: Printing status of TFA
.-----------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+------+------------+----------------------+------------------+
| dsctw21 | RUNNING       | 32084 | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
| dsctw22 | RUNNING       | 3929  | 5000 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'---------+---------------+-------+------+------------+----------------------+------------------'

[grid@dsctw21 peer]$ tfactl print config
.------------------------------------------------------------------------------------.
|                                       dsctw21                                       |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 12.2.1.0.0 |
| Java Version                                                          | 1.8        |
| Public IP Network                                                     | true       |
| Automatic Diagnostic Collection                                       | true       |
| Alert Log Scan                                                        | true       |
| Disk Usage Monitor                                                    | true       |
| Managelogs Auto Purge                                                 | false      |
| Trimming of files during diagcollection                               | true       |
| Inventory Trace level                                                 | 1          |
| Collection Trace level                                                | 1          |
| Scan Trace level                                                      | 1          |
| Other Trace level                                                     | 1          |
| Repository current size (MB)                                          | 13         |
| Repository maximum size (MB)                                          | 10240      |
| Max Size of TFA Log (MB)                                              | 50         |
| Max Number of TFA Logs                                                | 10         |
| Max Size of Core File (MB)                                            | 20         |
| Max Collection Size of Core Files (MB)                                | 200        |
| Minimum Free Space to enable Alert Log Scan (MB)                      | 500        |
| Time interval between consecutive Disk Usage Snapshot(minutes)        | 60         |
| Time interval between consecutive Managelogs Auto Purge(minutes)      | 60         |
| Logs older than the time period will be auto purged(days[d]|hours[h]) | 30d        |
| Automatic Purging                                                     | true       |
| Age of Purging Collections (Hours)                                    | 12         |
| TFA IPS Pool Size                                                     | 5          |
'-----------------------------------------------------------------------+------------'
.------------------------------------------------------------------------------------.
|                                       dsctw22                                       |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 12.2.1.0.0 |
| Java Version                                                          | 1.8        |
| Public IP Network                                                     | true       |
| Automatic Diagnostic Collection                                       | true       |
| Alert Log Scan                                                        | true       |
| Disk Usage Monitor                                                    | true       |
| Managelogs Auto Purge                                                 | false      |
| Trimming of files during diagcollection                               | true       |
| Inventory Trace level                                                 | 1          |
| Collection Trace level                                                | 1          |
| Scan Trace level                                                      | 1          |
| Other Trace level                                                     | 1          |
| Repository current size (MB)                                          | 0          |
| Repository maximum size (MB)                                          | 10240      |
| Max Size of TFA Log (MB)                                              | 50         |
| Max Number of TFA Logs                                                | 10         |
| Max Size of Core File (MB)                                            | 20         |
| Max Collection Size of Core Files (MB)                                | 200        |
| Minimum Free Space to enable Alert Log Scan (MB)                      | 500        |
| Time interval between consecutive Disk Usage Snapshot(minutes)        | 60         |
| Time interval between consecutive Managelogs Auto Purge(minutes)      | 60         |
| Logs older than the time period will be auto purged(days[d]|hours[h]) | 30d        |
| Automatic Purging                                                     | true       |
| Age of Purging Collections (Hours)                                    | 12         |
| TFA IPS Pool Size                                                     | 5          |
'-----------------------------------------------------------------------+------------'

[grid@dsctw21 peer]$ tfactl print actions
.-----------------------------------------------------------.
| HOST | START TIME | END TIME | ACTION | STATUS | COMMENTS |
+------+------------+----------+--------+--------+----------+
'------+------------+----------+--------+--------+----------'

[grid@dsctw21 peer]$ tfactl print errors
Total Errors found in database: 0
DONE

[grid@dsctw21 peer]$ tfactl print startups
++++++ Startup Start +++++
Event Id     : nullfom14v2mu0u82nkf5uufjoiuia
File Name    : /u01/app/grid/diag/apx/+apx/+APX1/trace/alert_+APX1.log
Startup Time : Fri May 05 15:07:03 CEST 2017
Dummy        : FALSE
++++++ Startup End +++++
++++++ Startup Start +++++
Event Id     : nullgp6ei43ke5qeqo8ugemsdqrle1
File Name    : /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
Startup Time : Fri May 05 14:58:28 CEST 2017
Dummy        : FALSE
++++++ Startup End +++++
++++++ Startup Start +++++
Event Id     : nullt7p1681pjq48qt17p4f8odrrgf
File Name    : /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/trace/alert_-MGMTDB.log
Startup Time : Fri May 05 15:27:13 CEST 2017
Dummy        : FALSE
Potential Error: ORA-845 starting IOServer Instances
[grid@dsctw21 ~]$ srvctl start ioserver PRCR-1079 : Failed to start resource ora.ioserver CRS-5017: The resource action "ora.ioserver start" encountered the following error: ORA-00845: MEMORY_TARGET not supported on this system . For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/dsctw22/crs/trace/crsd_oraagent_grid.trc". CRS-2674: Start of 'ora.ioserver' on 'dsctw22' failed CRS-5017: The resource action "ora.ioserver start" encountered the following error: ORA-00845: MEMORY_TARGET not supported on this system . For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/dsctw21/crs/trace/crsd_oraagent_grid.trc". CRS-2674: Start of 'ora.ioserver' on 'dsctw21' failed CRS-2632: There are no more servers to try to place resource 'ora.ioserver' on that would satisfy its placement policy From +IOS1 alert.log : ./diag/ios/+ios/+IOS1/trace/alert_+IOS1.log WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 4513071104 bytes.
/dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected.
Current available is 2117439488 and used is 1317158912 bytes. Ensure that the mount point is /dev/shm for this directory. Verify /dev/shm [root@dsctw22 ~]# df -h /dev/shm Filesystem Size Used Avail Use% Mounted on tmpfs 2.8G 1.3G 1.5G 46% /dev/shm Modify /etc/fstab # /etc/fstab # Created by anaconda on Tue Apr 4 12:13:16 2017 # # tmpfs /dev/shm tmpfs defaults,size=6g 0 0 and increase /dev/shm to 6 GByte. Remount tmpfs [root@dsctw22 ~]# mount -o remount tmpfs [root@dsctw22 ~]# df -h /dev/shm Filesystem Size Used Avail Use% Mounted on tmpfs 6.0G 1.3G 4.8G 21% /dev/shm
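Before restarting the IO server you can quickly check whether /dev/shm is now large enough for the MEMORY_TARGET reported in the alert.log. A minimal sketch - the 4513071104-byte requirement is simply taken from the warning above, so adjust it if your alert.log reports a different value:
#!/bin/bash
# shm_check.sh - compare available space in /dev/shm with the size
# requested by MEMORY_TARGET (value taken from the +IOS1 alert.log warning)
REQUIRED=4513071104                                            # bytes, from the alert.log
AVAIL=$(df -B1 --output=avail /dev/shm | tail -1 | tr -d ' ')  # available bytes in /dev/shm
if [ "$AVAIL" -lt "$REQUIRED" ]; then
    echo "/dev/shm too small: $AVAIL bytes available, $REQUIRED bytes required"
    echo "Increase the size=... option in /etc/fstab and remount /dev/shm"
else
    echo "/dev/shm is large enough ($AVAIL bytes available)"
fi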
Do a silent installation
From the Grid Infrastructure Installation and Upgrade Guide, A.7.2 Running Postinstallation Configuration Using Response File: Complete this procedure to run configuration assistants with the executeConfigTools command. Edit the response file and specify the required passwords for your configuration. You can use the response file created during installation, located at $ORACLE_HOME/install/response/product_timestamp.rsp. [root@dsctw21 ~]# ls -l $ORACLE_HOME/install/response/ total 112 -rw-r--r--. 1 grid oinstall 34357 Jan 26 17:10 grid_2017-01-26_04-10-28PM.rsp -rw-r--r--. 1 grid oinstall 35599 May 23 15:50 grid_2017-05-22_04-51-05PM.rsp
Verify the password settings for Oracle Grid Infrastructure: [root@dsctw21 ~]# cd $ORACLE_HOME/install/response/ [root@dsctw21 response]# grep -i passw grid_2017-05-22_04-51-05PM.rsp # Password for SYS user of Oracle ASM oracle.install.asm.SYSASMPassword=sys # Password for ASMSNMP account oracle.install.asm.monitorPassword=sys
I have not verified this, but it seems that not setting the passwords could lead to the following errors during the Member Cluster setup: [INS-30211] An unexpected exception occurred while extracting details from ASM client data PRCI-1167 : failed to extract atttributes from the specified file "/home/grid/FILES/mclu2.xml" PRCT-1453 : failed to get ASM properties from ASM client data file /home/grid/FILES/mclu2.xml KFOD-00319: failed to read the credential file /home/grid/FILES/mclu2.xml
[grid@dsctw21 grid]$ gridSetup.sh -silent -skipPrereqs -responseFile grid_2017-05-22_04-51-05PM.rsp Launching Oracle Grid Infrastructure Setup Wizard... .. You can find the log of this install session at: /u01/app/oraInventory/logs/GridSetupActions2017-05-20_12-17-29PM/gridSetupActions2017-05-20_12-17-29PM.log As a root user, execute the following script(s): 1. /u01/app/oraInventory/orainstRoot.sh 2. /u01/app/122/grid/root.sh Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: [dsctw22] Execute /u01/app/122/grid/root.sh on the following nodes: [dsctw21, dsctw22] Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes. Successfully Setup Software. As install user, execute the following command to complete the configuration. /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_dsctw2.rsp [-silent]
-> Run the root.sh scripts [grid@dsctw21 grid]$ /u01/app/122/grid/gridSetup.sh -executeConfigTools -responseFile grid_2017-05-22_04-51-05PM.rsp Launching Oracle Grid Infrastructure Setup Wizard... You can find the logs of this session at: /u01/app/oraInventory/logs/GridSetupActions2017-05-20_05-34-08PM
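Since missing ASM passwords in the response file seem to cause the KFOD/PRCI errors shown above, it can help to set them explicitly before running gridSetup.sh -silent. A minimal sed sketch - the response file name and the password 'sys' are just the values used in this test setup:
#!/bin/bash
# set_rsp_passwords.sh - sketch: set the ASM passwords in the GI response file
# before running gridSetup.sh -silent (test passwords only!)
RSP=$ORACLE_HOME/install/response/grid_2017-05-22_04-51-05PM.rsp
sed -i 's/^oracle.install.asm.SYSASMPassword=.*/oracle.install.asm.SYSASMPassword=sys/' "$RSP"
sed -i 's/^oracle.install.asm.monitorPassword=.*/oracle.install.asm.monitorPassword=sys/' "$RSP"
grep -i passw "$RSP"      # verify the settings, as shown above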
Backup OCR and export GNS
- Note: as the Member Cluster install has killed my shared GNS twice, it may be a good idea to back up the OCR and export GNS right NOW
Backup OCR [root@dsctw21 cfgtoollogs]# ocrconfig -manualbackup dsctw21 2017/05/22 19:03:53 +MGMT:/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833 0 [root@dsctw21 cfgtoollogs]# ocrconfig -showbackup PROT-24: Auto backups for the Oracle Cluster Registry are not available dsctw21 2017/05/22 19:03:53 +MGMT:/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833 0 Locate all OCR backups ASMCMD> find --type OCRBACKUP / * +MGMT/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833 ASMCMD> ls -l +MGMT/dsctw/OCRBACKUP/backup_20170522_190353.ocr.284.944679833 Type Redund Striped Time Sys Name OCRBACKUP UNPROT COARSE MAY 22 19:00:00 Y backup_20170522_190353.ocr.284.944679833 Export the GNS to a file [root@dsctw21 cfgtoollogs]# srvctl stop gns [root@dsctw21 cfgtoollogs]# srvctl export gns -instance /root/dsc-gns.export [root@dsctw21 cfgtoollogs]# srvctl start gns Dump GNS data [root@dsctw21 cfgtoollogs]# srvctl export gns -instance /root/dsc-gns.export [root@dsctw21 cfgtoollogs]# srvctl start gns [root@dsctw21 cfgtoollogs]# srvctl config gns -list dsctw21.CLSFRAMEdsctw SRV Target: 192.168.2.151.dsctw Protocol: tcp Port: 12642 Weight: 0 Priority: 0 Flags: 0x101 dsctw21.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101 dsctw22.CLSFRAMEdsctw SRV Target: 192.168.2.152.dsctw Protocol: tcp Port: 35675 Weight: 0 Priority: 0 Flags: 0x101 dsctw22.CLSFRAMEdsctw TXT NODE_ROLE="HUB", NODE_INCARNATION="0", NODE_TYPE="20" Flags: 0x101 dscgrid.example.com DLV 35418 10 18 ( /a+Iu8QgPs9k96CoQ6rFVQrqmGFzZZNKRo952Ujjkj8dcDlHSA+JMcEMHLC3niuYrM/eFeAj3iFpihrIEohHXQ== ) Unique Flags: 0x314 dscgrid.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAxnVyA60TYUeEKkNvEaWrAFg2oDXrFbR9Klx7M5N/UJadFtF8h1e32Bf8jpL6cq1yKRI3TVdrneuiag0OiQfzAycLjk98VUz+L3Q5
AHGYCta62Kjaq4hZOFcgF/BCmyY+6tWMBE8wdivv3CttCiH1U7x3FUqbgCb1iq3vMcS6X64k3MduhRankFmfs7zkrRuWJhXHfRaDz0mNXREeW2VvPyThXPs+EOPehaDhXRmJBWjBkeZNIaBTiR8j
KTTY1bSPzqErEqAYoH2lR4rAg9TVKjOkdGrAmJJ6AGvEBfalzo4CJtphAmygFd+/ItFm5koFb2ucFr1slTZz1HwlfdRVGwIDAQAB ) Unique Flags: 0x314 dscgrid.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314 dsctw-scan.dsctw A 192.168.5.225 Unique Flags: 0x81 dsctw-scan.dsctw A 192.168.5.227 Unique Flags: 0x81 dsctw-scan.dsctw A 192.168.5.232 Unique Flags: 0x81 dsctw-scan1-vip.dsctw A 192.168.5.232 Unique Flags: 0x81 dsctw-scan2-vip.dsctw A 192.168.5.227 Unique Flags: 0x81 dsctw-scan3-vip.dsctw A 192.168.5.225 Unique Flags: 0x81 dsctw21-vip.dsctw A 192.168.5.226 Unique Flags: 0x81 dsctw22-vip.dsctw A 192.168.5.235 Unique Flags: 0x81 dsctw-scan1-vip A 192.168.5.232 Unique Flags: 0x81 dsctw-scan2-vip A 192.168.5.227 Unique Flags: 0x81 dsctw-scan3-vip A 192.168.5.225 Unique Flags: 0x81 dsctw21-vip A 192.168.5.226 Unique Flags: 0x81 dsctw22-vip A 192.168.5.235 Unique Flags: 0x81
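The backup and export steps above can be combined into one small root script so they are easy to repeat before any risky change. This is only a sketch using the commands from this section; the timestamped export file name is my own choice.
#!/bin/bash
# backup_ocr_gns.sh - sketch: manual OCR backup plus GNS export (run as root)
export GRID_HOME=/u01/app/122/grid
TS=$(date +%Y%m%d_%H%M%S)                          # timestamp for the export file
$GRID_HOME/bin/ocrconfig -manualbackup             # OCR backup goes to the +MGMT diskgroup
$GRID_HOME/bin/srvctl stop gns
$GRID_HOME/bin/srvctl export gns -instance /root/dsc-gns.export.$TS
$GRID_HOME/bin/srvctl start gns
$GRID_HOME/bin/ocrconfig -showbackup               # list the available OCR backups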
Start of GIMR database fails during 12.2 installation
Status of a failed 12.2 GIMR startup
- You’re starting a 12.2 Grid Infrastructure / RAC installation, either as
- a standalone RAC cluster or
- a Domain Services Cluster
- Your memory capacity planning looks like the following
Memory resources when trying to set up a Domain Services Cluster with 16 GByte of memory:
VM DSC System1 [ running GIMR database ]     : 7 GByte
VM DSC System2 [ NOT running GIMR database ] : 6 GByte
VM NameServer                                : 1 GByte
Windows 7 host                               : 2 GByte
- You’re installing the RAC environment in a VirtualBox environment and you have only 16 GByte of memory
- The installation works fine until the start of the GIMR database [ near the end of the GRID installation process! ]
- Your VirtualBox Windows host freezes, or any of the RAC VMs reboot
- Sometimes the start of the GIMR database during the installation process fails with ORA-3113 or timeout errors when starting a database background process like DBWn, ..
- Check the related log file in /u01/app/grid/cfgtoollogs/dbca/_mgmtdb/
Typical Error when starting GIMR db during the 12.2 installation
[WARNING] [DBT-11209] Current available physical memory is less than the required physical memory (6,144MB) for creating the database. Registering database with Oracle Grid Infrastructure Copying database files Creating and starting Oracle instance DBCA Operation failed. Look at the log file "/u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details. Creating Container Database for Oracle Grid Infrastructure Management Repository failed. -> /u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log Registering database with Oracle Grid Infrastructure Copying database files .... DBCA_PROGRESS : 22% DBCA_PROGRESS : 36% [ 2017-05-04 18:01:38.759 CEST ] Creating and starting Oracle instance DBCA_PROGRESS : 37% [ 2017-05-04 18:07:20.869 CEST ] ORA-03113: end-of-file on communication channel [ 2017-05-04 18:07:55.288 CEST ] ORA-03113: end-of-file on communication channel [ 2017-05-04 18:11:07.145 CEST ] DBCA_PROGRESS : DBCA Operation failed.
Analyzing the problem of a failed GIMR db startup
- During startup, the shared memory segment created by the GIMR database is about 4 GByte
- If we are short on memory, the shmget system call [ shmget(key, SHM_SIZE, 0644 | IPC_CREAT) ] may fail and kill the installation process by:
- rebooting any of your RAC VMs
- freezing your VirtualBox host
- freezing all three RAC VMs
In any case the installation process gets terminated and you need to clean up your system and restart the installation, which will fail again until you add more memory.
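Before retrying the installation it is worth checking whether the VM actually has enough free memory and shared memory for that roughly 4 GByte segment. A minimal sketch using standard Linux tools (the 4096 MB threshold is only the approximate value discussed above):
#!/bin/bash
# gimr_mem_check.sh - sketch: check free memory and /dev/shm before retrying
NEEDED_MB=4096                                       # approx. size of the GIMR shared memory segment
FREE_MB=$(free -m | awk '/^Mem:/ {print $7}')        # "available" column of free -m (procps-ng / RHEL 7 format)
SHM_MB=$(df -m --output=avail /dev/shm | tail -1 | tr -d ' ')
echo "Available memory : ${FREE_MB} MB"
echo "Free /dev/shm    : ${SHM_MB} MB"
if [ "$FREE_MB" -lt "$NEEDED_MB" ] || [ "$SHM_MB" -lt "$NEEDED_MB" ]; then
    echo "Not enough memory for the GIMR database - add memory or shrink the GIMR template"
fi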
Potential Workaround by modifying DomainServicesCluster_GIMR.dbc
After you have extracted the GRID zip file, save the original DomainServicesCluster_GIMR.dbc and change the following settings [ in the diff below, lines starting with '<' are the new values, lines starting with '>' are the original ones ] Save and Modify DomainServicesCluster_GIMR.dbc [grid@dsctw21 FILES]# diff DomainServicesCluster_GIMR.dbc DomainServicesCluster_GIMR.dbc_ORIG < <initParam name="sga_target" value="800" unit="MB"/> > <initParam name="sga_target" value="4" unit="GB"/> < <initParam name="processes" value="200"/> > <initParam name="processes" value="2000"/> < <initParam name="open_cursors" value="300"/> < <initParam name="pga_aggregate_target" value="400" unit="MB"/> < <initParam name="target_pdbs" value="2"/> > <initParam name="open_cursors" value="600"/> > <initParam name="pga_aggregate_target" value="2" unit="GB"/> > <initParam name="target_pdbs" value="5"/> Copy the modified DomainServicesCluster_GIMR.dbc to its default location [grid@dsctw21 grid]$ cp /root/FILES/DomainServicesCluster_GIMR.dbc $GRID_HOME/assistants/dbca/templates [grid@dsctw21 grid]$ ls -l $GRID_HOME/assistants/dbca/templates/Domain* -rw-r--r--. 1 grid oinstall 5737 May 5 14:06 /u01/app/122/grid/assistants/dbca/templates/DomainServicesCluster_GIMR.dbc
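Instead of editing the template by hand, the parameter changes from the diff above can also be applied with sed. This is just a sketch reproducing the values shown in the diff; the GRID_HOME path is the one used throughout this article.
#!/bin/bash
# shrink_gimr_template.sh - sketch: apply the reduced GIMR settings from the diff above
export GRID_HOME=/u01/app/122/grid
DBC=$GRID_HOME/assistants/dbca/templates/DomainServicesCluster_GIMR.dbc
cp $DBC $DBC.ORIG                                 # keep the original template
sed -i -e 's|name="sga_target" value="4" unit="GB"|name="sga_target" value="800" unit="MB"|' \
       -e 's|name="processes" value="2000"|name="processes" value="200"|' \
       -e 's|name="open_cursors" value="600"|name="open_cursors" value="300"|' \
       -e 's|name="pga_aggregate_target" value="2" unit="GB"|name="pga_aggregate_target" value="400" unit="MB"|' \
       -e 's|name="target_pdbs" value="5"|name="target_pdbs" value="2"|' $DBC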
- Don’t use this in any Production System – This is for testing ONLY !
-> Now start the installation process by invoking gridSetup.sh in $GRID_HOME !
Recreate GNS 12.2
Overview
- During a 12.2 Domain Services Cluster installation I’ve filled in the wrong GNS subdomain name
- This means nslookup for my SCAN address doesn’t work
- The final cluvfy command reports the error: PRVF-5218 : Domain name “dsctw21-vip.dsctw2.example.com” did not resolve to an IP address.
-> So this was a good exercise to verify whether my older 12.1 article on recreating GNS also works with 12.2 !
Backup your RAC profile and local OCR
As of 12.x/11.2 Grid Infrastructure, the private network configuration is not only stored in OCR but also in the gpnp profile - please take a backup of profile.xml
on all cluster nodes before proceeding, as grid user: [grid@dsctw21 peer]$ cd $GRID_HOME/gpnp/dsctw21/profiles/peer/ [grid@dsctw21 peer]$ cp profile.xml profile.xml_backup_5-Mai-2017 [root@dsctw21 ~]# export GRID_HOME=/u01/app/122/grid [root@dsctw21 ~]# $GRID_HOME/bin/ocrconfig -local -manualbackup dsctw21 2017/05/05 17:12:50 /u01/app/122/grid/cdata/dsctw21/backup_20170505_171250.olr 0 dsctw21 2017/05/05 15:07:41 /u01/app/122/grid/cdata/dsctw21/backup_20170505_150741.olr 0 [grid@dsctw21 peer]$ $GRID_HOME/bin/ocrconfig -local -showbackup dsctw21 2017/05/05 17:12:50 /u01/app/122/grid/cdata/dsctw21/backup_20170505_171250.olr 0 dsctw21 2017/05/05 15:07:41 /u01/app/122/grid/cdata/dsctw21/backup_20170505_150741.olr 0 -> Repeat these steps on all of your RAC nodes
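To avoid forgetting a node, the profile and OLR backups can be put into one small script and run on each cluster node. A sketch based only on the commands above (run as root; the backup file naming follows the example in this section):
#!/bin/bash
# backup_gpnp_olr.sh - sketch: run on EVERY cluster node (as root) before touching GNS
export GRID_HOME=/u01/app/122/grid
NODE=$(hostname -s)
TS=$(date +%d-%b-%Y)
# backup of the GPnP profile
cp $GRID_HOME/gpnp/$NODE/profiles/peer/profile.xml \
   $GRID_HOME/gpnp/$NODE/profiles/peer/profile.xml_backup_$TS
# manual backup of the local OLR
$GRID_HOME/bin/ocrconfig -local -manualbackup
$GRID_HOME/bin/ocrconfig -local -showbackup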
Collect VIP Addresses, Device Names and GNS Details
[root@dsctw21 ~]# $GRID_HOME/bin/oifcfg getif enp0s8 192.168.5.0 global public enp0s9 192.168.2.0 global cluster_interconnect,asm Get the current GNS VIP IP: [root@dsctw21 ~]# $GRID_HOME/bin/crsctl status resource ora.gns.vip -f | grep USR_ORA_VIP GEN_USR_ORA_VIP= USR_ORA_VIP=192.168.5.60 [root@dsctw21 ~]# ifconfig enp0s8 enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.5.151 netmask 255.255.255.0 broadcast 192.168.5.255 [root@dsctw21 ~]# ifconfig enp0s9 enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.2.151 netmask 255.255.255.0 broadcast 192.168.2.255 [root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -a -l GNS is enabled. GNS is listening for DNS server requests on port 53 GNS is using port 5353 to connect to mDNS GNS status: Self-check failed. Domain served by GNS: example.com GNS version: 12.2.0.1.0 Globally unique identifier of the cluster where GNS is running: 3a9c87760b7bdf65ffea8852e7dfdae5 Name of the cluster where GNS is running: dsctw2 Cluster type: server. GNS log level: 1. GNS listening addresses: tcp://192.168.5.60:44456. GNS instance role: primary GNS is individually enabled on nodes: GNS is individually disabled on nodes: [root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns GNS is enabled. GNS VIP addresses: 192.168.5.60 Domain served by GNS: example.com This should be a subdomain as example.com is our DNS domain !
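It can be handy to capture all of this information in a single file before removing anything, so you can compare it after GNS has been recreated. A sketch using only the commands from this section (the interface names are the ones from this setup, and the output file name is my own choice):
#!/bin/bash
# collect_gns_info.sh - sketch: save the current network/GNS configuration (run as root)
export GRID_HOME=/u01/app/122/grid
OUT=/root/gns_before_recreate.txt                         # hypothetical output file
{
  $GRID_HOME/bin/oifcfg getif
  $GRID_HOME/bin/crsctl status resource ora.gns.vip -f | grep USR_ORA_VIP
  ifconfig enp0s8
  ifconfig enp0s9
  $GRID_HOME/bin/srvctl config gns -a -l
  $GRID_HOME/bin/srvctl config gns
} > $OUT 2>&1
echo "Configuration saved to $OUT"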
Stop resources and recreate GNS and nodeapps
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop scan_listener [root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop scan [root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop nodeapps -f [root@dsctw21 ~]# $GRID_HOME/bin/srvctl stop gns [root@dsctw21 ~]# $GRID_HOME/bin/srvctl remove nodeapps Please confirm that you intend to remove node-level applications on all nodes of the cluster (y/[n]) y [root@dsctw21 ~]# $GRID_HOME/bin/srvctl remove gns Remove GNS? (y/[n]) y [root@dsctw21 ~]# $GRID_HOME/bin/srvctl add gns -i 192.168.5.60 -d dsctw2.example.com [root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns GNS is enabled. GNS VIP addresses: 192.168.5.60 Domain served by GNS: dsctw2.example.com [root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list CLSNS-00005: operation timed out CLSNS-00041: failure to contact name servers 192.168.5.60:53 CLSGN-00070: Service location failed. [root@dsctw21 ~]# $GRID_HOME/bin/srvctl start gns [root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314 dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4
zU6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5
L18E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314 dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314 Oracle-GNS A 192.168.5.60 Unique Flags: 0x315 dsctw2.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 59102 Weight: 0 Priority: 0 Flags: 0x315 dsctw2.Oracle-GNS TXT CLUSTER_NAME="dsctw2", CLUSTER_GUID="3a9c87760b7bdf65ffea8852e7dfdae5", NODE_NAME="dsctw22", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0",
PROTOCOL_VERSION="0xc200000", DOMAIN="dsctw2.example.com" Flags: 0x315
Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 34148 Weight: 0 Priority: 0 Flags: 0x315
--> No VIP IPs !
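The stop/remove/add sequence above can also be scripted so it is repeatable; just replace the GNS VIP and subdomain with your own values. A sketch based solely on the srvctl commands shown in this section:
#!/bin/bash
# recreate_gns.sh - sketch: stop resources and recreate GNS (run as root)
export GRID_HOME=/u01/app/122/grid
GNS_VIP=192.168.5.60                     # GNS VIP collected earlier
GNS_SUBDOMAIN=dsctw2.example.com         # the CORRECT subdomain this time
$GRID_HOME/bin/srvctl stop scan_listener
$GRID_HOME/bin/srvctl stop scan
$GRID_HOME/bin/srvctl stop nodeapps -f
$GRID_HOME/bin/srvctl stop gns
$GRID_HOME/bin/srvctl remove nodeapps    # answer the confirmation prompt with 'y'
$GRID_HOME/bin/srvctl remove gns         # answer the confirmation prompt with 'y'
$GRID_HOME/bin/srvctl add gns -i $GNS_VIP -d $GNS_SUBDOMAIN
$GRID_HOME/bin/srvctl start gns
$GRID_HOME/bin/srvctl config gns -list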
Recreate Nodeapps
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl add nodeapps -S 192.168.5.0/255.255.255.0/enp0s8 [root@dsctw21 ~]# $GRID_HOME/bin/srvctl start nodeapps PRKO-2422 : ONS is already started on node(s): dsctw21,dsctw22 [root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314 dsctw2.example.com DNSKEY 7 3 10 ( MIIBCgKCAQEAmxQnG2xkpQMXGRXD2tBTZkUKYUsV+Sj/w6YmpFdpMQVoNVSXJCWgCDqIjLrfVA2AQUeEaAek6pfOlMp6Tev2nPVvNqPpul5Fs63cFVzwjdTI4z
U6lSC6+2UVJnAN6BTEmrOzKKt/kuxoNNI7V4DZ5Nj6UoUJ2MXGr/+RSU44GboHnrftvFaVN8pp0TOoOBTj5hHH8C73I+lFfDNhMXEY8WQhb1nP6Cv02qPMsbb8edq1Dy8lt6N6kzjh+9hKPNdqM7HB3OVV5L18
E5HtLjWOhMZLqJ7oDTDsQcMMuYmfFjbi3JvGQrdTlGHAv9f4W/vRL/KV8bDkDFnSRSFubxsbdQIDAQAB ) Unique Flags: 0x314
dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314
dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1
dsctw21-vip.dsctw2 A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip.dsctw2 A 192.168.5.237 Unique Flags: 0x1
dsctw2-scan A 192.168.5.231 Unique Flags: 0x1
dsctw2-scan1-vip A 192.168.5.231 Unique Flags: 0x1
dsctw21-vip A 192.168.5.233 Unique Flags: 0x1
dsctw22-vip A 192.168.5.237 Unique Flags: 0x1
Oracle-GNS A 192.168.5.60 Unique Flags: 0x315
dsctw2.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 59102 Weight: 0 Priority: 0 Flags: 0x315
dsctw2.Oracle-GNS TXT CLUSTER_NAME="dsctw2", CLUSTER_GUID="3a9c87760b7bdf65ffea8852e7dfdae5", NODE_NAME="dsctw22", SERVER_STATE="RUNNING", VERSION="12.2.0.0.0",
PROTOCOL_VERSION="0xc200000", DOMAIN="dsctw2.example.com" Flags: 0x315 Oracle-GNS-ZM A 192.168.5.60 Unique Flags: 0x315 dsctw2.Oracle-GNS-ZM SRV Target: Oracle-GNS-ZM Protocol: tcp Port: 34148 Weight: 0 Priority: 0 Flags: 0x315 --> GNS knows VIP IPs - Related cluster resources VIPs, GNS and SCAN Listener should be ONLINE ***** Cluster Resources: ***** Resource NAME INST TARGET STATE SERVER STATE_DETAILS --------------------------- ---- ------------ ------------ --------------- ----------------------------------------- ora.LISTENER_SCAN1.lsnr 1 ONLINE ONLINE dsctw22 STABLE ora.LISTENER_SCAN2.lsnr 1 ONLINE ONLINE dsctw21 STABLE ora.LISTENER_SCAN3.lsnr 1 ONLINE ONLINE dsctw21 STABLE ... ora.dsctw21.vip 1 ONLINE ONLINE dsctw21 STABLE ora.dsctw22.vip 1 ONLINE ONLINE dsctw22 STABLE ora.gns 1 ONLINE ONLINE dsctw22 STABLE ora.gns.vip 1 ONLINE
Verify our newly created GNS
[root@dsctw21 ~]# $GRID_HOME/bin/srvctl config gns -list dsctw2.example.com DLV 50343 10 18 ( zfiaA8U30oiGSATInCdyN7pIKf1ZIVQhHsF6OQti9bvXw7dUhNmDv/txClkHX6BjkLTBbPyWGdRjEMf+uUqYHA== ) Unique Flags: 0x314 .. dsctw2.example.com NSEC3PARAM 10 0 2 ( jvm6kO+qyv65ztXFy53Dkw== ) Unique Flags: 0x314 dsctw2-scan.dsctw2 A 192.168.5.231 Unique Flags: 0x1 dsctw2-scan.dsctw2 A 192.168.5.234 Unique Flags: 0x1 dsctw2-scan.dsctw2 A 192.168.5.235 Unique Flags: 0x1 dsctw2-scan1-vip.dsctw2 A 192.168.5.231 Unique Flags: 0x1 dsctw2-scan2-vip.dsctw2 A 192.168.5.235 Unique Flags: 0x1 dsctw2-scan3-vip.dsctw2 A 192.168.5.234 Unique Flags: 0x1 dsctw21-vip.dsctw2 A 192.168.5.233 Unique Flags: 0x1 dsctw22-vip.dsctw2 A 192.168.5.237 Unique Flags: 0x1 [root@dsctw21 ~]# nslookup dsctw2-scan.dsctw2.example.com Server: 192.168.5.50 Address: 192.168.5.50#53 Non-authoritative answer: Name: dsctw2-scan.dsctw2.example.com Address: 192.168.5.235 Name: dsctw2-scan.dsctw2.example.com Address: 192.168.5.234 Name: dsctw2-scan.dsctw2.example.com Address: 192.168.5.231 --> VIPS, SCAN and SCAN VIPS should be ONLINE
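A quick way to confirm that the GNS delegation really works from outside the cluster is to resolve the SCAN and VIP names through the DNS server. A minimal sketch, looping over the names registered above:
#!/bin/bash
# verify_gns.sh - sketch: resolve the GNS-managed names via DNS
for name in dsctw2-scan.dsctw2.example.com \
            dsctw21-vip.dsctw2.example.com \
            dsctw22-vip.dsctw2.example.com; do
    echo "=== $name ==="
    nslookup $name            # should return the addresses registered in GNS
done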
Congrats, you have successfully reconfigured GNS on 12.2.0.1 !