CRS-2317: Fatal error: cannot get local GPnP security keys (wallet) messages reported in instance alert log

Hi,

After installing an Oracle 18c RAC database, I got the following error messages in the alert log file. The database was using the public IP for the cluster interconnect instead of HAIP.

Alert log file content

[USER(4431)]CRS-2317: Fatal error: cannot get local GPnP security keys (wallet).
[USER(4431)]CRS-2316:Fatal error: cannot initialize GPnP, CLSGPNP_ERR (Generic GPnP error).
kggpnpInit: failed to init gpnp
WARNING: No cluster interconnect has been specified. Depending on
the communication driver configured Oracle cluster traffic
may be directed to the public interface of this machine.
Oracle recommends that RAC clustered databases be configured
with a private interconnect for enhanced security and
performance.
If you check the gv$cluster_interconnects view, it returns no rows.

If you are not using Clusterware (single instance), you can ignore these messages. But if you are using RAC, as in my case, the workaround is to set the CLUSTER_INTERCONNECTS initialization parameter for each instance (the parameter is static, so it must go to the spfile):

alter system set cluster_interconnects='10.1.241.1:10.1.231.1' scope=spfile sid='orcl1';

alter system set cluster_interconnects='10.1.241.2:10.1.231.2' scope=spfile sid='orcl2';

Restart the database and recheck the instance alert logs.

Also check gv$cluster_interconnects

select * from gv$cluster_interconnects;

INST_ID | NAME | IP_ADDRESS | IS_PUBLIC | SOURCE
--------|------|------------|-----------|-------------------------------
      2 | eno2 | 10.1.231.2 | NO        | cluster_interconnects parameter
      2 | eno3 | 10.1.241.2 | NO        | cluster_interconnects parameter
      1 | eno2 | 10.1.231.1 | NO        | cluster_interconnects parameter
      1 | eno3 | 10.1.241.1 | NO        | cluster_interconnects parameter
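As a quick sanity check on the query output above, a small awk sketch can confirm that every listed interface was indeed taken from the init parameter. The sample rows are copied from the table above; the temp file path is only illustrative:

```shell
# Sample rows copied from the gv$cluster_interconnects output above
# (pipe-delimited); the temp file path is illustrative.
cat > /tmp/interconnects.txt <<'EOF'
2|eno2|10.1.231.2|NO|cluster_interconnects parameter
2|eno3|10.1.241.2|NO|cluster_interconnects parameter
1|eno2|10.1.231.1|NO|cluster_interconnects parameter
1|eno3|10.1.241.1|NO|cluster_interconnects parameter
EOF

# Flag any row whose SOURCE column is not the init parameter
awk -F'|' '$5 != "cluster_interconnects parameter" { bad = 1 }
           END { print (bad ? "unexpected source found" : "all interfaces set by parameter") }' /tmp/interconnects.txt
```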

For example, if you are running two instances of Oracle for two databases on the same machine, then you can load balance the interconnect traffic to different physical interconnects. This does not reduce Oracle availability.

CLUSTER_INTERCONNECTS can be used in Oracle Real Application Clusters environments to indicate the cluster interconnects available for database traffic. Use this parameter if you need to override the default interconnect configured for database traffic, which is stored in the cluster registry. It may also be useful for data warehouse systems that have reduced availability requirements and high interconnect bandwidth demands.
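To illustrate the load-balancing scenario above: two databases sharing one machine could each be pinned to a different physical interconnect. The SIDs and addresses below are hypothetical:

```sql
-- Hypothetical example: database A uses one private network,
-- database B the other. CLUSTER_INTERCONNECTS is static, so
-- scope=spfile and an instance restart are required.
alter system set cluster_interconnects='10.1.231.1' scope=spfile sid='dbA1';
alter system set cluster_interconnects='10.1.241.1' scope=spfile sid='dbB1';
```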

Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1

Hi,

Today I faced the errors below in my alert log file.

Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1
Mon Apr 08 20:00:51 2019
Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1
Mon Apr 08 20:00:51 2019
Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1
Mon Apr 08 20:00:51 2019
Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1
Mon Apr 08 20:00:51 2019
Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1
Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1
Mon Apr 08 20:00:51 2019
Direct NFS: Failed to set socket buffer size.wtmax=[1056768] rtmax=[1056768], errno=-1

These errors were written to my alert log file during a backup operation to an NFS-mounted backup store. The root cause is that tcp_max_buf is set too small; the errors can be fixed by increasing its value.

To display its current value, please run:

# /usr/sbin/ndd /dev/tcp tcp_max_buf
1048576

Then increase it as follows:

# /usr/sbin/ndd -set /dev/tcp tcp_max_buf 1056768
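The relationship between the two values can be sketched with a quick shell check. The numbers are taken from the alert log and the ndd output above; on a live Solaris system you would read the current value with ndd instead of hard-coding it:

```shell
# Values from above; on a real host: current=$(/usr/sbin/ndd /dev/tcp tcp_max_buf)
current=1048576   # tcp_max_buf before the fix
required=1056768  # wtmax/rtmax requested by Direct NFS in the alert log

if [ "$current" -lt "$required" ]; then
  echo "tcp_max_buf too small: raise it to at least $required"
else
  echo "tcp_max_buf is large enough"
fi
```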

Changing the Character Set for a RAC Database Fails with an ORA-12720

Hi,

Changing the character set for a RAC database fails with the ORA-12720 error (operation requires database is in EXCLUSIVE mode).

ORA-02374: conversion error loading table “RTLSDPROD”.”ZIF_CRANE_MSG_LOG”
ORA-12899: value too large for column MCHN_ID (actual: 11, maximum: 10)
ORA-02372: data for row: MCHN_ID : 0X’52880000603400006034′

The CLUSTER_DATABASE parameter must be temporarily changed in order to alter the database character set. This requires a shutdown of the entire database (all instances).

Let’s do it step by step

Step 1: Shut down all instances.

Step 2: On Node1, edit the initialization parameter CLUSTER_DATABASE and set it to FALSE

CLUSTER_DATABASE = FALSE

Step 3: Startup mount the instance in exclusive mode:

SQLPLUS> startup mount exclusive

Step 4: Issue the following commands:

alter system enable restricted session;
alter system set job_queue_processes = 0;
alter system set aq_tm_processes = 0;
alter database open;

Step 5: Change the character set.
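Step 5 is the actual change. As a hedged illustration only (the target character set below is hypothetical, not necessarily yours):

```sql
-- Hypothetical target character set. This direct statement is only valid
-- when the new set is a strict superset of the current one; validate the
-- data first (CSSCAN, or the Database Migration Assistant for Unicode).
ALTER DATABASE CHARACTER SET AL32UTF8;
```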

Step 6: Shut down the instance so that cluster (parallel server) mode can be re-enabled.

SQLPLUS> shutdown immediate

Step 7: On Node1, edit the init.ora parameter CLUSTER_DATABASE  and set it to TRUE.

CLUSTER_DATABASE = TRUE

Step 8: Start up all the instances accessing the database.

11.2.0.3 RAC binary fails with [INS-35354] on 18c Grid Infrastructure

Hi,

We have installed a two-node 18c Grid Infrastructure and database software, and we also need to run an 11.2.0.3 database on this 18c Grid Infrastructure. While installing the 11.2.0.3 RAC binaries, runInstaller fails with the error below.

[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.

This error occurs due to a missing node list in inventory.xml in the central inventory.

Even though the CRS flag was set to TRUE for the GI home, the node details were missing:

<HOME NAME="OraGI12Home1" LOC="$GRID_HOME" TYPE="O" IDX="1" CRS="true"/>
As a workaround, run the command below as the grid OS user:

cd $GRID_HOME/oui/bin/

./runInstaller -silent -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={<nodename1>,<nodename2>}" CRS=TRUE LOCAL_NODE=<nodename1>

After running the above command, rerun the 11.2.0.3 runInstaller.
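After the -updateNodeList run, the GI home entry in inventory.xml should carry a NODE_LIST. A small sketch of the check follows; the XML is a hypothetical sample of a healthy entry, and a real check would grep the central inventory's ContentsXML/inventory.xml (location recorded in /etc/oraInst.loc on Linux) instead:

```shell
# Hypothetical sample of a healthy inventory.xml HOME entry.
cat > /tmp/inventory_sample.xml <<'EOF'
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
  <NODE_LIST>
    <NODE NAME="nodename1"/>
    <NODE NAME="nodename2"/>
  </NODE_LIST>
</HOME>
EOF

if grep -q '<NODE_LIST>' /tmp/inventory_sample.xml; then
  echo "node list present"
else
  echo "node list missing - rerun runInstaller with -updateNodeList"
fi
```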

You can also check Metalink Doc ID 2282456.1 for this issue.

Registration 11.2 DB on 18c Grid Infrastructure fails with CRS-0245

Hi,

When you try to create an 11.2 database instance on 18c Grid Infrastructure using DBCA or the "srvctl add database" command, instance registration fails with CRS-0245 as shown below.

PRCR-1006 : Failed to add resource ora.orc01.db for orc01
PRCD-1184 : Failed to upgrade configuration of database type to version 11.2.0.3.0
PRCR-1071 : Failed to register or update resource type ora.database.type
CRS-0245: User doesn't have enough privilege to perform the operation

This is due to Bug 13460353. As a solution, apply patch 13460353 to the 11.2.0.3 database home; this fix is needed in the 11.2 database home in order to work with 12c and later GI.

Alternatively, the following commands may be used to work around the problem (run them as the root user):

/u01/app/18.0.0/grid/bin/crsctl modify type ora.database.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=3.2" "-unsupported"
/u01/app/18.0.0/grid/bin/crsctl modify type ora.service.type -attr "ATTRIBUTE=TYPE_VERSION,DEFAULT_VALUE=2.2" "-unsupported"

After running the above commands, you can run DBCA or srvctl successfully.