Best Practice: When to Use Oracle Active Data Guard and Oracle GoldenGate?

Hello,

In this blog post, we will discuss Oracle Active Data Guard and Oracle GoldenGate. While both are capable of maintaining a synchronized copy of an Oracle database, each has unique characteristics, so a high availability architecture can use one technology or the other, or both at the same time, depending upon requirements.
When to Use Oracle Active Data Guard?
You can use Oracle Active Data Guard when the emphasis is on simplicity, data protection, and availability.

1- Simplest, fastest, one-way replication of a complete Oracle database.

2- No restrictions: Data Guard Redo Apply supports all data and storage types and Oracle features; transparent replication of DDL

3- Features optimized for data protection: Detects silent corruptions that can occur on source or target; automatically repairs corrupt blocks

4- Synchronized standby open read-only provides simple read-only offloading for maximum ROI

5- Transparency of backups: A Data Guard primary and standby are physically exact copies of each other; RMAN backups are completely interchangeable

6- Zero data loss protection at any distance, without impacting database performance

7- Minimizing planned downtime and risk using standby first patching, database rolling upgrades, and select platform migrations

8- Reduce risk of introducing change by dual purposing a DR system for testing using Data Guard Snapshot Standby

9- Integrated automatic database and client failover

10- Integrated management of a complete configuration: the Data Guard Broker command-line interface or Oracle Enterprise Manager Cloud Control.
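As a small illustration of the monitoring side of these points, here are a few queries one might run on an Active Data Guard standby to confirm its role and lag; a minimal sketch using standard v$ views, with output that depends entirely on your configuration:

```sql
-- Confirm the database role and that the standby is open read-only
SELECT database_role, open_mode FROM v$database;

-- Check transport and apply lag as reported by redo apply
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```

If both lags are near zero, the standby is keeping up with the primary and read-only offloading is returning current data.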

When to Use Oracle GoldenGate?

You can use Oracle GoldenGate when the emphasis is on advanced replication requirements not addressed by Oracle Active Data Guard.

1- Any requirement where the replica database must be open read/write while synchronizing with the primary database

2- Any data replication requirements such as multimaster and bidirectional replication, subset replication, many-to-one replication, and data transformations.

3- When data replication is required between platforms with different endian formats, or across database major versions.

4- Maintenance and migrations where zero downtime or near zero downtime is required. Oracle GoldenGate can be used to migrate between application versions without downtime.

5- Database rolling upgrades where it is desired to replicate from the new version back down to the old version, allowing fast fallback if something goes wrong with the upgrade.

6- Zero downtime planned maintenance where bidirectional replication is used to gradually migrate users to the new version, creating the perception of zero downtime. Note that bidirectional replication requires avoiding or resolving update conflicts that can occur on disparate databases.
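To make the one-way replication case concrete, a classic (non-microservices) GoldenGate configuration pairs an Extract on the source with a Replicat on the target. The following parameter-file sketch is purely illustrative; the process names, credential aliases, trail path, and schema are hypothetical placeholders, not from any real configuration:

```text
-- Extract parameter file (ext1.prm) on the source database
EXTRACT ext1
USERIDALIAS ggadmin_src
EXTTRAIL ./dirdat/lt
TABLE hr.*;

-- Replicat parameter file (rep1.prm) on the target database
REPLICAT rep1
USERIDALIAS ggadmin_tgt
MAP hr.*, TARGET hr.*;
```

Bidirectional replication adds a mirror-image Extract/Replicat pair in the other direction, plus the conflict avoidance or resolution noted above.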

When to Use Oracle Active Data Guard and Oracle GoldenGate Together?

Oracle Active Data Guard and Oracle GoldenGate are not mutually exclusive. The following are use cases of high availability architectures that include the simultaneous use of Oracle Active Data Guard and Oracle GoldenGate.

1- An Oracle Active Data Guard standby is utilized for disaster protection and database rolling upgrades for a mission critical OLTP database. At the same time, Oracle GoldenGate is used to replicate data from the Data Guard primary database (or from the standby database using Oracle GoldenGate ALO mode) for ETL update of an enterprise data warehouse.

2- Oracle GoldenGate subset replication is used to create an operational data store (ODS) that extracts, transforms, and aggregates data from numerous data sources. The ODS supports mission critical application systems that generate significant revenue for the company. An Oracle Active Data Guard standby database is used to protect the ODS, providing optimal data protection and availability.

3- Oracle GoldenGate bidirectional replication is utilized to synchronize two databases separated by thousands of miles. User workload is distributed across each database based upon geography, workload, and service level using Oracle 12c Global Data Services (GDS). Each Oracle GoldenGate copy has its own local synchronous Data Guard standby database that enables zero data loss failover if an outage occurs. Oracle GoldenGate capture and apply processes are easily restarted on the new primary database following a failover because the primary and standby are an exact, up-to-date replica of each other.

4- An Oracle Active Data Guard standby database used for disaster protection is temporarily converted into an Oracle GoldenGate target for the purpose of performing planned maintenance not supported by Data Guard. For example, a Siebel application upgrade that requires modification of back-end database objects and comprehensive testing before switching users over to the new system.

5- Oracle Active Data Guard is used to protect a production environment when a major database version upgrade is required offering zero or near-zero downtime (for example, Oracle 11.2.0.3 to 12c.) A second primary/standby environment is created using the new database version, and Oracle GoldenGate is used to replicate data from the production environment to the copy with one-way or bidirectional replication. When Oracle GoldenGate has completed synchronizing the old and new environments, production is switched to the new environment and the old environment is decommissioned. This provides zero or minimal downtime depending upon configuration, eliminates risk by providing complete isolation between the old and new environment, and avoids any impact to data protection and availability SLAs if problems are encountered during the upgrade process.


General Restrictions for Using Earlier Oracle Database Releases with Oracle Grid Infrastructure 19c

Hello,
You can use Oracle Database 19c, 18c, Oracle Database 12c releases 1 and 2, and Oracle Database 11g release 2 (11.2.0.3 or later) with Oracle Grid Infrastructure 19c. Do not use the versions of srvctl, lsnrctl, or other Oracle Grid Infrastructure home tools to administer earlier-version databases. Administer earlier Oracle Database releases only with the tools in the earlier Oracle Database homes. To ensure that you are using the correct tools for those earlier-release databases, run the tools from the Oracle home of the database being administered.
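For instance, to administer an 11.2 database with its own home's tools, one might do something like the following; the home path and database name are hypothetical:

```shell
# Point the environment at the 11.2 database home, not the 19c Grid home
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

# This srvctl now matches the 11.2 database version
srvctl status database -d my_112_db
```

Resetting ORACLE_HOME and PATH per home is the simplest way to guarantee tool and database versions match.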

Oracle Database homes can only be stored on Oracle ASM Cluster File System (Oracle ACFS) if the database version is Oracle Database 11g release 2 (11.2.0.3) or later. When installing 11.2 databases on an Oracle Flex ASM cluster, the Oracle ASM cardinality must be set to All.

To configure an earlier release Oracle Database on Oracle ACFS:
– Install Oracle Grid Infrastructure 19c
– Start Oracle ASM Configuration Assistant (ASMCA) as the grid installation owner

./asmca

– Install Oracle Database 11g release 2 (11.2) software-only on the Oracle ACFS file system you configured.

– From the 11.2 Oracle Database home, run Oracle Database Configuration Assistant (DBCA) and create the Oracle RAC Database, using Oracle ASM as storage for the database data files.

./dbca

– Modify the Oracle ACFS path dependency:

srvctl modify database -d my_112_db -j Oracle_ACFS_path

To use Oracle ASM with Oracle Database releases earlier than Oracle Database 12c Release 2 (12.2), you must pin all the cluster nodes. After you install Oracle Grid Infrastructure 18c or a later release, pin the nodes with the following command:

crsctl pin css -n node1 node2

This setting updates the oratab file for Oracle ASM entries.
You can check the pinned nodes using the following command:

./olsnodes -t -n

You must use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install earlier Oracle databases and Oracle RAC databases on Oracle Grid Infrastructure installations. If you need to administer Oracle Database 19c local and scan listeners using the lsnrctl command, set your $ORACLE_HOME environment variable to the path for the Oracle Grid Infrastructure home (Grid home).

As mentioned in Oracle Doc ID 337737.1, the following table shows you the compatibility between various versions:

Clusterware ASM DB Certified
21c 21c 21c Y
21c 21c 19c Y
21c 21c 18c Y
21c 21c 12.2 Y
19c 19c 19c Y
19c 19c 18c Y
19c 19c 12.2 Y
19c 19c 12.1 Y
19c 19c 11.2(a) Y
18c 18c 18c Y
18c 18c 12.2 Y
18c 18c 12.1 Y
18c 18c 11.2(a) Y
12.2 12.2 12.2 Y
12.2 12.2 12.1 Y
12.2 12.2 11.2(a) Y
12.1 12.1 12.1 Y
12.1 12.1 11.2(a) Y
12.1 12.1 11.1(a) Y
12.1 12.1 10.2(a) Y
11.2 11.2(b) 11.2 Y
11.2 11.2(b) 11.1 Y
11.2 11.2(b) 10.2 Y
11.1 11.1 11.1(c) Y
11.1 11.1 10.2 Y
11.1 11.1 10.1 Y
11.1 10.2 11.1 Y
11.1 10.2 10.2 Y
11.1 10.2 10.1 Y
11.1 10.1(d) 11.1 Y
11.1 10.1(d) 10.2 Y
11.1 10.1 10.1 Y
10.2 10.2 10.2 Y
10.2 10.2 10.1 Y
10.2 10.1(d) 10.2 Y
10.2 10.1 10.1 Y
10.1 10.1 10.1 Y

(a) Pre-12.1 database instances require an ASM instance resident on the same node as the database instance. Pre-12.1 database instances cannot leverage the implicit HA of Flex ASM.
(b) The matrix is valid after the (rolling) upgrade has been completed. During the upgrade, you may use an older ASM version.
(c) Linux specific: see My Oracle Note 781628.1
(d) The ASM version needs to be at least 10.1.0.3

Before 18c, the Oracle Grid Infrastructure (GI)/Clusterware (CRS) version must always be equal to or higher than the database version, compared down to the 4th digit.

Starting with 18c, the Oracle Grid Infrastructure (GI)/Clusterware (CRS) version must be equal to or higher than the database version compared on the first digit only. For example, Grid Infrastructure can be at 18.1.0.0 while the database is at 18.3.0.0.
Oracle Grid Infrastructure/Clusterware must be installed into its own ORACLE_HOME (also referenced as the CRS_HOME).
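To compare the installed versions on a running cluster, one can query the Clusterware directly; a sketch using standard crsctl subcommands run from the Grid home:

```shell
# Clusterware software version installed on this node
crsctl query crs softwareversion

# Active (lowest common) Clusterware version across the cluster
crsctl query crs activeversion
```

Compare the active version against the database version reported by each Oracle home to confirm the combination is supported.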

You can have multiple ORACLE_HOMEs for the database releases.

Starting with Oracle Database 10g Release 2, ASM and the database can be installed in separate homes. Since Oracle Database 11g Release 2, ASM is part of the Grid Infrastructure home.

When mixing software versions, ASM functionality reverts to that of the earliest release in use. For example, a 10.1.0.3 RDBMS instance working with a 10.2 ASM instance does not take advantage of the new 10.2 ASM features; conversely, a 10.2 RDBMS instance working with a 10.1 ASM instance cannot use any of the new 10.2 ASM features either.
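Related to this, each ASM disk group carries compatibility attributes that cap which features its clients can use; you can inspect them with a query along these lines (the values shown will depend on your disk groups):

```sql
-- COMPATIBILITY limits which ASM software features the disk group may use;
-- DATABASE_COMPATIBILITY is the minimum database version that can mount it
SELECT name, compatibility, database_compatibility
FROM   v$asm_diskgroup;
```

Raising these attributes (via ALTER DISKGROUP ... SET ATTRIBUTE) is one-way, so check the oldest client version before changing them.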

You can use different users for Oracle Grid Infrastructure/Clusterware and database homes as long as they belong to the same primary group.

Database users do not need access to the OCR and voting disk.

How to Implement an Oracle Linux 7 Cluster with Pacemaker and Corosync Using STONITH Block Devices in Oracle VM?

Hello,
Corosync/Pacemaker is a supported high availability cluster solution for Oracle Linux 7.3 or newer versions. The key requirement for a Corosync/Pacemaker cluster is a reliable fencing agent, to prevent data corruption. The recommended solution is a generic fencing agent; two agents can be used: fence_scsi and fence_sbd.

In this blog post we will describe how to use fence_sbd.

STONITH Block Devices (SBD) fencing uses a shared disk as the fence device. This disk must be used exclusively as the SBD device; no filesystem should be created on it.

To enable SBD fencing:

1- Install SBD and watchdog (all nodes):

yum install -y watchdog sbd

2- Configure softdog as the watchdog device and load it automatically at boot time (all nodes):

echo softdog > /etc/modules-load.d/softdog.conf
/sbin/modprobe softdog

3- Change the SBD configuration SBD_DEVICE to point to the shared disk (all nodes):

vi /etc/sysconfig/sbd

Change:

SBD_DEVICE="/dev/xvdb"   # /dev/xvdb is the shared disk
SBD_OPTS="-n node1"      # required if the cluster node name differs from the hostname

4- Create the SBD device (just in one node)

pcs stonith sbd device setup --device=/dev/xvdb

5- Enable the SBD service (all nodes):

systemctl enable --now sbd

6- SBD is now configured. Create the Pacemaker STONITH fence resource (just one node):

pcs stonith create sbd_fencing fence_sbd devices=/dev/xvdb

To test whether SBD fencing is working, fence a node:

pcs stonith fence node2

node2 should be rebooted.
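Once the node comes back, you can verify the fencing setup from either node; a sketch, assuming the same shared disk path as above:

```shell
# List the node slots recorded on the SBD device
sbd -d /dev/xvdb list

# Confirm the stonith resource and cluster state
pcs status --full
```

Each cluster node should appear with a slot on the device, and the sbd_fencing resource should be reported as Started.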

Oracle Database 23c: 10 New Magic Features

Hello,
Oracle announced 23c, the new long-term support release, in beta. In this blog post, we will discuss some of the magic new features in Oracle Database 23c.

1- The DUAL table is not needed anymore.

select 'Talip Hakan';
select 9+8;
select sysdate;

2- Using the new IF EXISTS and IF NOT EXISTS clauses in DDL statements.

drop table if exists DBMASTER_TEST;
create table if not exists DBMASTER_TEST (id number, name varchar2(10));

3- Using the column alias/position number in GROUP BY and HAVING.

select count(*) as amount, upper(d.dname) as department_name
from dept d join emp e on d.deptno = e.deptno
group by upper(d.dname)
having count(*) > 3;

Now, you can write the above query as below.

select count(*) as amount, upper(d.dname) as department_name
from dept d join emp e on d.deptno = e.deptno
group by department_name
having amount > 3;

From Oracle 23c, we can also use the column position in the GROUP BY clause, after enabling it:

alter session set group_by_position_enabled=true;

select count(*) as amount, upper(d.dname) as department_name
from dept d join emp e on d.deptno = e.deptno
group by 2
having amount > 3;

4- You can grant schema-level privileges.

grant select any table on schema HR to hakan;

5- You can specify multiple rows of data in a single DML or SELECT statement using the new table value constructor.

select *
from (values
  (10, 'test', 'test1'),
  (20, 'test', 'test2'),
  (30, 'test', 'test3')
) a (id, txt, descr);  -- DESC is a reserved word, so the column is named descr

Using With Clause;

with a (id, txt, descr) as (
  values (10, 'test', 'test1'),
         (20, 'test', 'test2'),
         (30, 'test', 'test3')
)
select * from a;

Using MERGE Statement;

merge into DBMTEST a
using (values
  (10, 'test', 'test1'),
  (20, 'test', 'test2'),
  (30, 'test', 'test3')
) b (id, txt, descr)
on (a.id = b.id)
when matched then
  update set a.txt   = b.txt,
             a.descr = b.descr
when not matched then
  insert (a.id, a.txt, a.descr)
  values (b.id, b.txt, b.descr);

Using INSERT Statement;

insert into DBMTEST
values (10, 'test', 'test1'),
       (20, 'test', 'test2'),
       (30, 'test', 'test3');
commit;

6- Grant a user the privileges necessary for database development.

grant db_developer_role to talip;
revoke db_developer_role from talip;

begin
  dbms_developer_admin.grant_privs('talip');
end;
/

Note that db_developer_role is created by $ORACLE_HOME/rdbms/admin/catdevrol.sql.

7- Oracle 23c introduces the MAX_COLUMNS initialization parameter, which allows up to 4096 columns per table (requires MAX_COLUMNS to be set to EXTENDED and the compatibility setting to be at 23.0.0).

show parameter max_columns
alter system set max_columns=EXTENDED scope=spfile;  -- takes effect after an instance restart

When this parameter is set to STANDARD, the maximum number of columns allowed in a database table or view is 1000. When this parameter is set to EXTENDED, the maximum number of columns allowed in a database table or view is 4096. You can change the value of MAX_COLUMNS from STANDARD to EXTENDED at any time. However, you can change the value of MAX_COLUMNS from EXTENDED to STANDARD only when all tables and views in the database have 1000 or fewer columns.

8- Oracle Database 23c introduces a boolean data type for SQL. The data type can be written as bool or boolean.

create table dbmaster_test (val varchar2(10), flag boolean);
insert into dbmaster_test values ('true' , true  );
insert into dbmaster_test values ('false', false );
insert into dbmaster_test values ('null' , null  );
insert into dbmaster_test values ('t'    , 't'   );
insert into dbmaster_test values ('yes'  , 'yes' );
insert into dbmaster_test values ('true' , 'true');
insert into dbmaster_test values ('f'    , 'f'   );
insert into dbmaster_test values ('0'    , 0     );

select val from dbmaster_test where flag;

or

select val from dbmaster_test where flag is true;
select val from dbmaster_test where not flag;

9-Using direct joins to tables to drive UPDATE and DELETE statements.

update dbmaster_test1 a
set    a.txt   = b.txt,
       a.descr = b.descr   -- descr rather than the reserved word desc
from   dbmaster_test2 b
where  a.id = b.id
and    b.id <= 5;

delete dbmaster_test1 a
from dbmaster_test2 b
where a.id = b.id
and b.id <= 5;

10- The free developer release of Oracle 23c can be installed on Oracle Linux using an RPM package. We will discuss it in another blog post.

How to Determine When an Index Should Be Rebuilt?

Hello,
The reason to rebuild an index should be poor performance of the queries that use it. In this blog post we will discuss how to decide whether an index rebuild is required, or whether an index coalesce is the better option. There have been many discussions about whether rebuilding indexes is useful at all. Generally speaking, the need to rebuild a b-tree index is very rare, because a b-tree index is largely self-managing and self-balancing.

The most common justifications given for rebuilding an index are:
– index becomes fragmented
– index grows and grows – deleted space is not re-used
– index clustering factor becomes out of sync

In fact most indexes remain both balanced and fragmentation-free, because free leaf entries are reused. Inserts, updates, and deletes leave free slots scattered around the index blocks, but these are typically refilled. The clustering factor reflects how well sorted the table data is with respect to the given index key. Rebuilding an index never improves the clustering factor; that would require a table reorganization.
Most scripts in circulation depend on the INDEX_STATS dynamic table, which is populated by the command:

analyze index index_owner.index_name validate structure;

While this is a valid method to inspect the index, it takes an exclusive table lock while analyzing the index. For large indexes this can be very disruptive, because DML operations on the table are not permitted during that time. The ONLINE variant avoids the locking considerations but may consume additional time.
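A sketch of both variants, using a hypothetical index name; note that in our experience the ONLINE form may not populate INDEX_STATS on all releases, so verify the behavior on your version before relying on it:

```sql
-- Offline: takes an exclusive table lock, populates INDEX_STATS
analyze index SCOTT.TEST_INDX validate structure;

-- Online: avoids blocking DML on the underlying table
analyze index SCOTT.TEST_INDX validate structure online;
```

Because INDEX_STATS holds only the results of the most recent ANALYZE in the current session, query it immediately after the command completes.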

Redo activity may increase and general performance might be impacted as a direct result of rebuilding an index. Inserts, updates, and deletes cause a freshly rebuilt index to split and grow as it evolves back toward its steady state. As a result, redo activity increases and the index splits are more likely to impact performance directly, since more I/O and CPU are consumed to serve the index restructuring. After a certain period of time the index may again exhibit 'issues' and be re-flagged for a rebuild, and the vicious cycle continues. An index coalesce is often preferred to an index rebuild, for the following reasons:
– does not require approximately 2 times the disk storage
– always online
- does not restructure the index, but combines index leaf blocks as much as possible, avoiding the redo and split overhead described above.

To decide whether an index is a rebuild candidate:

1- Find indexes with height (BLEVEL+1) greater than 4, that is, BLEVEL > 3:

SQL> select owner, index_name, table_name, blevel from dba_indexes where blevel > 3;

2- Analyze the suspect index with the VALIDATE STRUCTURE option (note the locking caveat above):

SQL> analyze index TEST_INDX validate structure;

Index analyzed.

3- Check the ratio of deleted leaf rows to leaf rows; if (DEL_LF_ROWS/LF_ROWS)*100 is greater than 20, the index is a candidate:

SQL> SELECT name, height, lf_rows, del_lf_rows, (del_lf_rows/lf_rows)*100 as ratio FROM INDEX_STATS;

4- If the ratio is high, rebuild the index:

SQL> alter index TEST_INDX rebuild;
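Alternatively, given the advantages listed earlier, a coalesce is often the gentler choice; a sketch using the same hypothetical index:

```sql
-- Combines adjacent leaf blocks in place; always online and
-- does not need roughly 2x the disk space of a rebuild
alter index TEST_INDX coalesce;

-- If a full restructure really is required, prefer an online rebuild
alter index TEST_INDX rebuild online;
```

Monitor the query performance that triggered the investigation afterwards; if it does not improve, the index structure was probably not the problem.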