12c database – Oracle DBA – Tips and Techniques

Oracle 12c New Feature – Privilege Analysis


In many databases we find that, over the course of time, certain users – particularly application owner schemas and developer accounts – have been granted excessive privileges: more than they need to do their job as developers, or more than the application requires to run normally.

Excessive privileges violate the basic security principle of least privilege.

In Oracle 12c we now have a package called DBMS_PRIVILEGE_CAPTURE through which we can identify unnecessary object and system privileges which have been granted, and revoke privileges which have been granted but never actually used.

The privilege analysis can be done at the level of the entire database, for a particular role, or context-specific – for example limited to a particular user in the database.
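
For example, a database-wide policy needs just a name and the G_DATABASE capture type – a minimal sketch (the policy name used here is purely illustrative):

SQL> REM 'DB_WIDE_CAPTURE' is just an illustrative policy name
SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(-
> name => 'DB_WIDE_CAPTURE',-
> type => dbms_privilege_capture.g_database);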

These are the main steps involved:

1) Create the Database, Role or Context privilege analysis policy via DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE
2) Start the analysis of used privileges via DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE
3) Stop the analysis when required via DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE
4) Generate the report via DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
5) Examine the views like DBA_USED_SYSPRIVS, DBA_USED_OBJPRIVS, DBA_USED_PRIVS, DBA_UNUSED_PRIVS etc.

In the example below we do role- and context-based analysis – capturing the privileges exercised through the role 'DBA' by the user 'SH'.

SQL> alter session set container=sales;

Session altered.

SQL>  grant dba to sh;

Grant succeeded.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(-
> name => 'AUDIT_DBA_SH',-
>  type => dbms_privilege_capture.g_role_and_context,-
> roles => role_name_list ('DBA'),-
> condition => 'SYS_CONTEXT (''USERENV'',''SESSION_USER'')=''SH''');

PL/SQL procedure successfully completed.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(-
> name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.

SQL> conn sh/sh@sales
Connected.


SQL> alter user hr identified by hr;

User altered.

SQL> create table myobjects as select * from all_objects;
create table myobjects as select * from all_objects
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object


SQL> drop table myobjects;

Table dropped.

SQL> alter tablespace users offline;

Tablespace altered.

SQL> alter tablespace users online;

Tablespace altered.



SQL> conn / as sysdba
Connected.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(-
>  name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.

SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(-
>  name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.


SQL> select name,type,enabled,roles,context
  2  from dba_priv_captures;

NAME           TYPE             E ROLES           CONTEXT
-------------- ---------------- - --------------- ------------------------------------------------------------
AUDIT_DBA_SH   ROLE_AND_CONTEXT N ROLE_ID_LIST(4) SYS_CONTEXT ('USERENV','SESSION_USER')='SH'


SQL> select username,sys_priv from dba_used_sysprivs;


USERNAME             SYS_PRIV
-------------------- ----------------------------------------
SH                   CREATE SESSION
SH                   ALTER USER
SH                   CREATE TABLE
SH                   ALTER TABLESPACE
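
With the results generated we can also report the privileges that were captured but never exercised, and finally drop the policy once we are done with it – a short sketch; the exact rows returned will depend on what the user actually did during the capture window:

SQL> REM system privileges granted but not used during the capture
SQL> select username, sys_priv from dba_unused_sysprivs;

SQL> REM remove the (already disabled) capture policy
SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE(-
> name => 'AUDIT_DBA_SH');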

Upgrade Grid Infrastructure 11g (11.2.0.3) to 12c (12.1.0.2)


I recently tested the upgrade to RAC Grid Infrastructure 12.1.0.2 in my test RAC environment running Linux 6.5 x86-64 on Oracle VirtualBox.

The upgrade went very smoothly but we have to take a few things into account – some things have changed in 12.1.0.2 as compared to Grid Infrastructure 12.1.0.1.

The most notable change regards the Grid Infrastructure Management Repository (GIMR).

In 12.1.0.1 we had the option of installing the GIMR database – MGMTDB. In 12.1.0.2 it is mandatory, and the MGMTDB database is automatically created as part of the upgrade or initial installation of 12.1.0.2 Grid Infrastructure.

The GIMR primarily stores historical Cluster Health Monitor metric data. It runs as a container database on a single node of the RAC cluster.

The problem I found is that the datafiles for the MGMTDB database are created in the same ASM disk group that holds the OCR and voting disk, and there is a prerequisite of at least 4 GB of free space in that ASM disk group – otherwise error INS-43100 is returned, as shown in the figure below.

I had to cancel the upgrade process and add another disk to the +OCR ASM disk group to ensure that at least 4 GB of free space was available; after that the upgrade process went through very smoothly.

On both the nodes of the RAC cluster we will create the directory structure for the 12.1.0.2 Grid Infrastructure environment as this is an out-of-place upgrade.

It is also very important to check the health of the RAC cluster before the upgrade (via the crsctl check cluster -all command) and to run the runcluvfy.sh script to verify that all the prerequisites for the 12c GI upgrade are in place.

[oracle@rac1 bin]$ crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

[oracle@rac1 bin]$ crsctl query crs softwareversion rac2
Oracle Clusterware version on node [rac2] is [11.2.0.3.0]

[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u02/app/12.1.0/grid -dest_version 12.1.0.2.0

[oracle@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@rac1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@rac1 ~]$ ps -ef |grep pmon
oracle 1278 1 0 14:53 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 16354 1 0 14:22 ? 00:00:00 asm_pmon_+ASM1
oracle 17217 1 0 14:23 ? 00:00:00 ora_pmon_orcl1

[root@rac1 bin]# ./oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.873212089

[root@rac1 bin]# ./srvctl status mgmtdb -verbose
Database is enabled
Instance -MGMTDB is running on node rac1. Instance status: Open.

[root@rac1 bin]# ./srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: +OCR/_MGMTDB/PARAMETERFILE/spfile.268.873211787
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: rac_cluster
PDB service: rac_cluster
Cluster name: rac-cluster
Database instance: -MGMTDB

Oracle 12c New Feature Real-Time Database Monitoring


Real-Time Database Monitoring, a new feature in Oracle 12c, extends the Real-Time SQL Monitoring feature introduced in Oracle 11g. The main difference is that SQL Monitoring applies only to a single SQL statement.

Very often we run batch jobs, and those batch jobs in turn invoke many SQL statements. When a batch job suddenly starts running slowly, it becomes very difficult to identify which of the individual SQL statements that make up the job are contributing to the performance issue – or perhaps the batch jobs started running slowly only after a database upgrade, and we need to identify which particular SQL statement or statements have suffered a performance regression after the upgrade.

The API used for Real-Time Database Monitoring is the DBMS_SQL_MONITOR package with the  BEGIN_OPERATION and END_OPERATION calls.

So what is a Database Operation?

A database operation is a single SQL statement or PL/SQL block, or a set of SQL statements and/or PL/SQL blocks, executed between two points in time.

Basically to monitor a database operation it needs to be given a name along with a begin and end point.

The database operation name along with its execution ID will help us identify the operation and we can use several views for this purpose like V$SQL_MONITOR as well as V$ACTIVE_SESSION_HISTORY via the DBOP_NAME and DBOP_EXEC_ID columns.

Let us look at an example of monitoring database operations using Oracle 12c Database Express.

We create a file called mon.sql and will run it in the SH schema while using Database Express to monitor the operation.

The name of the database operation is DBOPS and we are running a number of SQL statements as part of the same database operation.

DECLARE
n NUMBER;
m  number;
BEGIN
n := dbms_sql_monitor.begin_operation('DBOPS');
END;
/

drop table sales_copy;
CREATE TABLE SALES_COPY AS SELECT * FROM SALES;
INSERT INTO SALES_COPY SELECT * FROM SALES;
COMMIT;
DELETE SALES_COPY;
COMMIT;
SELECT * FROM SALES ;
select * from sales where cust_id=1234;

DECLARE
m NUMBER;
BEGIN
select dbop_exec_id into m from v$sql_monitor
where dbop_name='DBOPS'
and status='EXECUTING';
dbms_sql_monitor.end_operation('DBOPS', m);
END;
/

From the Database Express 12c Performance menu, go to Performance Hub > Monitored SQL.

In this figure we can see that the DBOPS database operation is still running.

Click the DBOPS link in the ID column.

We can see the various SQL statements which are running as part of the operation, and we can also see that one particular SQL statement is taking much more database time than the other three SQL IDs.

The DELETE SALES_COPY statement is taking over 30 seconds of database time, compared to around a second for each of the other statements, and it is consuming close to 2 million buffer gets as well.

So we know which single SQL statement is the most costly within this particular database operation.

We can now see that the database operation is finally complete and it has taken 42 seconds of database time.
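
The same information can be confirmed from SQL*Plus without Database Express – a quick sketch against V$SQL_MONITOR using the DBOP_NAME and DBOP_EXEC_ID columns mentioned earlier (the STATUS column should no longer show EXECUTING once END_OPERATION has run):

SQL> select dbop_name, dbop_exec_id, status
  2  from v$sql_monitor
  3  where dbop_name = 'DBOPS';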

Oracle Grid Infrastructure Patch Set Update 12.1.0.2.2 (Jan 2015)


This note details the process of applying the Grid Infrastructure 12.1.0.2 PSU Jan 2015 patch on a two node Linux x86-64 RAC cluster.

The patch required is 19954978 and this in turn will apply the 19769480 patch to the database home as well.

The patch can be applied in a rolling fashion and so involves minimal downtime.

Before applying the patch we need to do a couple of things first (on both nodes).

  • Ensure OPatch is at least version 12.1.0.1.5 in both the Grid Infrastructure and Database homes on both nodes.

[oracle@rac1 OPatch]$ ./opatch version
OPatch Version: 12.1.0.1.6

OPatch succeeded.

 

  • Create an OCM response file - again on both nodes

[oracle@rac1 dbhome_1]$ echo $ORACLE_HOME
/u02/app/oracle/product/12.1.0/dbhome_1

[oracle@rac1 dbhome_1]$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/file.rsp

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (/tmp/file.rsp) was successfully created.

 

  • Create a directory for the PSU patch - make sure the directory has no contents.

[oracle@rac1 dbhome_1]$ cd /u02/oracle/software/
[oracle@rac1 software]$ mkdir GI_PSU_JAN15
[oracle@rac1 software]$ cd GI_PSU_JAN15/
[oracle@rac1 GI_PSU_JAN15]$ mv /media/sf_software/p19954978_121020_Linux-x86-64.zip .

 

Check for any patch conflicts

[root@rac1 OPatch]# ./opatchauto apply /u02/oracle/software/GI_PSU_JAN15/19954978 -analyze -ocmrf /tmp/file.rsp -oh /u02/app/12.1.0/grid
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.6
OUI Version : 12.1.0.2.0
Running from : /u02/app/12.1.0/grid

opatchauto log file: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/19954978/opatch_gi_2015-03-02_21-46-22_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

OCM RSP file has been ignored in analyze mode.

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u02/oracle/software/GI_PSU_JAN15/19954978
Grid Infrastructure Patch(es): 19769473 19769479 19769480 19872484
DB Patch(es): 19769479 19769480

Patch Validation: Successful
User specified following Grid Infrastructure home:
/u02/app/12.1.0/grid

Analyzing patch(es) on "/u02/app/12.1.0/grid" …
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769473" successfully analyzed on "/u02/app/12.1.0/grid" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769479" successfully analyzed on "/u02/app/12.1.0/grid" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769480" successfully analyzed on "/u02/app/12.1.0/grid" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19872484" successfully analyzed on "/u02/app/12.1.0/grid" for apply.

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /u02/app/12.1.0/grid: 19769473,19769479,19769480,19872484

opatchauto succeeded.

[root@rac1 OPatch]# ./opatchauto apply /u02/oracle/software/GI_PSU_JAN15/19954978 -analyze -ocmrf /tmp/file.rsp -database orcl
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.6
OUI Version : 12.1.0.2.0
Running from : /u02/app/oracle/product/12.1.0/dbhome_1

opatchauto log file: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/19954978/opatch_gi_2015-03-02_21-49-41_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

OCM RSP file has been ignored in analyze mode.

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u02/oracle/software/GI_PSU_JAN15/19954978
Grid Infrastructure Patch(es): 19769473 19769479 19769480 19872484
DB Patch(es): 19769479 19769480

Patch Validation: Successful
User specified the following DB home(s) for this session:
/u02/app/oracle/product/12.1.0/dbhome_1

Analyzing patch(es) on "/u02/app/oracle/product/12.1.0/dbhome_1" …
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769479" successfully analyzed on "/u02/app/oracle/product/12.1.0/dbhome_1" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769480" successfully analyzed on "/u02/app/oracle/product/12.1.0/dbhome_1" for apply.

[WARNING] The local database(s) on "/u02/app/oracle/product/12.1.0/dbhome_1" is not running. SQL changes, if any, cannot be analyzed.

Apply Summary:
Following patch(es) are successfully analyzed:
DB Home: /u02/app/oracle/product/12.1.0/dbhome_1: 19769479,19769480

opatchauto succeeded.

 

Ensure that you have enough disk space for the Jan 2015 PSU!!

Although the patch is around 800 MB, when we unzip the patch it occupies over 3 GB of disk space. In addition the patch application requires over 12 GB of free disk space on the Grid Infrastructure home and over 5 GB of free space for the Database home.

Otherwise the patch will fail with an error like the one shown below:

Applying patch(es) to "/u02/app/oracle/product/12.1.0/dbhome_1" …
Command "/u02/app/oracle/product/12.1.0/dbhome_1/OPatch/opatch napply -phBaseFile /tmp/OraDB12Home1_oracle_patchList -local -invPtrLoc /u02/app/12.1.0/grid/oraInst.loc -oh /u02/app/oracle/product/12.1.0/dbhome_1 -silent -ocmrf /tmp/file.rsp" execution failed:
UtilSession failed:
Prerequisite check “CheckSystemSpace” failed.

I had to create another mount point with adequate disk space and create a soft link from the existing mount point holding the GI and Database Oracle Homes to the new mount point.

 

  • Apply the Jan 2015 PSU patch

[root@rac1 OPatch]# ./opatchauto apply /u01/app/GI_PSU_JAN15/19954978 -ocmrf /tmp/file.rsp
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.6
OUI Version : 12.1.0.2.0
Running from : /u02/app/12.1.0/grid

opatchauto log file: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/19954978/opatch_gi_2015-03-03_11-53-43_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u01/app/GI_PSU_JAN15/19954978
Grid Infrastructure Patch(es): 19769473 19769479 19769480 19872484
DB Patch(es): 19769479 19769480

Patch Validation: Successful
Grid Infrastructure home:
/u02/app/12.1.0/grid
DB home(s):
/u02/app/oracle/product/12.1.0/dbhome_1

Performing prepatch operations on RAC Home (/u02/app/oracle/product/12.1.0/dbhome_1) … Successful
Following database(s) and/or service(s) were stopped and will be restarted later during the session: orcl

Applying patch(es) to "/u02/app/oracle/product/12.1.0/dbhome_1" …
Patch "/u01/app/GI_PSU_JAN15/19954978/19769479" successfully applied to "/u02/app/oracle/product/12.1.0/dbhome_1".
Patch "/u01/app/GI_PSU_JAN15/19954978/19769480" successfully applied to "/u02/app/oracle/product/12.1.0/dbhome_1".

Performing prepatch operations on CRS Home… Successful

Applying patch(es) to "/u02/app/12.1.0/grid" …
Patch "/u01/app/GI_PSU_JAN15/19954978/19769473" successfully applied to "/u02/app/12.1.0/grid".
Patch "/u01/app/GI_PSU_JAN15/19954978/19769479" successfully applied to "/u02/app/12.1.0/grid".
Patch "/u01/app/GI_PSU_JAN15/19954978/19769480" successfully applied to "/u02/app/12.1.0/grid".
Patch "/u01/app/GI_PSU_JAN15/19954978/19872484" successfully applied to "/u02/app/12.1.0/grid".

Performing postpatch operations on CRS Home… Successful

Performing postpatch operations on RAC Home (/u02/app/oracle/product/12.1.0/dbhome_1) … Successful

SQL changes, if any, are applied successfully on the following database(s): orcl

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u02/app/12.1.0/grid: 19769473,19769479,19769480,19872484
DB Home: /u02/app/oracle/product/12.1.0/dbhome_1: 19769479,19769480

opatchauto succeeded.

 

  • Loading Modified SQL Files into the Database

On only one node of the RAC cluster we need to start the database instance and execute the following command:

cd $ORACLE_HOME/OPatch

./datapatch -verbose

 

  • Verify the patch installation

SQL> select patch_id, VERSION, ACTION,STATUS, DESCRIPTION from dba_registry_sqlpatch;

PATCH_ID VERSION ACTION STATUS
---------- -------------------- --------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
19769480 12.1.0.2 APPLY SUCCESS
Database Patch Set Update : 12.1.0.2.2 (19769480)

[oracle@rac1 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.6
Copyright (c) 2015, Oracle Corporation. All rights reserved.

Oracle Home : /u02/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /u02/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version : 12.1.0.1.6
OUI version : 12.1.0.2.0
Log file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2015-03-03_15-41-19PM_1.log

Lsinventory Output file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-03-03_15-41-19PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 12c 12.1.0.2.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch 19769480 : applied on Tue Mar 03 07:28:45 WST 2015
Unique Patch ID: 18350083
Patch description: "Database Patch Set Update : 12.1.0.2.2 (19769480)"
Created on 15 Dec 2014, 06:54:52 hrs PST8PDT
Bugs fixed:
20284155, 19157754, 18885870, 19303936, 19708632, 19371175, 18618122
19329654, 19075256, 19074147, 19044962, 19289642, 19068610, 18988834
19028800, 19561643, 19058490, 19390567, 18967382, 19174942, 19174521
19176223, 19501299, 19178851, 18948177, 18674047, 19723336, 19189525
19001390, 19176326, 19280225, 19143550, 18250893, 19180770, 19155797
19016730, 19185876, 18354830, 19067244, 18845653, 18849537, 18964978
19065556, 19440586, 19439759, 19024808, 18952989, 18990693, 19052488
19189317, 19409212, 19124589, 19154375, 19279273, 19468347, 19054077
19048007, 19248799, 19018206, 18921743, 14643995, 18456643, 16870214
19434529, 19706965, 17835294, 20074391, 18791688, 19197175, 19134173
19174430, 19050649, 19769480, 19077215, 19577410, 18288842, 18436647
19520602, 19149990, 19076343, 19195895, 18610915, 19068970, 19518079
19304354, 19001359, 19676905, 19309466, 19382851, 18964939, 16359751
19022470, 19532017, 19597439, 18674024, 19430401

Patch 19769479 : applied on Tue Mar 03 07:28:27 WST 2015
Unique Patch ID: 18256426
Patch description: "OCW Patch Set Update : 12.1.0.2.2 (19769479)"
Created on 22 Dec 2014, 20:20:11 hrs PST8PDT
Bugs fixed:
19700294, 19164099, 19331454, 18589889, 19139608, 19280860, 18508710
18955644, 19061429, 19146822, 18798432, 19133945, 19341538, 18946768
19135521, 19537762, 19361757, 19187207, 19302350, 19130141, 16286734
19699720, 19168690, 19266658, 18762843, 18899171, 18945249, 19045143
19146980, 19244316, 19184799, 19471722, 18634372, 19027351, 19205086
18707416, 19184188, 19131709, 19281106, 19537547, 18862203, 19079087
19031737, 20006646, 18991776, 18439295, 19380733, 19150517, 19148367
18968981, 20231741, 18943696, 19217019, 18135723, 19163425, 19524857
18849021, 18730096, 18890943, 18975620, 19205617, 18861196, 19154753
17940721, 19150313, 18843054, 18708349, 19522313, 18748932, 18835283
18953639, 19184765, 19499021, 19067804, 19046190, 19371270, 19051385
19318983, 19209951, 19054979, 19050688, 19154673, 18752378, 19226141
19053891, 18871287, 19150088, 18998228, 18922918, 18980002, 19013444
19683886, 19234177, 18956780, 18998379, 20157569, 18777835, 19273577
19026993, 17338864, 19367276, 19075747, 19513650, 18990354, 19288396
19702758, 19427050, 18952577, 19414274, 19127078, 19147513, 18910443
20053557, 19473088, 19315567, 19148982, 18290252, 19178517, 18813323
19500293, 19529729, 18643483, 19455563, 19134098, 18523468, 19277814
19319904, 18703978, 19071526, 18536826, 18965694, 19703246, 19292605
19226858, 18850051, 19602208, 19192901, 18417590, 19370739, 18920408
18636884, 18776786, 18989446, 19148793, 19043795, 19585454, 18260170
18317489, 19479503, 19029647, 19179158, 18919682, 18901356, 19140712
19807548, 19124972, 18678829, 18910748, 18849896, 19147509, 19076165
18953878, 19273758, 19498411, 18964974, 18999195, 18759724, 18835366
19459023, 19184276, 19013789, 19207286, 18950232, 19680763, 19259765
19066844, 19148791, 19234907, 19538714, 19449737, 19649640, 18962892
19062675, 19187515, 19513969, 19513888, 19230771, 18859710, 19504641
19453778, 19341481, 19343245, 18304090, 19314048, 19473851, 19068333
18834934, 18843572, 19241655, 19470791, 19458082, 18242738, 18894342
19185148, 18945435, 18372060, 19232454, 18953889, 18541110, 19319192
19023430, 19204743, 19140711, 19259290, 19178629, 19045388, 19304104
19241857, 19522571, 19140891, 19076778, 18875012, 19270660, 19457575
19066699, 18861564, 19021575, 19069755, 19273760, 18715884, 19225265
19584688, 18798573, 19018001, 19325701, 19292272, 18819158, 19270956
19068003, 18937186, 19049721, 19368917, 19222693, 18700893, 18406774
18868829, 19010177, 19141785, 19163887, 18852058, 18715868, 19538241, 19804032

Rac system comprising of multiple nodes
Local node = rac1
Remote node = rac2

--------------------------------------------------------------------------------

OPatch succeeded.
[oracle@rac1 OPatch]$

[oracle@rac1 OPatch]$ . oraenv
ORACLE_SID = [orcl1] ? +ASM1
The Oracle base remains unchanged with value /u02/app/oracle
[oracle@rac1 OPatch]$ cd /u02/app/12.1.0/grid/OPatch
[oracle@rac1 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.6
Copyright (c) 2015, Oracle Corporation. All rights reserved.

Oracle Home : /u02/app/12.1.0/grid
Central Inventory : /u01/app/oraInventory
from : /u02/app/12.1.0/grid/oraInst.loc
OPatch version : 12.1.0.1.6
OUI version : 12.1.0.2.0
Log file location : /u02/app/12.1.0/grid/cfgtoollogs/opatch/opatch2015-03-03_15-43-45PM_1.log

Lsinventory Output file location : /u02/app/12.1.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2015-03-03_15-43-45PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure 12c 12.1.0.2.0
There are 1 products installed in this Oracle Home.

Interim patches (4) :

Patch 19872484 : applied on Tue Mar 03 09:54:54 WST 2015
Unique Patch ID: 18291456
Patch description: "WLM Patch Set Update: 12.1.0.2.2 (19872484)"
Created on 2 Dec 2014, 23:18:41 hrs PST8PDT
Bugs fixed:
19016964, 19582630

Patch 19769480 : applied on Tue Mar 03 09:54:49 WST 2015
Unique Patch ID: 18350083
Patch description: "Database Patch Set Update : 12.1.0.2.2 (19769480)"
Created on 15 Dec 2014, 06:54:52 hrs PST8PDT
Bugs fixed:
20284155, 19157754, 18885870, 19303936, 19708632, 19371175, 18618122
19329654, 19075256, 19074147, 19044962, 19289642, 19068610, 18988834
19028800, 19561643, 19058490, 19390567, 18967382, 19174942, 19174521
19176223, 19501299, 19178851, 18948177, 18674047, 19723336, 19189525
19001390, 19176326, 19280225, 19143550, 18250893, 19180770, 19155797
19016730, 19185876, 18354830, 19067244, 18845653, 18849537, 18964978
19065556, 19440586, 19439759, 19024808, 18952989, 18990693, 19052488
19189317, 19409212, 19124589, 19154375, 19279273, 19468347, 19054077
19048007, 19248799, 19018206, 18921743, 14643995, 18456643, 16870214
19434529, 19706965, 17835294, 20074391, 18791688, 19197175, 19134173
19174430, 19050649, 19769480, 19077215, 19577410, 18288842, 18436647
19520602, 19149990, 19076343, 19195895, 18610915, 19068970, 19518079
19304354, 19001359, 19676905, 19309466, 19382851, 18964939, 16359751
19022470, 19532017, 19597439, 18674024, 19430401

Patch 19769479 : applied on Tue Mar 03 09:54:10 WST 2015
Unique Patch ID: 18256426
Patch description: "OCW Patch Set Update : 12.1.0.2.2 (19769479)"
Created on 22 Dec 2014, 20:20:11 hrs PST8PDT
Bugs fixed:
19700294, 19164099, 19331454, 18589889, 19139608, 19280860, 18508710
18955644, 19061429, 19146822, 18798432, 19133945, 19341538, 18946768
19135521, 19537762, 19361757, 19187207, 19302350, 19130141, 16286734
19699720, 19168690, 19266658, 18762843, 18899171, 18945249, 19045143
19146980, 19244316, 19184799, 19471722, 18634372, 19027351, 19205086
18707416, 19184188, 19131709, 19281106, 19537547, 18862203, 19079087
19031737, 20006646, 18991776, 18439295, 19380733, 19150517, 19148367
18968981, 20231741, 18943696, 19217019, 18135723, 19163425, 19524857
18849021, 18730096, 18890943, 18975620, 19205617, 18861196, 19154753
17940721, 19150313, 18843054, 18708349, 19522313, 18748932, 18835283
18953639, 19184765, 19499021, 19067804, 19046190, 19371270, 19051385
19318983, 19209951, 19054979, 19050688, 19154673, 18752378, 19226141
19053891, 18871287, 19150088, 18998228, 18922918, 18980002, 19013444
19683886, 19234177, 18956780, 18998379, 20157569, 18777835, 19273577
19026993, 17338864, 19367276, 19075747, 19513650, 18990354, 19288396
19702758, 19427050, 18952577, 19414274, 19127078, 19147513, 18910443
20053557, 19473088, 19315567, 19148982, 18290252, 19178517, 18813323
19500293, 19529729, 18643483, 19455563, 19134098, 18523468, 19277814
19319904, 18703978, 19071526, 18536826, 18965694, 19703246, 19292605
19226858, 18850051, 19602208, 19192901, 18417590, 19370739, 18920408
18636884, 18776786, 18989446, 19148793, 19043795, 19585454, 18260170
18317489, 19479503, 19029647, 19179158, 18919682, 18901356, 19140712
19807548, 19124972, 18678829, 18910748, 18849896, 19147509, 19076165
18953878, 19273758, 19498411, 18964974, 18999195, 18759724, 18835366
19459023, 19184276, 19013789, 19207286, 18950232, 19680763, 19259765
19066844, 19148791, 19234907, 19538714, 19449737, 19649640, 18962892
19062675, 19187515, 19513969, 19513888, 19230771, 18859710, 19504641
19453778, 19341481, 19343245, 18304090, 19314048, 19473851, 19068333
18834934, 18843572, 19241655, 19470791, 19458082, 18242738, 18894342
19185148, 18945435, 18372060, 19232454, 18953889, 18541110, 19319192
19023430, 19204743, 19140711, 19259290, 19178629, 19045388, 19304104
19241857, 19522571, 19140891, 19076778, 18875012, 19270660, 19457575
19066699, 18861564, 19021575, 19069755, 19273760, 18715884, 19225265
19584688, 18798573, 19018001, 19325701, 19292272, 18819158, 19270956
19068003, 18937186, 19049721, 19368917, 19222693, 18700893, 18406774
18868829, 19010177, 19141785, 19163887, 18852058, 18715868, 19538241, 19804032

Patch 19769473 : applied on Tue Mar 03 09:52:59 WST 2015
Unique Patch ID: 18256364
Patch description: "ACFS Patch Set Update : 12.1.0.2.2 (19769473)"
Created on 2 Dec 2014, 23:02:26 hrs PST8PDT
Bugs fixed:
19452723, 19078259, 19919907, 18900953, 19127216, 18934139, 19844362
19335268, 18951113, 18899600, 18851012, 19149476, 19517835, 19428756
19183802, 19013966, 19051391, 19690653, 19195735, 19355146, 19001684
19509898, 19053182, 19644505, 19593769, 19610001, 19475588, 19353057
18957085, 19279106, 19270227, 19201087, 19184398, 19649858, 19450090
19502657, 19859183, 19557156, 18877486, 19528981, 18510745, 18915417
19134464, 19060056, 18955907

Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
2888253033 rac1,rac2

--------------------------------------------------------------------------------

OPatch succeeded.

Oracle 12c Partitioning New Features


Online Move Partition

In Oracle 12c we can now move as well as compress partitions online while DML transactions on the partitioned table are in progress.

In earlier versions we would get an error like the one shown below if we attempted to move a partition while a DML statement on the partitioned table was in progress.

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

This ties in with the 12c new features around Information Lifecycle Management, where tables (and partitions) can be moved to low-cost storage and/or compressed as part of an ILM policy. We would not want to impact any DML statements that are in progress while the partitions are being moved or compressed – hence the online capability.

Another improvement in 12c is that an online partition move no longer leaves the associated partitioned indexes in an unusable state – the UPDATE INDEXES ONLINE clause maintains both the global and local indexes on the table.

SQL> ALTER TABLE sales MOVE PARTITION sales_q2_1998 TABLESPACE users
2  UPDATE INDEXES ONLINE;

Table altered.
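
The same ONLINE clause can be combined with compression as part of the move – a sketch, assuming basic table compression is appropriate for this data:

SQL> ALTER TABLE sales MOVE PARTITION sales_q2_1998 TABLESPACE users
  2  COMPRESS UPDATE INDEXES ONLINE;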

 

Interval Reference Partitioning

Oracle 11g introduced both Interval and Reference partitioning. In 12c this is taken one step further by combining the two methods: a child table can now be reference partitioned against a parent table that uses interval partitioning.

So two things to keep in mind.

A partition is not created in the child table at the moment the interval partition appears in the parent table; the corresponding child partition is only created when rows are inserted into the child table.

When the child partition is created, it inherits its name from the corresponding partition in the parent table.

Let us look at an example using the classic ORDERS and ORDER_ITEMS tables, which have a parent-child relationship; here the parent ORDERS table is interval partitioned.

CREATE TABLE "OE"."ORDERS_PART"
 (    
"ORDER_ID" NUMBER(12,0) NOT NULL,
"ORDER_DATE" TIMESTAMP (6)  CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ENABLE,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6,0) ,
"ORDER_STATUS" NUMBER(2,0),
"ORDER_TOTAL" NUMBER(8,2),
"SALES_REP_ID" NUMBER(6,0),
"PROMOTION_ID" NUMBER(6,0),
CONSTRAINT ORDERS_PART_pk PRIMARY KEY (ORDER_ID)
)
PARTITION BY RANGE (ORDER_DATE)
INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
(PARTITION P_2006 VALUES LESS THAN (TIMESTAMP'2007-01-01 00:00:00 +00:00'),
PARTITION P_2007 VALUES LESS THAN (TIMESTAMP'2008-01-01 00:00:00 +00:00'),
PARTITION P_2008 VALUES LESS THAN (TIMESTAMP'2009-01-01 00:00:00 +00:00')
)
;

CREATE TABLE "OE"."ORDER_ITEMS_PART"
(    
"ORDER_ID" NUMBER(12,0) NOT NULL,
"LINE_ITEM_ID" NUMBER(3,0) NOT NULL ENABLE,
"PRODUCT_ID" NUMBER(6,0) NOT NULL ENABLE,
"UNIT_PRICE" NUMBER(8,2),
"QUANTITY" NUMBER(8,0),
CONSTRAINT "ORDER_ITEMS_PART_FK" FOREIGN KEY ("ORDER_ID")
REFERENCES "OE"."ORDERS_PART" ("ORDER_ID") ON DELETE CASCADE )
PARTITION BY REFERENCE (ORDER_ITEMS_PART_FK)
;

Note the partitions in the parent table

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDERS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008

We can see that the child table has inherited the same partitions from the parent table

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDER_ITEMS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008

We now insert a new row into the table which leads to the creation of a new partition automatically

SQL> INSERT INTO ORDERS_PART
  2   VALUES
  3   (9999,'17-MAR-15 01.00.00.000000 PM', 'DIRECT',147,5,1000,163,NULL);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDERS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008
SYS_P301

Note that at this point the child table still has only 3 partitions; a new partition corresponding to the parent partition will only be created when rows are inserted into the child table.

We now insert some rows into the child table – note that the row insertions lead to a new partition being created in the child table, corresponding to the new parent partition.

SQL> INSERT INTO ORDER_ITEMS_PART
  2  VALUES
  3  (9999,1,2289,10,100);

1 row created.

SQL> INSERT INTO ORDER_ITEMS_PART
  2   VALUES
  3  (9999,2,2268,500,1);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDER_ITEMS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008
SYS_P301

TRUNCATE CASCADE

In Oracle 12c we can add the CASCADE option to the TRUNCATE TABLE or ALTER TABLE TRUNCATE PARTITION commands.

The CASCADE option truncates all child tables that reference the parent table, provided the referential constraint has been created with the ON DELETE CASCADE option.

At the partition level in a reference-partitioned model, TRUNCATE PARTITION ... CASCADE also cascades to the corresponding partitions in the child table, as shown in the example below.

SQL> alter table orders_part truncate partition SYS_P301 cascade;

Table truncated.


SQL> select count(*) from orders_part partition (SYS_P301);

  COUNT(*)
----------
         0

SQL>  select count(*) from order_items_part partition (SYS_P301);

  COUNT(*)
----------
         0
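
The CASCADE keyword works the same way at the table level, truncating the parent and every child linked to it via ON DELETE CASCADE – a sketch against the same pair of tables:

SQL> REM removes all rows from ORDERS_PART and ORDER_ITEMS_PART
SQL> truncate table orders_part cascade;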

Multi-Partition Maintenance Operations

In Oracle 12c we can add, truncate or drop multiple partitions as part of a single operation.

In versions prior to 12c, the SPLIT and MERGE PARTITION operations could only be carried out on two partitions at a time. If we had a table with, say, 10 partitions that we needed to merge, we had to issue 9 separate DDL statements.

Now with a single command we can roll out data into smaller partitions or roll up data into a larger partition.

CREATE TABLE sales
( prod_id       NUMBER(6)
, cust_id       NUMBER
, time_id       DATE
, channel_id    CHAR(1)
, promo_id      NUMBER(6)
, quantity_sold NUMBER(3)
, amount_sold   NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
( PARTITION sales_q1_2014 VALUES LESS THAN (TO_DATE('01-APR-2014','dd-MON-yyyy'))
, PARTITION sales_q2_2014 VALUES LESS THAN (TO_DATE('01-JUL-2014','dd-MON-yyyy'))
, PARTITION sales_q3_2014 VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
, PARTITION sales_q4_2014 VALUES LESS THAN (TO_DATE('01-JAN-2015','dd-MON-yyyy'))
);


ALTER TABLE sales ADD
PARTITION sales_q1_2015 VALUES LESS THAN (TO_DATE('01-APR-2015','dd-MON-yyyy')),
PARTITION sales_q2_2015 VALUES LESS THAN (TO_DATE('01-JUL-2015','dd-MON-yyyy')),
PARTITION sales_q3_2015 VALUES LESS THAN (TO_DATE('01-OCT-2015','dd-MON-yyyy')),
PARTITION sales_q4_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy'));


SQL>  ALTER TABLE sales MERGE PARTITIONS sales_q1_2015,sales_q2_2015,sales_q3_2015,sales_q4_2015  INTO PARTITION sales_2015;

Table altered.

SQL>  ALTER TABLE sales SPLIT PARTITION sales_2015 INTO
  2  (PARTITION sales_q1_2015 VALUES LESS THAN (TO_DATE('01-APR-2015','dd-MON-yyyy')),
  3  PARTITION sales_q2_2015 VALUES LESS THAN (TO_DATE('01-JUL-2015','dd-MON-yyyy')),
  4  PARTITION sales_q3_2015 VALUES LESS THAN (TO_DATE('01-OCT-2015','dd-MON-yyyy')),
  5  PARTITION sales_q4_2015);

Table altered.
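
Dropping (or truncating) several partitions in one statement follows the same pattern – a sketch using two of the quarterly partitions created earlier:

SQL> ALTER TABLE sales DROP PARTITION sales_q3_2014, sales_q4_2014;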

Partial Indexing

In Oracle 12c we can now index only certain partitions of a table while leaving the other partitions without any indexes. For example, we may want the recent partitions, which are subject to heavy OLTP-type activity, to carry no indexes in order to speed up insert activity, while the older partitions of the table are subject to DSS-type queries and benefit from indexing.

We can turn indexing on or off at the table level and then enable or disable it selectively at the partition level.

Have a look at the example below.

CREATE TABLE "SH"."SALES_12C"
(
"PROD_ID" NUMBER NOT NULL ENABLE,
"CUST_ID" NUMBER NOT NULL ENABLE,
"TIME_ID" DATE NOT NULL ENABLE,
"CHANNEL_ID" NUMBER NOT NULL ENABLE,
"PROMO_ID" NUMBER NOT NULL ENABLE,
"QUANTITY_SOLD" NUMBER(10,2) NOT NULL ENABLE,
"AMOUNT_SOLD" NUMBER(10,2) NOT NULL ENABLE
) 
TABLESPACE "EXAMPLE"
INDEXING OFF
PARTITION BY RANGE ("TIME_ID")
(PARTITION "SALES_1995"  VALUES LESS THAN (TO_DATE(' 1996-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1996"  VALUES LESS THAN (TO_DATE(' 1997-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1997"  VALUES LESS THAN (TO_DATE(' 1998-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1998"  VALUES LESS THAN (TO_DATE(' 1999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1999"  VALUES LESS THAN (TO_DATE(' 2000-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_2000"  VALUES LESS THAN (TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON,
PARTITION "SALES_2001"  VALUES LESS THAN (TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON,
PARTITION "SALES_2002"  VALUES LESS THAN (TO_DATE(' 2003-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON
 )
;

Create a local partitioned index on the table and note the size of the local index.

SQL> CREATE INDEX SALES_12C_IND ON SALES_12C (TIME_ID) LOCAL;

Index created.


SQL> SELECT SUM(BYTES)/1048576 FROM USER_SEGMENTS WHERE SEGMENT_NAME='SALES_12C_IND';

SUM(BYTES)/1048576
------------------
                32

We drop the index and create the same index, but this time as a partial index. Since the index has only been created on a few partitions of the table and not the entire table, it is half the size of the original index.

SQL> CREATE INDEX SALES_12C_IND ON SALES_12C (TIME_ID) LOCAL INDEXING PARTIAL;

Index created.

SQL> SELECT SUM(BYTES)/1048576 FROM USER_SEGMENTS WHERE SEGMENT_NAME='SALES_12C_IND';

SUM(BYTES)/1048576
------------------
                16

We can see that for the partitions where indexing is not enabled, the index has been created as UNUSABLE.

SQL> SELECT PARTITION_NAME,STATUS FROM USER_IND_PARTITIONS WHERE INDEX_NAME='SALES_12C_IND';

PARTITION_NAME                 STATUS
------------------------------ --------
SALES_2002                     USABLE
SALES_2001                     USABLE
SALES_2000                     USABLE
SALES_1999                     UNUSABLE
SALES_1998                     UNUSABLE
SALES_1997                     UNUSABLE
SALES_1996                     UNUSABLE
SALES_1995                     UNUSABLE

Note the difference in the EXPLAIN PLAN between the two queries below – they access different partitions of the same table, and in one case the optimizer performs a full table scan while in the other it can use the local partial index.

SQL>  EXPLAIN PLAN FOR
  2  SELECT SUM(QUantity_sold) from sales_12c
  3  where time_id <'01-JAN-97';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2557626605

-------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |           |     1 |    11 |  1925   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |           |     1 |    11 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR|           |   472 |  5192 |  1925   (1)| 00:00:01 |     1 |   KEY |
|*  3 |    TABLE ACCESS FULL      | SALES_12C |   472 |  5192 |  1925   (1)| 00:00:01 |     1 |   KEY |





SQL>  EXPLAIN PLAN FOR
  2   SELECT SUM(QUantity_sold) from sales_12c
  3  where time_id='01-JAN-97';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2794067059

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |               |     1 |    22 |     2   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                                |               |     1 |    22 |            |          |       |       |
|   2 |   VIEW                                         | VW_TE_2       |     2 |    26 |     2   (0)| 00:00:01 |       |       |
|   3 |    UNION-ALL                                   |               |       |       |            |          |       |       |
|*  4 |     FILTER                                     |               |       |       |            |          |       |       |
|   5 |      PARTITION RANGE SINGLE                    |               |     1 |    22 |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|   6 |       TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES_12C     |     1 |    22 |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|*  7 |        INDEX RANGE SCAN                        | SALES_12C_IND |     1 |       |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|*  8 |     FILTER                                     |               |       |       |            |          |       |       |
|   9 |      PARTITION RANGE SINGLE                    |               |     1 |    22 |     2   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|* 10 |       TABLE ACCESS FULL                        | SALES_12C     |     1 |    22 |     2   (0)| 00:00:01 |KEY(AP)|KEY(AP)|


--------------------------------------------------------------------------------------------------------------------------------

Note the new columns INDEXING and DEF_INDEXING in the data dictionary views

SQL> select def_indexing from user_part_tables where table_name='SALES_12C';

DEF
---
OFF


SQL> select indexing from user_indexes where index_name='SALES_12C_IND';

INDEXIN
-------
PARTIAL
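
Indexing can also be switched on or off per partition after the table has been created – a sketch that enables indexing for one of the older partitions (the matching partition of the partial local index should then become usable):

SQL> ALTER TABLE sales_12c MODIFY PARTITION sales_1999 INDEXING ON;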

Asynchronous Global Index Maintenance

In earlier versions, operations like TRUNCATE or DROP PARTITION on even a single partition would render the global indexes unusable, and the indexes had to be rebuilt before the application could use them again.

Now when we issue the same DROP or TRUNCATE PARTITION commands we can use the UPDATE INDEXES clause, which maintains the global indexes and leaves them in a USABLE state.

The global index maintenance is now deferred and is performed by a DBMS_SCHEDULER job called SYS.PMO_DEFERRED_GIDX_MAINT_JOB, which is scheduled to run daily at 2:00 AM.

We can also use the CLEANUP_GIDX procedure in the DBMS_PART package to clean up the global indexes on demand.

A new column ORPHANED_ENTRIES in the DBA/USER/ALL_INDEXES and corresponding *_IND_PARTITIONS views tracks whether a global index (or index partition) still contains stale entries caused by a DROP or TRUNCATE PARTITION operation.

Let us look at an example of the same. Note the important point that the global index is left in a USABLE state even after we perform a TRUNCATE operation on the partitioned table.

SQL>  alter table sales_12c truncate partition SALES_2000 UPDATE INDEXES;

Table truncated.

SQL> select distinct status from user_ind_partitions;

STATUS
--------
USABLE


SQL> select partition_name, ORPHANED_ENTRIES from user_ind_partitions
  2  where index_name='SALES_GIDX';

PARTITION_NAME                 ORP
------------------------------ ---
SYS_P348                       YES
SYS_P347                       YES
SYS_P346                       YES
SYS_P345                       YES
SYS_P344                       YES
SYS_P343                       YES
SYS_P342                       YES
SYS_P341                       YES



SQL> exec dbms_part.cleanup_gidx('SH','SALES_12C');

PL/SQL procedure successfully completed.

SQL> select partition_name, ORPHANED_ENTRIES from user_ind_partitions
  2  where index_name='SALES_GIDX';

PARTITION_NAME                 ORP
------------------------------ ---
SYS_P341                       NO
SYS_P342                       NO
SYS_P343                       NO
SYS_P344                       NO
SYS_P345                       NO
SYS_P346                       NO
SYS_P347                       NO
SYS_P348                       NO
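
Besides DBMS_PART.CLEANUP_GIDX, the orphaned entries can also be cleaned up per index, or by running the maintenance job ahead of its 2:00 AM schedule – a sketch, using the index and job names quoted above:

SQL> alter index sales_gidx coalesce cleanup;

SQL> exec dbms_scheduler.run_job('SYS.PMO_DEFERRED_GIDX_MAINT_JOB')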

12c Multitenancy Backup and Recovery


Here are a few examples of backup and recovery in an Oracle 12c multitenant environment with Container and Pluggable databases involved.

The first thing to keep in mind is the structure of a 12c container and pluggable database. There is only one set of control files and online redo log files, and they belong to the container; the same principle applies to the archived redo log files as well.

Individual pluggable databases do not have redo log files or control files – but they can have individual temporary tablespace tempfiles.
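
A quick way to see this layout is to query the file views from the root and look at the CON_ID column – a sketch (the file names and CON_ID values will of course differ in your environment):

SQL> select con_id, name from v$datafile order by con_id;

SQL> select con_id, name from v$tempfile order by con_id;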

Backup can be performed at the container level.

A single RMAN BACKUP DATABASE command will back up the root container as well as all the pluggable databases.

RMAN> backup database;

Starting backup at 02-APR-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=31 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/app/oracle/oradata/condb1/sysaux01.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/condb1/system01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/condb1/undotbs01.dbf
input datafile file number=00016 name=/u01/app/oracle/oradata/condb1/ggs_data01.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/condb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
input datafile file number=00011 name=/u01/app/oracle/oradata/condb1/pdb1/example01.dbf
input datafile file number=00008 name=/u01/app/oracle/oradata/condb1/pdb1/system01.dbf
input datafile file number=00010 name=/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u01/app/oracle/oradata/condb1/pdb2/sysaux01.dbf
input datafile file number=00015 name=/u01/app/oracle/oradata/condb1/pdb2/example01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/condb1/pdb2/system01.dbf
input datafile file number=00014 name=/u01/app/oracle/oradata/condb1/pdb2/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EAA0B10062AA3A41E0438D15060AC71B/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpor93z_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/u01/app/oracle/oradata/condb1/pdbseed/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/condb1/pdbseed/system01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA792426F4762CDBE0438D15060A3359/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpos2bl_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 02-APR-14

Starting Control File and SPFILE Autobackup at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843812283_9mposw73_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 02-APR-14

Backups can also be performed at the pluggable database level. Note that the control file backed up in this case still belongs to the container database.

Here we have connected via RMAN to the container database and are backing up one of the pluggable databases.

RMAN> backup pluggable  database pdb2;

Starting backup at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u01/app/oracle/oradata/condb1/pdb2/sysaux01.dbf
input datafile file number=00015 name=/u01/app/oracle/oradata/condb1/pdb2/example01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/condb1/pdb2/system01.dbf
input datafile file number=00014 name=/u01/app/oracle/oradata/condb1/pdb2/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EAA0B10062AA3A41E0438D15060AC71B/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T120428_9mq32f70_.bkp tag=TAG20140402T120428 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 02-APR-14

Starting Control File and SPFILE Autobackup at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843825894_9mq336jk_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 02-APR-14

We can also use RMAN to connect to an individual pluggable database instead of the container database.

$ rman target sys/syspasswd@pdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 08:48:30 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
5       Full    905.24M    DISK        00:00:22     02-APR-14
        BP Key: 5   Status: AVAILABLE  Compressed: NO  Tag: TAG20140402T081602
        Piece Name: /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
  List of Datafiles in backup set 5
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  8       Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/system01.dbf
  9       Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
  10      Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
  11      Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/example01.dbf
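
While connected to the pluggable database like this, a plain BACKUP DATABASE backs up only that PDB's own datafiles – a sketch run from the same pdb1 session as above:

RMAN> # backs up only the datafiles belonging to pdb1, not the whole CDB
RMAN> backup database;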

Loss of Tempfile at pluggable database level

The temp file is automatically re-created when the pluggable database is closed and opened again.

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf

SQL> !rm /u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf


SQL> conn sys as sysdba
Enter password:
Connected.
SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> !ls /u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf
/u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf

Loss of Non-System data file at pluggable database level

Online recovery of SYSAUX tablespace

SQL> conn sys/syspasswd@pdb1 as sysdba
Connected.

SQL>  select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/condb1/undotbs01.dbf
/u01/app/oracle/oradata/condb1/pdb1/system01.dbf
/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/condb1/pdb1/example01.dbf

SQL> !rm /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf

SQL> alter tablespace sysaux offline;

Tablespace altered.

$ rman target sys/syspasswd@pdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 10:31:30 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> restore tablespace sysaux;

Starting restore at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 02-APR-14

RMAN> recover tablespace sysaux;

Starting recover at 02-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 02-APR-14

RMAN> alter tablespace sysaux online;

Statement processed

Loss of SYSTEM tablespace datafile at pluggable database level

Note – online recovery cannot be performed at the pluggable database level in this case.

The entire container database has to be shut down and mounted, and the pluggable database's SYSTEM tablespace restored and recovered.


SQL> !rm /u01/app/oracle/oradata/condb1/pdb1/system01.dbf


SQL> alter session set container=pdb1;

Session altered.


SQL> shutdown abort
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/app/oracle/oradata/condb1/pdb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> alter pluggable database pdb1 close;
alter pluggable database pdb1 close
*
ERROR at line 1:
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/app/oracle/oradata/condb1/pdb1/system01.dbf'
ORA-27041: unable to open file

To recover the pluggable database we need to connect to the container database, shut down the container database (this will shut down all the other pluggable databases as well), mount the container database and then recover the pluggable database.


SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 2137886720 bytes
Fixed Size                  2290416 bytes
Variable Size            1207962896 bytes
Database Buffers          922746880 bytes
Redo Buffers                4886528 bytes
Database mounted.

RMAN> restore tablespace pdb1:system;

Starting restore at 02-APR-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/condb1/pdb1/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 02-APR-14

RMAN> recover tablespace pdb1:system;

Starting recover at 02-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 02-APR-14

RMAN> alter database open;

Statement processed


RMAN> alter pluggable database all open read write;

Statement processed

Loss of SYSTEM data file at the Container database level

Note – if we lose the container database SYSTEM datafile we cannot connect to any of the pluggable databases either.

We have to shut down (abort) the container database, mount it and then perform offline recovery of the SYSTEM tablespace.


SQL> !rm /u01/app/oracle/oradata/condb1/system01.dbf

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from dba_objects;
select count(*) from dba_objects
                     *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/u01/app/oracle/oradata/condb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


SQL> alter session set container=pdb1;
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/u01/app/oracle/oradata/condb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00604: error occurred at recursive SQL level 2
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/u01/app/oracle/oradata/condb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> shutdown abort
ORACLE instance shut down.

$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 10:43:29 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area    2137886720 bytes

Fixed Size                     2290416 bytes
Variable Size               1207962896 bytes
Database Buffers             922746880 bytes
Redo Buffers                   4886528 bytes

RMAN> restore tablespace system;

Starting restore at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/condb1/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 02-APR-14

RMAN> recover tablespace system;

Starting recover at 02-APR-14
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 175 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc
archived log for thread 1 with sequence 176 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_176_9mpxgpnl_.arc
archived log for thread 1 with sequence 177 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_177_9mpy4lj5_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc thread=1 sequence=175
media recovery complete, elapsed time: 00:00:01
Finished recover at 02-APR-14


RMAN> alter database open;

Statement processed

RMAN>


RMAN> alter pluggable database all open read write;

Statement processed

Point-in-time Recovery of Pluggable Database

Note – an auxiliary instance is created to perform the point in time recovery of the pluggable database.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    8388302


SQL> conn sh/sh@pdb1
Connected.

SQL> truncate table sales;

Table truncated.


$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 11:31:37 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> run {
2> set until scn  8388302;
3> restore pluggable database pdb1;
4> recover pluggable database pdb1;
5> alter pluggable database pdb1 open resetlogs;
6> }
executing command: SET until clause

Starting restore at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/condb1/pdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/condb1/pdb1/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 02-APR-14

Starting recover at 02-APR-14
current log archived
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='zpDF'

initialization parameters used for automatic instance:
db_name=CONDB1
db_unique_name=zpDF_pitr_pdb1_CONDB1
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
#No auxiliary destination in use
enable_pluggable_database=true
_clone_one_pdb_recovery=true
control_files=/u01/app/oracle/fast_recovery_area/CONDB1/controlfile/o1_mf_9mq17sor_.ctl
#No auxiliary parameter file used


starting up automatic instance CONDB1

Oracle instance started

Total System Global Area    1068937216 bytes

Fixed Size                     2296576 bytes
Variable Size                281019648 bytes
Database Buffers             780140544 bytes
Redo Buffers                   5480448 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  scn 8388302;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
}
executing Memory Script

executing command: SET until clause

Starting restore at 02-APR-14
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=75 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843822011_9mpz9w6k_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843822011_9mpz9w6k_.bkp tag=TAG20140402T110011
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/fast_recovery_area/CONDB1/controlfile/o1_mf_9mq17sor_.ctl
Finished restore at 02-APR-14

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  scn 8388302;
# switch to valid datafilecopies
switch clone datafile  8 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/system01.dbf";
switch clone datafile  9 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf";
switch clone datafile  10 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf";
switch clone datafile  11 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/example01.dbf";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  1 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_system_9mq181xx_.dbf";
set newname for datafile  4 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_undotbs1_9mq181yx_.dbf";
set newname for datafile  3 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf";
set newname for datafile  6 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_users_9mq188rk_.dbf";
set newname for datafile  16 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_ggs_data_9mq188rq_.dbf";
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 4, 3, 6, 16;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/system01.dbf

datafile 9 switched to datafile copy
input datafile copy RECID=8 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf

datafile 10 switched to datafile copy
input datafile copy RECID=9 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf

datafile 11 switched to datafile copy
input datafile copy RECID=10 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/example01.dbf

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 02-APR-14
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_system_9mq181xx_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_undotbs1_9mq181yx_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_users_9mq188rk_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00016 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_ggs_data_9mq188rq_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp

channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:55
Finished restore at 02-APR-14

datafile 1 switched to datafile copy
input datafile copy RECID=16 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_system_9mq181xx_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=17 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_undotbs1_9mq181yx_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=18 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=19 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_users_9mq188rk_.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=20 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_ggs_data_9mq188rq_.dbf

contents of Memory Script:
{
# set requested point in time
set until  scn 8388302;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  4 online";
sql clone "alter database datafile  3 online";
sql clone 'PDB1' "alter database datafile
 8 online";
sql clone 'PDB1' "alter database datafile
 9 online";
sql clone 'PDB1' "alter database datafile
 10 online";
sql clone 'PDB1' "alter database datafile
 11 online";
sql clone "alter database datafile  6 online";
sql clone "alter database datafile  16 online";
# recover pdb
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX", "USERS", "GGS_DATA" pluggable database
 'PDB1'   delete archivelog;
sql clone 'alter database open read only';
plsql <<>>;
plsql <<>>;
# shutdown clone before import
shutdown clone abort
plsql <<  'PDB1');
end; >>>;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  4 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  8 online

sql statement: alter database datafile  9 online

sql statement: alter database datafile  10 online

sql statement: alter database datafile  11 online

sql statement: alter database datafile  6 online

sql statement: alter database datafile  16 online

Starting recover at 02-APR-14
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 175 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc
archived log for thread 1 with sequence 176 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_176_9mpxgpnl_.arc
archived log for thread 1 with sequence 177 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_177_9mpy4lj5_.arc
archived log for thread 1 with sequence 178 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_178_9mpyf1gy_.arc
archived log for thread 1 with sequence 179 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_179_9mpys08g_.arc
archived log for thread 1 with sequence 180 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_180_9mpzn0jl_.arc
archived log for thread 1 with sequence 181 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_181_9mq17sc3_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc thread=1 sequence=175
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_176_9mpxgpnl_.arc thread=1 sequence=176
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_177_9mpy4lj5_.arc thread=1 sequence=177
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_178_9mpyf1gy_.arc thread=1 sequence=178
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_179_9mpys08g_.arc thread=1 sequence=179
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_180_9mpzn0jl_.arc thread=1 sequence=180
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_181_9mq17sc3_.arc thread=1 sequence=181
media recovery complete, elapsed time: 00:00:05
Finished recover at 02-APR-14

sql statement: alter database open read only



Oracle instance shut down


Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf deleted
auxiliary instance file /u01/app/oracle/fast_recovery_area/CONDB1/controlfile/o1_mf_9mq17sor_.ctl deleted
Finished recover at 02-APR-14

Statement processed

RMAN>

SQL> conn sh/sh@pdb1
Connected.

SQL> select count(*) from sales;

  COUNT(*)
----------
    918843


SQL> select con_id,  DB_INCARNATION#, PDB_INCARNATION# , INCARNATION_TIME from v$pdb_incarnation order by con_id;

    CON_ID DB_INCARNATION# PDB_INCARNATION# INCARNATI
---------- --------------- ---------------- ---------
         3               2                0 06-NOV-13
         3               2                1 02-APR-14

SQL> conn / as sysdba
Connected.

SQL> /

    CON_ID DB_INCARNATION# PDB_INCARNATION# INCARNATI
---------- --------------- ---------------- ---------
         1               2                0 06-NOV-13
         1               1                0 24-MAY-13
         2               2                0 06-NOV-13
         2               1                0 24-MAY-13
         3               2                1 02-APR-14
         3               2                0 06-NOV-13
         4               2                0 06-NOV-13
         4               1                0 24-MAY-13

8 rows selected.

Note – FLASHBACK DATABASE has to be enabled at the container database level; it cannot be enabled from within a pluggable database.

SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-03001: unimplemented feature


SQL> conn / as sysdba
Connected.
SQL> alter database flashback on;

Database altered.

SQL>  select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

Flashback database also cannot be performed at the pluggable database level.

SQL> flashback database pdb3 to scn  26541795;
flashback database pdb3 to scn  26541795
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

12.1.0.2 Multitenant Database New Features


Here's a quick look at some of the new features introduced in 12.1.0.2 around pluggable and container databases.

PDB CONTAINERS Clause

Using the CONTAINERS clause, from the root container we can issue a query which selects or aggregates data across multiple pluggable databases.

For example, each pluggable database can contain data for a specific geographic region, and we can issue a query from the root container which aggregates the data obtained from all the individual regions.

The requirement is that we have to create an empty table in the root container with just the structure of the tables contained in the PDB’s.

In this example we have a table called MYOBJECTS and the pluggable databases are DEV1 and DEV2.

Each pluggable database has its own copy of the MYOBJECTS table.

We have a common user C##USER who owns the MYOBJECTS table in all the pluggable databases.
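The listing below does not show the step that creates the empty shell table in the root container, so here is a minimal sketch of what it could look like, assuming it is run in CDB$ROOT as the common user C##USER (the WHERE 1 = 0 predicate copies just the structure, no rows):

-- run in CDB$ROOT as C##USER: empty table with the same structure as MYOBJECTS in the PDBs
CREATE TABLE myobjects AS
SELECT * FROM dba_objects
WHERE 1 = 0;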


SQL> alter session set container=dev1;

Session altered.

SQL> select count(*) from myobjects
  2  where object_type='TABLE';

  COUNT(*)
----------
      2387

SQL> alter session set container=dev2;

Session altered.

SQL> select count(*) from myobjects
  2  where object_type='TABLE';

  COUNT(*)
----------
      2350


Now connect to the root container. We are able to issue a query which aggregates data from both Pluggable databases – DEV1 and DEV2.

Note the root container has a table also called MYOBJECTS – but with no rows.



SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select con_id, name from v$pdbs;

    CON_ID NAME
---------- ------------------------------
         2 PDB$SEED
         3 DEV1
         4 DEV2


SQL> select count(*) from myobjects;

  COUNT(*)
----------
         0


SQL> select count(*) from containers ( myobjects)
  2  where object_type='TABLE'
  3  and con_id in (3,4);

  COUNT(*)
----------
      4737

PDB Subset Cloning

12.1.0.2 extends pluggable database cloning so that we can clone just a subset of a source database. The USER_TABLESPACES clause allows us to specify which tablespaces need to be available in the newly cloned pluggable database.

In this example the source pluggable database (DEV1) has application data located in two tablespaces – USERS and TEST_DATA.

The requirement is to create a clone of the DEV1 pluggable database, but the target database only requires the tables contained in the TEST_DATA tablespace.

This would be useful in a case where we are migrating data from a non-CDB database which contains multiple schemas and we perform some kind of schema consolidation where each schema is self-contained in its own pluggable database.

Note that the MYOBJECTS table is contained in the USERS tablespace and we are creating a new tablespace TEST_DATA which will contain the MYTABLES table. The cloned database only requires the TEST_DATA tablespace.

SQL> alter session set container=dev1;

Session altered.


SQL> select tablespace_name from dba_tables where table_name='MYOBJECTS';

TABLESPACE_NAME
------------------------------
USERS

SQL> select count(*) from system.myobjects;

  COUNT(*)
----------
     90922


SQL> create tablespace test_data
  2  datafile
  3  '/oradata/cdb1/dev1/dev1_test_data01.dbf'
  4  size 50m;

Tablespace created.

SQL> create table system.mytables
  2  tablespace test_data
  3  as select * from dba_tables;

Table created.

SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                TABLESPACE_NAME
---------------------------------------- ------------------------------
/oradata/cdb1/dev1/system01.dbf          SYSTEM
/oradata/cdb1/dev1/sysaux01.dbf          SYSAUX
/oradata/cdb1/dev1/dev1_users01.dbf      USERS
/oradata/cdb1/dev1/dev1_test_data01.dbf  TEST_DATA

We will now create the clone database DEV3 using DEV1 as the source. Note the USER_TABLESPACES clause, which defines the tablespaces we want to be part of the cloned pluggable database.


SQL> ! mkdir /oradata/cdb1/dev3/

SQL> conn / as sysdba
Connected.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE dev3 FROM dev1
FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev3/')
USER_TABLESPACES=('TEST_DATA')  ;

Pluggable database created.

SQL> alter pluggable database dev3 open;

Pluggable database altered.

If we connect to the DEV3 database we can see the list of data files which the PDB comprises.

We can see that the data file which belongs to the USERS tablespace has the MISSING keyword included in its name. While we can now select from tables which were contained in the TEST_DATA tablespace on the source (like MYTABLES), we obviously cannot access tables (like MYOBJECTS) which existed in tablespaces that were not part of the USER_TABLESPACES clause of the CREATE PLUGGABLE DATABASE command.

To clean up the database we can now drop the other tablespaces like USERS which are not required in the cloned database.


SQL> alter session set container=dev3;

Session altered.


select file_name, tablespace_name from dba_data_files;


FILE_NAME                                                              TABLESPACE_NAME
---------------------------------------------------------------------- ------------------------------
/oradata/cdb1/dev3/system01.dbf                                        SYSTEM
/oradata/cdb1/dev3/sysaux01.dbf                                        SYSAUX
/u01/app/oracle/product/12.1.0.2/dbs/MISSING00017                      USERS
/oradata/cdb1/dev3/dev1_test_data01.dbf                                TEST_DATA


SQL> select count(*) from system.mytables;

  COUNT(*)
----------
      2339

SQL> select count(*) from system.myobjects;
select count(*) from system.myobjects
                            *
ERROR at line 1:
ORA-00376: file 21 cannot be read at this time
ORA-01111: name for data file 21 is unknown - rename to correct file
ORA-01110: data file 21: '/u01/app/oracle/product/12.1.0.2/dbs/MISSING00021'



SQL> alter database default tablespace test_data;

Database altered.

SQL> drop tablespace users including contents and datafiles;

Tablespace dropped.

PDB Metadata Clone

There is also an option to create a clone of a pluggable database with just the structure or definition of the source database, but without any of the user or application data in its tables and indexes.

This feature can help in the rapid provisioning of test or development environments where just the structure of the production database is required and after the pluggable database has been created it will be populated with some test data.

In this example we are creating the DEV4 pluggable database which just has the data dictionary and metadata of the source DEV1 database. Note the use of the NO DATA clause.


SQL> conn / as sysdba
Connected.

SQL> ! mkdir /oradata/cdb1/dev4

SQL> CREATE PLUGGABLE DATABASE dev4 FROM dev1
  2  FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev4/')
  3  NO DATA;

Pluggable database created.

SQL> alter pluggable database dev4 open;

Pluggable database altered.

SQL> alter session set container=dev4;

Session altered.

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
         0

SQL> select count(*) from system.mytables;

  COUNT(*)
----------
         0


SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                                              TABLESPACE_NAME
---------------------------------------------------------------------- ------------------------------
/oradata/cdb1/dev4/system01.dbf                                        SYSTEM
/oradata/cdb1/dev4/sysaux01.dbf                                        SYSAUX
/oradata/cdb1/dev4/dev1_users01.dbf                                    USERS
/oradata/cdb1/dev4/dev1_test_data01.dbf                                TEST_DATA


PDB State Management Across CDB Restart

In Oracle 12c version 12.1.0.1, when we started a CDB, by default all the PDB’s except the seed were left in MOUNTED state and we had to issue an explicit ALTER PLUGGABLE DATABASE ALL OPEN command to open all the PDB’s.

SQL> startup;
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.


SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           MOUNTED
DEV2                           MOUNTED
DEV3                           MOUNTED
DEV4                           MOUNTED

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           READ WRITE
DEV2                           READ WRITE
DEV3                           READ WRITE
DEV4                           READ WRITE

Now in 12.1.0.2 using the SAVE STATE command we can preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.

So if a PDB was open in READ WRITE mode when the CDB was shut down (and its state was saved), it will be opened in READ WRITE mode automatically when the CDB is restarted, without the DBA having to execute the ALTER PLUGGABLE DATABASE ALL OPEN command which was required in the earlier 12c release.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database all save state;

Pluggable database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           READ WRITE
DEV2                           READ WRITE
DEV3                           READ WRITE
DEV4                           READ WRITE
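As a side note, the saved states can also be inspected and discarded. The DBA_PDB_SAVED_STATES view and the DISCARD STATE clause used below come from the 12.1.0.2 documentation and were not part of the run above, so treat this as a quick sketch:

SQL> select con_name, state from dba_pdb_saved_states;

SQL> alter pluggable database dev3 discard state;

After discarding its state, DEV3 would again be left in MOUNTED mode on the next CDB restart unless it is reopened and its state saved once more.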

PDB Remote Clone

In 12.1.0.2 we can now create a PDB from a non-CDB source by cloning it over a database link. This feature further enhances the rapid provisioning of pluggable databases.

In non-CDB:

SQL> grant create pluggable database to system;

Grant succeeded.

In CDB root – create a database link to the non-CDB:

SQL> create database link non_cdb_link
  2  connect to system identified by oracle using 'upgr';

Database link created.

SQL> select * from dual@non_cdb_link;

D
-
X

Now shut down the non-CDB and open it in READ ONLY mode.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  826277888 bytes
Fixed Size                  2929792 bytes
Variable Size             322964352 bytes
Database Buffers          494927872 bytes
Redo Buffers                5455872 bytes
Database mounted.

SQL> alter database open read only;

Database altered.


Create the pluggable database DEV5 from the non-CDB source using the database link we just created.


CREATE PLUGGABLE DATABASE dev5 FROM dev1@non_cdb_link
FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev5/');

After the PDB has been created we need to run the noncdb_to_pdb.sql script and then open the PDB.


SQL> alter session set container=dev5;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


SQL> alter pluggable database open;

Pluggable database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/cdb1/undotbs01.dbf
/oradata/cdb1/dev5/system01.dbf
/oradata/cdb1/dev5/sysaux01.dbf
/oradata/cdb1/dev5/users01.dbf
/oradata/cdb1/dev5/aq01.dbf


Oracle Goldengate 12c on DBFS for RAC and Exadata


Let us take a look at the process of configuring Goldengate 12c to work in an Oracle 12c Grid Infrastructure RAC or Exadata environment using DBFS on Linux x86-64.

Simply put, the Oracle Database File System (DBFS) is a standard file system interface on top of files and directories that are stored in database tables as LOBs.

In one of my earlier posts we had seen how we can configure Goldengate in an Oracle 11gR2 RAC environment using ACFS as the shared location.

Until recently Exadata did not support using ACFS but ACFS is now supported on version 12.1.0.2 of the RAC Grid Infrastructure.

In this post we will see how the Oracle DBFS (Database File System) will be setup and configured and used as the shared location for some of the key Goldengate files like the trail files and checkpoint files.

In summary the broad steps involved are:

1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Create the DBFS filesystem
4) Mount the DBFS filesystem
5) Create symbolic links for the Goldengate software directories dirchk, dirpcs, dirprm, dirdat and BR to point to directories on DBFS
6) Create the Application VIP
7) Download the mount-dbfs.sh script from MOS and edit it according to our environment
8) Create the DBFS Cluster Resource
9) Download and install the Oracle Grid Infrastructure Bundled Agent
10) Register Goldengate with the bundled agents using the agctl utility

Install and Configure FUSE

Using the following command check if FUSE has been installed:

lsmod | grep fuse

FUSE can be installed in a couple of ways – either via the Yum repository or using the RPM’s available on the OEL software media.

Using Yum:

yum install kernel-devel
yum install fuse fuse-libs

Via RPM’s:

If installing from the media, then these are the RPM’s which are required:

kernel-devel-2.6.32-358.el6.x86_64.rpm
fuse-2.8.3-4.el6.x86_64.rpm
fuse-devel-2.8.3-4.el6.x86_64.rpm
fuse-libs-2.8.3-4.el6.x86_64.rpm

A group named fuse must be created and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.

For example, if the OS user is ‘oracle’, we use the usermod command to modify the secondary group membership for the oracle user. It is important to list all the groups the user is already a member of so that none of the existing group memberships are lost.

# /usr/sbin/groupadd fuse

# usermod -G dba,fuse oracle

One of the mount options which we will use is called “allow_other” which will enable users other than the user who mounted the DBFS file system to access the file system.

The /etc/fuse.conf needs to have the “user_allow_other” option as shown below.

# cat /etc/fuse.conf
user_allow_other

chmod 644 /etc/fuse.conf

Important: Ensure that the variable LD_LIBRARY_PATH is set and includes the path to $ORACLE_HOME/lib. Otherwise we will get an error when we try to mount the DBFS using the dbfs_client executable.
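For example, something along these lines in the profile of the OS user who mounts DBFS should do (the ORACLE_HOME path below is the RDBMS home used elsewhere in this post; adjust it for your environment):

export ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH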

Create the DBFS tablespaces and mount the DBFS

If the source database used by Goldengate Extract is running on RAC or hosted on Exadata then we will create ONE tablespace for DBFS.

If the target database where Replicat will be applying changes is on RAC or Exadata, then we will create TWO tablespaces for DBFS, each with different logging and caching settings – typically one tablespace is used for the Goldengate trail files and the other for the Goldengate checkpoint files.

If using Exadata then typically an ASM disk group called DBFS_DG will already be available for us to use; otherwise, on a non-Exadata platform, we will create a separate ASM disk group for holding the DBFS files.

Note that since we will be storing Goldengate trail files on DBFS, a best practice is to allocate enough disk and tablespace space to retain at least 12 hours of trail files. We need to keep that in mind when we size the ASM disk group and the DBFS tablespace.

CREATE bigfile TABLESPACE dbfs_ogg_big datafile '+DBFS_DG' SIZE
1000M autoextend ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

Create the DBFS user

CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd 
DEFAULT TABLESPACE dbfs_ogg_big
QUOTA UNLIMITED ON dbfs_ogg_big;

GRANT create session, create table, create view, 
create procedure, dbfs_role TO dbfs_user; 


Create the DBFS Filesystem

To create the DBFS filesystem we connect as the DBFS_USER Oracle user account and either run the dbfs_create_filesystem.sql or dbfs_create_filesystem_advanced.sql script located under $ORACLE_HOME/rdbms/admin directory.

For example:

cd $ORACLE_HOME/rdbms/admin 

sqlplus dbfs_user/dbfs_pswd 


SQL> @dbfs_create_filesystem dbfs_ogg_big  gg_source

OR

SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big  gg_source
      nocompress nodeduplicate noencrypt non-partition 

Where …
o dbfs_ogg_big: tablespace for the DBFS database objects
o gg_source: filesystem name, this can be any string and will appear as a directory under the mount point

If we are configuring DBFS on the Goldengate target or Replicat side of things, it is recommended to use the NOCACHE LOGGING attributes for the tablespace which holds the trail files because of the sequential reading and writing nature of the trail files.

For the checkpoint files on the other hand it is recommended to use CACHING and LOGGING attributes instead.

The example shown below illustrates how we can modify the LOB attributes (assuming we have created two DBFS tablespaces).

SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%'; 

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             NO        YES



SQL> ALTER TABLE dbfs_user.T_DBFS_SM 
     MODIFY LOB (FILEDATA) (CACHE LOGGING); 


SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%';  

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             YES       YES


As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).


# cd /mnt 
# mkdir dbfs
# chown oracle:oinstall dbfs/

Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).

In our 2 node RAC cluster for example these are entries we will make for the ORCL RAC database.

Node A

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl1)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl1')
      )
  (CONNECT_DATA=(SID=orcl1))
)

Node B

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl2)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl2')
      )
  (CONNECT_DATA=(SID=orcl2))
)


 

We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem via the dbfs_mount command. We can either store the password in a text file or we can use Oracle Wallet to encrypt and store the password.

In this example we are not using the Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.

For example:


echo dbfs_pswd > passwd.txt 

nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@orcl -o allow_other,direct_io /mnt/dbfs < ~/passwd.txt &

After the DBFS filesystem is mounted successfully we can see it via the ‘df’ command as shown below. Note that in this case the DBFS tablespace had been sized at around 5 GB, and the allocated and used space reflects that.


$  df -h |grep dbfs

dbfs-dbfs_user@:/     4.9G   11M  4.9G   1% /mnt/dbfs

The command used to unmount the DBFS filesystem would be:

fusermount -u /mnt/dbfs
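If we would rather not keep the password in a plain text file, the Oracle Wallet approach mentioned earlier can be used instead. The outline below is only a sketch and was not part of this setup; the wallet directory, TNS alias and credentials are assumptions based on this environment, and the sqlnet.ora changes are shown as comments:

# create a wallet and store the DBFS credentials in it (wallet directory is a placeholder)
mkstore -wrl /u02/app/oracle/admin/wallet -create
mkstore -wrl /u02/app/oracle/admin/wallet -createCredential orcl dbfs_user dbfs_pswd

# sqlnet.ora under TNS_ADMIN must reference the wallet, for example:
#   WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u02/app/oracle/admin/wallet)))
#   SQLNET.WALLET_OVERRIDE=TRUE

# mount DBFS without passing a password on the command line
nohup $ORACLE_HOME/bin/dbfs_client /@orcl -o wallet,allow_other,direct_io /mnt/dbfs &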

Create links from Oracle Goldengate software directories to DBFS

Create the following directories on DBFS

$ mkdir /mnt/dbfs/gg_source/goldengate
$ cd /mnt/dbfs/gg_source/goldengate
$ mkdir dirchk
$ mkdir dirpcs 
$ mkdir dirprm
$ mkdir dirdat
$ mkdir BR

Make the symbolic links from Goldengate software directories to DBFS

cd /u03/app/oracle/goldengate
mv dirchk dirchk.old
mv dirdat dirdat.old
mv dirpcs dirpcs.old
mv dirprm dirprm.old
mv BR BR.old
ln -s /mnt/dbfs/gg_source/goldengate/dirchk dirchk
ln -s /mnt/dbfs/gg_source/goldengate/dirdat dirdat
ln -s /mnt/dbfs/gg_source/goldengate/dirprm dirprm
ln -s /mnt/dbfs/gg_source/goldengate/dirpcs dirpcs
ln -s /mnt/dbfs/gg_source/goldengate/BR BR

For example :

[oracle@rac2 goldengate]$ ls -l dirdat
lrwxrwxrwx 1 oracle oinstall 26 May 16 15:53 dirdat -> /mnt/dbfs/gg_source/goldengate/dirdat

Also copy the jagent.prm file, which comes out of the box in the dirprm directory, to the dirprm directory on DBFS.

[oracle@rac2 dirprm.old]$ pwd
/u03/app/oracle/goldengate/dirprm.old
[oracle@rac2 dirprm.old]$ cp jagent.prm /mnt/dbfs/gg_source/goldengate/dirprm

Note – in the Extract parameter file(s) we need to include the BR parameter pointing to the BR directory stored on DBFS:

BR BRDIR /mnt/dbfs/gg_source/goldengate/BR
 

Create the Application VIP

Typically the Goldengate source and target databases will not be located on the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, which is a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to a surviving node in the event of a RAC (or Exadata compute) node failure.

Run the appvipcfg command to create the Application VIP as shown in the example below.


$GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root

We have to assign an unused IP address to the Application VIP. We run the following command to identify the value we use for the network parameter as well as the subnet for the VIP.

$ crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet

NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0

As root give the Oracle Database software owner permissions to start the VIP.

$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x 

As the Oracle database software owner start the VIP

$GRID_HOME/bin/crsctl start resource gg_vip_source

Verify the status of the Application VIP


$GRID_HOME/bin/crsctl status resource gg_vip_source

 

Download the mount-dbfs.sh script from MOS

Download the mount-dbfs.sh script from MOS note 1054431.1.

Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:

# dos2unix /tmp/mount-dbfs.sh

Change the ownership of the file to the Oracle Grid Infrastructure owner and also copy the file to the $GRID_HOME/crs/script directory location.

Next make changes to the environment variable settings section of the mount-dbfs.sh script as required. These are the changes I made to the script.

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl

### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1

### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_pswd

### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin

### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl

Create the DBFS Cluster Resource

Before creating the Cluster Resource for DBFS, test the mount-dbfs.sh script:

$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE

$ ./mount-dbfs.sh stop

As the Grid Infrastructure owner create a script called add-dbfs-resource.sh and store it in the $ORACLE_HOME/crs/script directory.

This script will create a Cluster Managed Resource called dbfs_mount by calling the Action Script mount-dbfs.sh which we had created earlier.

Edit the following variables in the script as shown below:

ACTION_SCRIPT
RESNAME
DEPNAME ( this can be the Oracle database or a database service)
ORACLE_HOME

#!/bin/bash
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db
ORACLE_HOME=/u01/app/12.1.0.2/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',\
STOP_DEPENDENCIES='hard($DEPNAME)',\
SCRIPT_TIMEOUT=300"

Execute the script – it should produce no output.

./add-dbfs-resource.sh

 

Download and Install the Oracle Grid Infrastructure Bundled Agent

Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced which provide predefined clusterware resources for applications like Siebel and Goldengate.

The bundled agent for Goldengate provided integration between Oracle Goldengate and dependent resources like the database, filesystem and the network.

The AGCTL agent command line utility can be used to start and stop Goldengate as well as relocate Goldengate resources between nodes in the cluster.

Download the latest version of the agent (6.1) from the URL below:

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

The downloaded file will be xagpack_6.zip.

There is already an xag/bin directory with an agctl executable under the $GRID_HOME directory. We need to install the new bundled agent in a separate directory and ensure that $PATH picks up the new agent's bin directory (or always invoke agctl using its full path).

Unzip the xagpack_6.zip in a temporary location on one of the RAC nodes.

To install the Oracle Grid Infrastructure Agents we run the xagsetup.sh script as shown below:

xagsetup.sh --install --directory <install_directory> [{--nodes node1,node2[,...] | --all_nodes}]
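For example, to install the agent on both RAC nodes into the location from which agctl is invoked later in this post (the directory is an assumption inferred from that path):

./xagsetup.sh --install --directory /home/oracle/xagent --nodes rac1,rac2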

Register Goldengate with the bundled agents using agctl utility

Using agctl utility create the GoldenGate configuration.

Ensure that we are running agctl from the downloaded bundled agent directory and not from the $GRID_HOME/xag/bin directory or ensure that the $PATH variable has been amended as described earlier.

/home/oracle/xagent/bin/agctl add goldengate gg_source --gg_home /u03/app/oracle/goldengate \
--instance_type source \
--nodes rac1,rac2 \
--vip_name gg_vip_source \
--filesystems dbfs_mount --databases ora.orcl.db \
--oracle_home /u02/app/oracle/product/12.1.0/dbhome_1 \
--monitor_extracts ext1,extdp1
 

Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop Goldengate processes. The agctl command will start the Manager process which in turn will start the other processes like Extract, Data Pump and Replicat if we have configured them for automatic restart.

Let us look at some examples of using agctl.

Check the Status – note the DBFS filesystem is also mounted currently on node rac2

$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2


$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

Stop the Goldengate environment

$ ./agctl stop goldengate gg_source 
$ ./agctl status goldengate gg_source
Goldengate  instance ' gg_source ' is not running

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     EXT1        00:00:03      00:01:19
EXTRACT     STOPPED     EXTDP1      00:00:00      00:01:18

Start the Goldengate environment – note the resource has relocated to node rac1 from rac2 and the Goldengate processes on rac2 have been stopped and started on node rac1.

$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac1

GGSCI (rac2.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (rac1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:09      00:00:06
EXTRACT     RUNNING     EXTDP1      00:00:00      00:05:22

We can also see that the agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.

[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0
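Apart from start, stop and status, the whole Goldengate instance can also be moved between nodes on demand. This was not exercised in the run above; based on the bundled agent's documented usage, a manual relocation would look roughly like this:

$ ./agctl relocate goldengate gg_source --node rac1
$ ./agctl status goldengate gg_source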

Let's test the whole thing!

Now that the Goldengate resources are running on node rac1, let us see what happens when we halt that node to simulate a node failure while Goldengate is up and running and the Extract and Data Pump processes are active on the source.

AGCTL and Oracle Clusterware seamlessly relocate all the Goldengate resources (the VIP, DBFS and the Goldengate processes) to the other node, and we see that the Extract and Data Pump processes have been automatically started on node rac2.

[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now

Broadcast message from oracle@rac1.localdomain
[root@rac1 ~]#  (/dev/pts/0) at 19:45 ...

The system is going down for halt NOW!

Connect to the surviving node rac2 and check ……

[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:07      00:00:02
EXTRACT     RUNNING     EXTDP1      00:00:00      00:00:08

Check the Cluster Resource ….

[oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------


Oracle 12c Pluggable Database Upgrade


Until very recently I had really believed the marketing hype and sales pitch about how in 12c database upgrades are so much faster and easier than earlier releases – just unplug the PDB from one container and plug it in to another container and bingo you have an upgraded database!

Partly true …. maybe about 20%!

As Mike Dietrich from Oracle Corp. has rightly pointed out on his great blog (http://blogs.oracle.com/upgrade), it is not as straightforward as suggested in the slides that I am sure many of us have seen at various Oracle conferences showcasing Oracle Database 12c.

I tested out the upgrade of a PDB from version 12.1.0.1 to the latest 12c version 12.1.0.2 and here are the steps taken.

Note: If we are upgrading the entire CDB and all the PDB’s the steps would be different.

In this case we are upgrading just one of the pluggable databases to a higher database software version.
 

Run the preupgrd.sql script and pre-upgrade fixup script

 
Connect to the 12.1.0.1 database that is being upgraded and run the preupgrd.sql script.

The source container database is cdb3 and the PDB which we are upgrading is pdb_gavin.

[oracle@edmbr52p5 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb3

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1 is /u01/app/oracle
[oracle@edmbr52p5 ~]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Aug 21 10:49:21 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter session set container=pdb_gavin;

Session altered.

SQL> @?/rdbms/admin/preupgrd.sql
Loading Pre-Upgrade Package...
Executing Pre-Upgrade Checks...
Pre-Upgrade Checks Complete.
      ************************************************************

Results of the checks are located at:
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/preupgrade.log

Pre-Upgrade Fixup Script (run in source database environment):
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/preupgrade_fixups.sql

Post-Upgrade Fixup Script (run shortly after upgrade):
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/postupgrade_fixups.sql

      ************************************************************

         Fixup scripts must be reviewed prior to being executed.

      ************************************************************

      ************************************************************
                   ====>> USER ACTION REQUIRED  <<====
      ************************************************************

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.


 1) Check Tag:    OLS_SYS_MOVE
    Check Summary: Check if SYSTEM.AUD$ needs to move to SYS.AUD$ before upgrade
    Fixup Summary:
     "Execute olspreupgrade.sql script prior to upgrade."
    +++ Source Database Manual Action Required +++

            You MUST resolve the above error prior to upgrade

      ************************************************************

The execution of the preupgrd.sql script will generate 3 separate files.

1)preupgrade.log
2)preupgrade_fixups.sql
3)postupgrade_fixups.sql

Let us examine the contents of the preupgrade.log file.

Oracle Database Pre-Upgrade Information Tool 08-21-2015 10:50:04
Script Version: 12.1.0.1.0 Build: 006
**********************************************************************
   Database Name:  CDB3
         Version:  12.1.0.1.0
      Compatible:  12.1.0.0.0
       Blocksize:  8192
        Platform:  Linux x86 64-bit
   Timezone file:  V18
**********************************************************************
                          [Renamed Parameters]
                     [No Renamed Parameters in use]
**********************************************************************
**********************************************************************
                    [Obsolete/Deprecated Parameters]
             [No Obsolete or Desupported Parameters in use]
**********************************************************************
                            [Component List]
**********************************************************************
--> Oracle Catalog Views                   [upgrade]  VALID
--> Oracle Packages and Types              [upgrade]  VALID
--> JServer JAVA Virtual Machine           [upgrade]  VALID
--> Oracle XDK for Java                    [upgrade]  VALID
--> Real Application Clusters              [upgrade]  OPTION OFF
--> Oracle Workspace Manager               [upgrade]  VALID
--> OLAP Analytic Workspace                [upgrade]  VALID
--> Oracle Label Security                  [upgrade]  VALID
--> Oracle Database Vault                  [upgrade]  VALID
--> Oracle Text                            [upgrade]  VALID
--> Oracle XML Database                    [upgrade]  VALID
--> Oracle Java Packages                   [upgrade]  VALID
--> Oracle Multimedia                      [upgrade]  VALID
--> Oracle Spatial                         [upgrade]  VALID
--> Oracle Application Express             [upgrade]  VALID
--> Oracle OLAP API                        [upgrade]  VALID
**********************************************************************
           [ Unsupported Upgrade: Tablespace Data Supressed ]
**********************************************************************
**********************************************************************
                          [Pre-Upgrade Checks]
**********************************************************************
ERROR: --> SYSTEM.AUD$ (audit records) Move

    An error occured retrieving a count from SYSTEM.AUD$
    This can happen when the table has already been cleaned up.
    The olspreupgrade.sql script should be re-executed.



WARNING: --> Existing DBMS_LDAP dependent objects

     Database contains schemas with objects dependent on DBMS_LDAP package.
     Refer to the Upgrade Guide for instructions to configure Network ACLs.
     USER APEX_040200 has dependent objects.


**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                     [Post-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ******** Fixed Object Statistics ********
                        *****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
   EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                   ************  Summary  ************

 1 ERROR exist that must be addressed prior to performing your upgrade.
 2 WARNINGS that Oracle suggests are addressed to improve database performance.
 0 INFORMATIONAL messages messages have been reported.

 After your database is upgraded and open in normal mode you must run
 rdbms/admin/catuppst.sql which executes several required tasks and completes
 the upgrade process.

 You should follow that with the execution of rdbms/admin/utlrp.sql, and a
 comparison of invalid objects before and after the upgrade using
 rdbms/admin/utluiobj.sql

 If needed you may want to upgrade your timezone data using the process
 described in My Oracle Support note 977512.1
                   ***********************************

So as part of the pre-upgrade preparation we execute:

SQL> @?/rdbms/admin/olspreupgrade.sql

and 

SQL>  EXECUTE dbms_stats.gather_dictionary_stats;

Unplug the PDB from the 12.1.0.1 Container Database

SQL>  alter session set container=CDB$ROOT;

Session altered.

SQL> alter pluggable database  pdb_gavin unplug into '/home/oracle/pdb_gavin.xml';

Pluggable database altered

Create the PDB in the 12.1.0.2 Container Database

[oracle@edmbr52p5 ~]$ . oraenv
ORACLE_SID = [cdb2] ? cdb1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle

[oracle@edmb]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Aug 21 12:04:10 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT


SQL> create pluggable database pdb_gavin
  2   using '/home/oracle/pdb_gavin.xml'
  3  nocopy
  4  tempfile reuse;

Pluggable database created.

Upgrade the PDB to 12.1.0.2

After the pluggable database has been created in the 12.1.0.2 container, we will open it with the UPGRADE option in order to run the catupgrd.sql database upgrade script.

Some errors are reported when the PDB is opened in this mode; these can safely be ignored because we are in the middle of upgrading the PDB.

SQL> alter pluggable database pdb_gavin open upgrade;

Warning: PDB altered with errors.


SQL> select message, status from pdb_plug_in_violations where type like '%ERR%';

MESSAGE
--------------------------------------------------------------------------------
STATUS
---------
Character set mismatch: PDB character set US7ASCII. CDB character set AL32UTF8.
RESOLVED

PDB's version does not match CDB's version: PDB's version 12.1.0.1.0. CDB's vers
ion 12.1.0.2.0.
PENDING

We now run the catctl.pl Perl script, specifying the PDB name with the -c option (if we were upgrading multiple PDBs here, we would separate each PDB name with a comma). Note that we are also running the upgrade in parallel with -n 4.

[oracle@edm ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@edm admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c "PDB_GAVIN" -n 4 -l /tmp catupgrd.sql

Argument list for [catctl.pl]
SQL Process Count     n = 4
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /tmp
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = 0
Run in                c = PDB_GAVIN
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 0

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrd_catcon_19456.lst
catcon: See /tmp/catupgrd*.log files for output generated by scripts
catcon: See /tmp/catupgrd_*.lst files for spool files, if any
Number of Cpus        = 8
Parallel PDB Upgrades = 2
SQL PDB Process Count = 2
SQL Process Count     = 4

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1_1
PDB_GAVIN
PDB Inclusion:[PDB_GAVIN] Exclusion:[]

Starting
[/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl catctl.pl -c 'PDB_GAVIN' -n 2 -l /tmp -I -i pdb_gavin catupgrd.sql]

Argument list for [catctl.pl]
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /tmp
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = pdb_gavin
Run in                c = PDB_GAVIN
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 1

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrdpdb_gavin_catcon_19562.lst
catcon: See /tmp/catupgrdpdb_gavin*.log files for output generated by scripts
catcon: See /tmp/catupgrdpdb_gavin_*.lst files for spool files, if any
Number of Cpus        = 8
SQL PDB Process Count = 2
SQL Process Count     = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1_1
PDB_GAVIN
PDB Inclusion:[PDB_GAVIN] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[PDB_GAVIN] Exclusion:[]
Serial   Phase #: 0 Files: 1     Time: 15s   PDB_GAVIN
Serial   Phase #: 1 Files: 5     Time: 107s  PDB_GAVIN
Restart  Phase #: 2 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #: 3 Files: 18    Time: 40s   PDB_GAVIN
Restart  Phase #: 4 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #: 5 Files: 5     Time: 43s   PDB_GAVIN
Serial   Phase #: 6 Files: 1     Time: 18s   PDB_GAVIN
Serial   Phase #: 7 Files: 4     Time: 11s   PDB_GAVIN
Restart  Phase #: 8 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #: 9 Files: 62    Time: 110s  PDB_GAVIN
Restart  Phase #:10 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:11 Files: 1     Time: 28s   PDB_GAVIN
Restart  Phase #:12 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:13 Files: 91    Time: 8s    PDB_GAVIN
Restart  Phase #:14 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:15 Files: 111   Time: 15s   PDB_GAVIN
Restart  Phase #:16 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:17 Files: 3     Time: 2s    PDB_GAVIN
Restart  Phase #:18 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:19 Files: 32    Time: 43s   PDB_GAVIN
Restart  Phase #:20 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:21 Files: 3     Time: 11s   PDB_GAVIN
Restart  Phase #:22 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:23 Files: 23    Time: 75s   PDB_GAVIN
Restart  Phase #:24 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:25 Files: 11    Time: 25s   PDB_GAVIN
Restart  Phase #:26 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:27 Files: 1     Time: 1s    PDB_GAVIN
Restart  Phase #:28 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:30 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:31 Files: 257   Time: 29s   PDB_GAVIN
Serial   Phase #:32 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:33 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:34 Files: 1     Time: 3s    PDB_GAVIN
Restart  Phase #:35 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:36 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:37 Files: 4     Time: 62s   PDB_GAVIN
Restart  Phase #:38 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:39 Files: 13    Time: 33s   PDB_GAVIN
Restart  Phase #:40 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:41 Files: 10    Time: 5s    PDB_GAVIN
Restart  Phase #:42 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:43 Files: 1     Time: 7s    PDB_GAVIN
Restart  Phase #:44 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:45 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:46 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:47 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:48 Files: 1     Time: 71s   PDB_GAVIN
Restart  Phase #:49 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:50 Files: 1     Time: 9s    PDB_GAVIN
Restart  Phase #:51 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:52 Files: 1     Time: 41s   PDB_GAVIN
Restart  Phase #:53 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:54 Files: 1     Time: 51s   PDB_GAVIN
Restart  Phase #:55 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:56 Files: 1     Time: 36s   PDB_GAVIN
Restart  Phase #:57 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:58 Files: 1     Time: 37s   PDB_GAVIN
Restart  Phase #:59 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:60 Files: 1     Time: 48s   PDB_GAVIN
Restart  Phase #:61 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:62 Files: 1     Time: 112s  PDB_GAVIN
Restart  Phase #:63 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:64 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only -pdbs PDB_GAVIN > /tmp/catupgrdpdb_gavin_datapatch_upgrade.log 2> /tmp/catupgrdpdb_gavin_datapatch_upgrade.err
returned from sqlpatch
    Time: 3s    PDB_GAVIN
Serial   Phase #:66 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:68 Files: 1     Time: 12s   PDB_GAVIN
Serial   Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -pdbs PDB_GAVIN > /tmp/catupgrdpdb_gavin_datapatch_normal.log 2> /tmp/catupgrdpdb_gavin_datapatch_normal.err
returned from sqlpatch
    Time: 3s    PDB_GAVIN
Serial   Phase #:70 Files: 1     Time: 30s   PDB_GAVIN
Serial   Phase #:71 Files: 1     Time: 4s    PDB_GAVIN
Serial   Phase #:72 Files: 1     Time: 3s    PDB_GAVIN
Serial   Phase #:73 Files: 1     Time: 0s    PDB_GAVIN

Grand Total Time: 1155s PDB_GAVIN

LOG FILES: (catupgrdpdb_gavin*.log)

Upgrade Summary Report Located in:
/u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/cdb1/upgrade/upg_summary.log

Total Upgrade Time:          [0d:0h:19m:15s]

     Time: 1156s For PDB(s)

Grand Total Time: 1156s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:19m:16s]
[oracle@edmbr52p5 admin]$


Run the post upgrade steps

We then start the PDB and run the post-upgrade steps, which include recompiling all invalid objects and gathering fresh statistics on the fixed dictionary objects.

That completes the PDB upgrade – not quite a simple plug and unplug!!

SQL> startup;
Pluggable Database opened.


SQL> @?/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-08-21 12:35:42

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2015-08-21 12:36:02

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 12:36:13
...Compiled 0 out of 3014 objects considered, 0 failed compilation 12:36:13
...271 packages
...263 package bodies
...452 tables
...11 functions
...16 procedures
...3 sequences
...457 triggers
...1320 indexes
...211 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 12:36:13
...Completed key object existence check 12:36:13
...Setting DBMS Registry 12:36:13
...Setting DBMS Registry Complete 12:36:13
...Exiting validate 12:36:13

PL/SQL procedure successfully completed.

SQL>

SQL> EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.



SQL> SELECT NAME,OPEN_MODE FROM V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB_GAVIN                      READ WRITE


Wrong Results On Query With Subquery Using OR EXISTS After upgrade to 12.1.0.2


Recently one of my clients encountered an issue with a SQL query which returned no rows in a 12c database that had been upgraded, but returned rows in the 11g databases that had not yet been upgraded.

The query was


SELECT *
  FROM STORAGE t0
  WHERE ( ( ( ( ( ( (ROWNUM <= 30) AND (t0.BUSINESS_UNIT_ID = 2))   AND (t0.PLCODE = 1001))
                  AND (t0.SM_SERIALNUM = '5500100000149000994'))
                  AND ( (t0.SM_MODDATE IS NULL) OR (t0.SM_MODDATE <= SYSDATE)))
                AND   ( 
                        (t0.DEALER_ID IS NULL)
                         OR 
                        EXISTS   (SELECT t1.CUSTOMER_ID  FROM CUSTOMER_ALL t1 WHERE ( (t1.CUSTOMER_ID = t0.DEALER_ID) AND (t1.CSTYPE <> 'd')))
                        )
        )
        AND (t0.SM_STATUS <> 'b'));

If we added the hint /*+ optimizer_features_enable('11.2.0.4') */ to the query, it worked fine.

After a bit of investigation we found that we could possibly be hitting this bug

Bug 18650065 : WRONG RESULTS ON QUERY WITH SUBQUERY USING OR EXISTS

The solution was either to set the hidden parameter shown below at the session or database level, or to apply patch 18650065, which is now available for download from MOS.

ALTER SESSION SET "_optimizer_null_accepting_semijoin"=FALSE;
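
The same setting can also be applied database-wide rather than per session; this is simply the system-level variant of the command above (underscore parameters should only be set under Oracle Support's guidance, and the setting can be removed once patch 18650065 has been applied):

ALTER SYSTEM SET "_optimizer_null_accepting_semijoin"=FALSE SCOPE=BOTH;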

Patch 18650065 can be applied online in both non-RAC and RAC environments.

For Non-RAC Environments 

$ opatch apply online -connectString orcl:SYS:SYS_PASSWORD

For RAC Environments

2 node RAC example:

$ opatch apply online -connectString orcl1:SYS:SYS_PASSWORD:node1, orcl2:SYS:SYS_PASSWORD:node2


Oracle 12c Resource Manager – CDB and PDB resource plans


In a CDB, multiple pluggable databases share a common set of system and CDB resources, so we can use Resource Manager to prevent the workloads from competing with each other for those resources.

Let us look at an example of managing resources for Pluggable Databases (between PDB’s) at the multitenant Container database level as well as within a particular PDB.

The same can be achieved using 12c Cloud Control, but displayed here are the steps to be performed at the command line using the DBMS_RESOURCE_MANAGER package.

With Resource Manager at the Pluggable Database level, we can limit CPU usage of a particular PDB as well as the number of parallel execution servers which a particular PDB can use.

To allocate resources among PDB's we use the concept of shares: we assign shares to individual PDB's, and the more shares a PDB has, the larger its guaranteed allocation of resources.

At a high level the steps involved include:

• Create a Pending Area

• Create a CDB resource plan

• Create directives for the PDB’s

• Optionally update the default directive, which specifies the resources allocated to newly created PDB's or to any PDB for which no directive has been explicitly defined

• Optionally update the directive which applies by default to the Automatic Maintenance Tasks configured to run in the out-of-the-box maintenance windows

• Validate the Pending Area

• Submit the Pending Area

• Enable the plan at the CDB level by setting the RESOURCE_MANAGER_PLAN parameter

Let us look at an example.

We have 5 Pluggable databases contained in the Container database and we wish to enable resource management at the PDB level.

We wish to guarantee CPU allocation in the ratio 4:3:1:1:1 so that the CPU is distributed among the PDB’s in this manner:

PDBPROD1 : 40%
PDBPROD2: 30%
PDBPROD3: 10%
PDBPROD4 : 10%
PDBPROD5: 10%

Further for PDB’s PDBPROD3, PDBPROD4 and PDBPROD5 we wish to ensure that CPU utilization for these 3 PDB’s never crosses the 70% limit.

Also for these 3 PDB’s we would like to limit the maximum number of parallel execution servers available to the PDB.

The value of 70% means that if the PARALLEL_SERVERS_TARGET initialization parameter is 200, then the PDB cannot use more than a maximum of 140 parallel execution servers. For PDBPROD1 and PDBPROD2 there is no limit, so they can use all 200 parallel execution servers if available.

We also want to limit the resources used by the Automatic Maintenance Tasks jobs when they do execute in a particular job window and also want to specify a default resource allocation limit for newly created PDB’s or those PDB’s where a resource limit directive has not been explicitly defined.
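
As a quick sketch of how these requirements map onto DBMS_RESOURCE_MANAGER calls: the plan name PDBPROD_PLAN and the default/autotask limit values below are illustrative assumptions, while the 4:3:1:1:1 shares and the 70% limits come from the scenario described above.

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(-
>      plan    => 'PDBPROD_PLAN',-
>      comment => 'CPU shares 4:3:1:1:1 across PDBPROD1-PDBPROD5');

SQL> BEGIN
  -- one directive per PDB; the shares give the 4:3:1:1:1 CPU ratio
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'PDBPROD_PLAN', pluggable_database => 'PDBPROD1', shares => 4);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'PDBPROD_PLAN', pluggable_database => 'PDBPROD2', shares => 3);
  FOR i IN 3 .. 5 LOOP
    -- PDBPROD3/4/5: 1 share each, capped at 70% CPU and 70% of parallel servers
    DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
      plan => 'PDBPROD_PLAN', pluggable_database => 'PDBPROD' || i,
      shares => 1, utilization_limit => 70, parallel_server_limit => 70);
  END LOOP;
  -- illustrative values for new/undirected PDBs and for the autotask windows
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_DEFAULT_DIRECTIVE(
    plan => 'PDBPROD_PLAN', new_shares => 1,
    new_utilization_limit => 50, new_parallel_server_limit => 50);
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_AUTOTASK_DIRECTIVE(
    plan => 'PDBPROD_PLAN', new_shares => 1,
    new_utilization_limit => 60, new_parallel_server_limit => 60);
END;
/

SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

SQL> alter system set resource_manager_plan='PDBPROD_PLAN' scope=both;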

Download the note …

Oracle Exadata X5-2 Data Guard Configuration


This note describes the procedure for creating an Oracle 11.2.0.4 Data Guard Physical Standby database with a two-node Real Application Clusters (RAC) Primary and Standby database on an Oracle Exadata X5-2 eighth rack.

The procedure will use RMAN for the creation of the Physical Standby database and will use the DUPLICATE FROM ACTIVE DATABASE method which is available in Oracle 11g.

Note – creation of the Standby database is done online while the Primary database is open and being accessed and no physical RMAN backups are utilized for the purpose of creating the standby database.

The note also describes the process of configuring Data Guard Broker to manage the Data Guard environment and also illustrates how to perform a database role-reversal via a Data Guard switch over operation.
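
As a rough sketch of the core of the procedure, the active duplication itself comes down to an RMAN session along the following lines. The connect strings prim and stby are placeholder TNS aliases for the primary and the standby auxiliary instance, not the actual names used in the note, and in practice the DUPLICATE command would also carry SPFILE and file-name conversion clauses appropriate to the environment:

$ rman

RMAN> connect target sys@prim
RMAN> connect auxiliary sys@stby

RMAN> duplicate target database
2>      for standby
3>      from active database
4>      dorecover
5>      nofilenamecheck;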

 Download the full note ….

Oracle Database In-Memory 12c Release 2 New Features


The Oracle Database In-Memory 12c Release 2 New Features  webinar conducted last week was well received by a global audience and feedback was positive. For those who missed the session you can download the slide deck from the link below. Feedback and questions are welcomed!

12.2_InMemory_new_features

Oracle Database 12c Release 2 (12.2.0.1) upgrade using DBUA


Oracle 12c Release 2 (12.2.0.1) was officially released for on-premise deployment yesterday. I tested an upgrade of one of my test 12.1.0.2 databases using the Database Upgrade Assistant (DBUA) and the upgrade went smoothly.

The Parallel Upgrade command line utility catctl.pl has a number of changes and enhancements as compared to 12c Release 1 and I will discuss that in a later post.

Here are the screen shots of the database upgrade process.

 

[Screenshots upg1 to upg9 of the DBUA upgrade steps]

Note – I converted my database to NOARCHIVELOG mode only because I did not have the recommended free space in the FRA. Don't do this in production: ideally you would take a backup of the archived logs or a level 1 incremental backup, or set a Guaranteed Restore Point, so that you can flash the database back if required.

But I did see that the redo generated by the upgrade process seems to be far more than in earlier version upgrades. Even the DBUA recommendation was to double the Fast Recovery Area space allocation.
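
If the Fast Recovery Area can be sized appropriately, a guaranteed restore point taken just before launching DBUA is a simple fallback; the restore point name used below is arbitrary:

SQL> create restore point before_1220_upgrade guarantee flashback database;

SQL> -- once the upgraded database has been verified, release the space:
SQL> drop restore point before_1220_upgrade;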

 

[Screenshots upg10 to upg22 of the DBUA upgrade steps, continued]

Oracle 12c Release 2 (12.2.0.1.0) Grid Infrastructure Upgrade


I recently performed an upgrade of an Oracle 12c Release 1 (12.1.0.2) Grid Infrastructure environment hosted on a RAC Virtual Box environment on my laptop to the latest release 12c Release 2 12.2.0.1.0 version.

Here are some points to be noted related to the upgrade process:

    • The Grid Infrastructure 12c Release 2 (12.2) software is now available as a single image file for direct download and installation. This greatly simplifies and enables a much quicker installation of the Grid Infrastructure software.

 

    • We just have to extract the image file linuxx64_12201_grid_home.zip into an empty directory where we want the Grid home to be located.

 

    • Once the software has been extracted, we run the gridSetup.sh script, which launches the installer from which we can perform either a fresh installation or an upgrade (a short sketch of these steps follows this list).

 

    • We need to have about 33 GB of free disk space in the ASM disk groups for the upgrade process.

 

    • The mount point which hosts the Grid Infrastructure home needs to have at least 12 GB of free disk space.

 

    • It is now mandatory to store the Oracle Clusterware files, such as the Oracle Cluster Registry (OCR) and the Voting Disks, on ASM; these files can no longer be placed on any other kind of shared storage.

 

    • A mandatory patch, 21255373, has to be applied to the existing 12.1.0.2 Grid Infrastructure home before the upgrade. In this case a number of prerequisite checks also failed, relating to memory (a minimum of 8 GB RAM is now required on each node), swap size, NTP and resolv.conf. Since this is a test VirtualBox environment those failures can be ignored and the upgrade continued, but the mandatory patch 21255373 cannot be ignored.

 

    • In order to install the patch, we also have to download the OPatch patch 6880880 for Oracle 12.2.0.1.0 (OPatch 12.2.0.1.8).

 

    • When we run opatchauto to apply patch 21255373, we get the error java.text.ParseException: Unparseable date. This happens because the time zone entry AWST (Australian Western Standard Time) has been added to $ORACLE_HOME/inventory/ContentsXML/comps.xml, while opatch uses the Java 1.6 JRE under $ORACLE_HOME/jdk/jre, and Java 1.6 cannot read the AWST entry. The error can be ignored and the patch application continued. After the patch has been applied, however, change every occurrence of the string "AWST" in comps.xml to "WST"; otherwise opatch lsinventory will not report the patch as applied even though it is.

 

    • The upgrade failed at 46% in the Execute Root Scripts phase. I ran crsctl stop crs -f as root on each node, clicked the Retry button, and the upgrade then continued without any errors.

 

    • At the end of the upgrade, the Cluster Verification Utility fails because it checks for an NTP configuration appropriate for an Oracle RAC environment. NTP is not configured on this VirtualBox environment, so the error can be ignored.
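
Pulling the first few points together, the unpack-and-launch sequence looks roughly like this; the Grid home path /u01/app/12.2.0.1/grid and the staging location of the zip file are example values only:

$ mkdir -p /u01/app/12.2.0.1/grid          # new (empty) Grid home
$ cd /u01/app/12.2.0.1/grid
$ unzip -q /stage/linuxx64_12201_grid_home.zip
$ ./gridSetup.sh                           # launches the installer (install or upgrade)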

 

 

Here are some screen shots captured while the 12c Release 2 Grid Infrastructure upgrade was in progress…..

 

[Screenshots gi1 to gi19 of the Grid Infrastructure upgrade steps]

Note:

Change all occurrences of “AWST” to “WST” in comps.xml file

 

[Screenshot gi22]

 

 

Now the opatch lsinventory command will show that the patch 21255373 has been applied.

 

[Screenshots gi23 to gi37 of the remaining Grid Infrastructure upgrade steps]


Oracle Database 12c Release 2 New Feature – Create Data Guard Standby Database Using DBCA


One of the really nice new features in Oracle 12c Release 2 (12.2.0.1) is the ability to create an Oracle Data Guard Standby Database using DBCA (Database Configuration Assistant). This simplifies the process of creating a standby database and automates a number of steps that previously had to be performed manually.

In this example we will see how a 12.2.0.1 Data Guard environment is created via DBCA and then Data Guard Broker (DGMGRL).

The source database is called salesdb and the standby database DB_UNIQUE_NAME will be salesdb_sb.

Primary database host name is host01 and the Standby database host name is host02.

The syntax is:

dbca -createDuplicateDB 
    -gdbName global_database_name 
    -primaryDBConnectionString easy_connect_string_to_primary
    -sid database_system_identifier
    [-createAsStandby 
        [-dbUniqueName db_unique_name_for_standby]]

We will run the command from the standby host host02 as shown below.
 

[oracle@host02 ~]$ dbca -silent -createDuplicateDB -gdbName salesdb -primaryDBConnectionString host01:1521/salesdb -sid salesdb -createAsStandby -dbUniqueName salesdb_sb
Enter SYS user password:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/salesdb_sb/salesdb.log" for further details.

Connect to the Standby Database and verify the role of the database
 
[Screenshot dg1: standby database role verification]
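
The verification itself is simply a query against V$DATABASE on the new standby, which should report PHYSICAL STANDBY as the database role, for example:

SQL> select name, db_unique_name, database_role, open_mode from v$database;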

 

Note that the SPFILE and Password File for the Standby Database have been automatically created

[oracle@host02 dbs]$ ls -l sp*
-rw-r-----. 1 oracle dba 5632 Mar 22 09:40 spfilesalesdb.ora

[oracle@host02 dbs]$ ls -l ora*
-rw-r-----. 1 oracle dba 3584 Mar 17 14:38 orapwsalesdb

 

Add the required entries to the tnsnames.ora file

[Screenshot dg2: tnsnames.ora entries]
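
The entries follow the usual pattern; the hostnames and service names below are taken from this example, but the exact entries shown in the screenshot may differ slightly:

salesdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = salesdb))
  )

salesdb_sb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host02)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = salesdb_sb))
  )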

Continue with the Data Guard Standby Database creation using the Data Guard Broker
 

SQL> alter system set dg_broker_start=true scope=both;

System altered.

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@host01 archivelog]$ dgmgrl
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Mar 17 14:47:27 2017

connect /
Connected to "salesdb"
Connected as SYSDG.
DGMGRL> create configuration 'salesdb_dg'
> as primary database is 'salesdb'
> connect identifier is 'salesdb';
Configuration "salesdb_dg" created with primary database "salesdb"

DGMGRL> add database 'salesdb_sb' as connect identifier is 'salesdb_sb';
Database "salesdb_sb" added
DGMGRL> enable configuration;
Enabled.

 

Create the Standby Redo Log Files on the primary database

 

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u03/app/oradata/salesdb/redo03.log
/u03/app/oradata/salesdb/redo02.log
/u03/app/oradata/salesdb/redo01.log

SQL> select bytes/1048576 from v$log;

BYTES/1048576
-------------
     200
     200
     200


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

 
Create the Standby Redo Log Files on the standby database

 

DGMGRL> connect /
Connected to "salesdb"
Connected as SYSDG.

DGMGRL> edit database 'salesdb_sb' set state='APPLY-OFF';
Succeeded.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1174405120 bytes
Fixed Size          8619984 bytes
Variable Size           436209712 bytes
Database Buffers   721420288 bytes
Redo Buffers              8155136 bytes
Database mounted.

SQL>  alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

SQL> alter database open;

Database altered.

SQL>

 
Verify the Data Guard Configuration
 

DGMGRL> edit database 'salesdb_sb' set state='APPLY-ON';
Succeeded.


DGMGRL> show configuration;

Configuration - salesdb_dg

 Protection Mode: MaxPerformance

 salesdb    - Primary database
   salesdb_sb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 8 seconds ago)

 
Set the property StaticConnectIdentifier to prevent errors during switchover operations
 

Edit database 'salesdb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host01.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';
Edit database 'salesdb_sb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host02.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_sb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';

Edit listener.ora on primary database host and add the lines shown below. Reload the listener.
 

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_DGMGRL)
      (SID_NAME = salesdb)
        )
  )

 
Edit listener.ora on standby database host and add the lines shown below. Reload the listener.
 

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_sb_DGMGRL)
      (SID_NAME = salesdb)
        )
  )

Oracle Database 12c Release 2 New Feature – Application Containers


One of the new multitenancy related features in Oracle 12c Release 2 is Application Containers.

In 12c Release 1, we could have a Container database (CDB) host a number of optional pluggable databases or PDBs. Now in 12.2.0.1, the multitenancy feature has been enhanced further and we can now have not only CDBs and PDBs but also have another component called an Application Container which in essence is a hybrid of a CDB and a PDB.

So now in 12.2.0.1, a CDB can contain (optionally) user created Application Containers and then Application Containers can in turn host one or more PDBs.

For example, an Application Container can contain a number of PDBs which contain individual sales data of different regions, but at the same time can share what are called common objects.

Maybe each region’s PDB has data just for that region, but the table structure is the same regardless of the region. In that case the table definition (or metadata) is stored in the application container accessible to all the PDBs hosted by that application container. If any changes are required to be made for application tables, then that DDL change need only be made once in the central application container and that change will then be visible to all the PDBs hosted by that application container.

Or there are some tables which are common to all the PDBs – some kind of master data maybe. And rather than have to store this common data in each individual PDB (as was the case in 12.1.0.2), we just store it once in a central location which is the application container and then that data is visible to all the hosted PDBs.

In other words, an application container functions as an application specific CDB within a CDB.

Think of a Software as a Service (SaaS) deployment model where we are hosting a number of customers and each customer has its own individual data which needs to be stored securely in a separate database but at the same time we need to share some metadata or data which is common to all the customers.

Let’s have a look a simple example of 12c Release 2 Application Containers at work.

The basic steps are:

  • Create the Application Container
  • Create the Pluggable Databases
  • Install the Application
  • After installing the application, synchronize the pluggable databases with the application container root so that any changes in terms of DDL or DML made by the application are now visible to all hosted pluggable databases
  • Optionally upgrade or deinstall the application

 

Create the Application Container
 

SQL> CREATE PLUGGABLE DATABASE appcon1 AS APPLICATION CONTAINER ADMIN USER appadm IDENTIFIED BY oracle
FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/');  

Pluggable database created.

 
Create the Pluggable Databases which are to be hosted by the Application Container by connecting to the application container root
 

SQL> alter session set container=appcon1;

 

Session altered.

 

SQL> alter pluggable database open;

 

Pluggable database altered.

 

SQL> CREATE PLUGGABLE DATABASE pdbhr1 ADMIN USER pdbhr1_adm IDENTIFIED BY oracle

FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr1/');

 

Pluggable database created.

 

SQL> CREATE PLUGGABLE DATABASE pdbhr2 ADMIN USER pdbhr2_adm IDENTIFIED BY oracle

FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr2/');

 

Pluggable database created.

 

SQL> alter pluggable database all open;

 

Pluggable database altered.

 

Install the application
 
In the first example we will be seeing how some common data is being shared among all the pluggable databases. Note the keyword SHARING=DATA.
 

SQL> alter pluggable database application region_app begin install '1.0';

 

Pluggable database altered.

 

SQL> create user app_owner identified by oracle;

 

User created.

 

SQL> grant connect,resource,unlimited tablespace to app_Owner;

 

Grant succeeded.

 

SQL> create table app_owner.regions

2  sharing=data

3  (region_id number, region_name varchar2(20));

 

Table created.

 

SQL> insert into app_owner.regions

2  values (1,'North');

 

1 row created.

 

SQL> insert into app_owner.regions

2  values (2,'South');

 

1 row created.

 

SQL> commit;

 

Commit complete.

 

SQL> alter pluggable database application region_app end install '1.0';

 

Pluggable database altered.

 

View information about Application Containers via the DBA_APPLICATIONS view

 

SQL> select app_name,app_status from dba_applications;

APP_NAME
--------------------------------------------------------------------------------
APP_STATUS
------------
APP$4BDAAF8836A20F9CE053650AA8C0AF21
NORMAL

REGION_APP
NORMAL

Synchronize the pluggable databases with the application root
 
Note that until this is done, changes made by the application install are not visible to the hosted PDBs.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL> select * from app_owner.regions;
select * from app_owner.regions
                        *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 South

SQL> alter session set container=pdbhr2;

Session altered.

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 South

 

Note that any direct DDL or DML is not permitted in this case
 

SQL> drop table app_owner.regions;
drop table app_owner.regions
                     *
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action


SQL> insert into app_owner.regions values (3,'East');
insert into app_owner.regions values (3,'East')
                      *
ERROR at line 1:
ORA-65097: DML into a data link table is outside an application action

Let us now upgrade the application we just created and create the same application table, but this time with the keyword SHARING=METADATA
 

SQL> alter pluggable database application region_app begin upgrade '1.0' to '1.1';

Pluggable database altered.

SQL> select app_name,app_status from dba_applications;

APP_NAME
--------------------------------------------------------------------------------
APP_STATUS
------------
APP$4BDAAF8836A20F9CE053650AA8C0AF21
NORMAL

REGION_APP
UPGRADING


SQL> drop table app_owner.regions; 

Table dropped.

SQL> create table app_owner.regions
  2  sharing=metadata
  3  (region_id number,region_name varchar2(20));

Table created.

SQL> alter pluggable database application region_app end upgrade;

Pluggable database altered.

 
We can now see that the table definition is the same in both the PDBs, but each PDB can now insert its own individual data in the table.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> desc app_owner.regions
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID					    	NUMBER
 REGION_NAME					VARCHAR2(20)

SQL> insert into app_owner.regions 
  2  values (1,'North');

1 row created.

SQL> insert into app_owner.regions 
  2  values (2,'North-East');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 North-East

SQL> alter session set container=pdbhr2;

Session altered.

SQL>  alter pluggable database application region_app sync;

Pluggable database altered.

SQL> desc app_owner.regions
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID					    NUMBER
 REGION_NAME					    VARCHAR2(20)

SQL> select * from app_owner.regions;

no rows selected

SQL> insert into app_owner.regions 
  2  values (1,'South');

1 row created.

SQL> insert into app_owner.regions 
  2  values (2,'South-East');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 South
	 2 South-East

 
While DML activity is permitted in this case, DDL activity is still not permitted.
 

SQL> drop table app_owner.regions;
drop table app_owner.regions
                     *
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action


SQL> alter table app_owner.regions 
  2  add (region_location varchar2(10));
alter table app_owner.regions
*
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action

 
We will now perform another upgrade of the application, this time using the keyword SHARING=EXTENDED DATA. In this case a portion of the data is common and shared among all the PDBs, while each individual PDB still has the flexibility to store additional data specific to that PDB in the table alongside the common data.
 


SQL> alter session set container=appcon1;

Session altered.

SQL> alter pluggable database application region_app begin upgrade '1.1' to '1.2';

Pluggable database altered.

SQL> drop table app_owner.regions;

Table dropped.

SQL> create table app_owner.regions
  2  sharing=extended data
  3  (region_id number,region_name varchar2(20));

Table created.

SQL> insert into app_owner.regions
  2  values (1,'North');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application region_app end upgrade;

Pluggable database altered.

 
Note that the PDBs share some common data, but each individual PDB can insert its own data.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL>  alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North




SQL> insert into app_owner.regions 
  2  values
  3  (2,'North-West');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 North-West

SQL> alter session set container=pdbhr2;

Session altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 South
	 2 South-East

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North

Oracle Database 12.2 New Feature – Pluggable Database Performance Profiles


In the earlier 12.1.0.2 Oracle database version, we could limit the amount of CPU utilization as well as Parallel Server allocation at the PDB level via Resource Plans.

Now in 12c Release 2, we can not only regulate CPU and Parallelism at the Pluggable database level, but in addition we can also restrict the amount of memory that each PDB hosted by a Container Database (CDB) uses.

Further, we can also limit the amount of I/O operations that each PDB performs, so we now have a far more capable Resource Manager ensuring that no PDB hogs all the CPU or I/O (because of, say, a runaway query) and thereby impacts the other PDBs hosted in the same CDB.

We can now limit the amount of SGA or PGA that an individual PDB can utilize, as well as guarantee certain PDBs a minimum amount of both SGA and PGA memory.

For example we can now issue SQL statements like these while connected to the individual PDB.

 

SQL> ALTER SYSTEM SET SGA_TARGET = 500M SCOPE = BOTH;

SQL> ALTER SYSTEM SET SGA_MIN_SIZE = 300M SCOPE = BOTH;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 200M SCOPE = BOTH;

SQL> ALTER SYSTEM SET MAX_IOPS = 10000 SCOPE = BOTH;

 
Another 12c Release 2 New Feature related to Multitenancy is Performance Profiles.

With Performance Profiles we can manage resources for large numbers of PDBs by specifying Resource Manager directives for profiles instead of for each individual PDB.

A profile is then allocated to a PDB via the initialization parameter DB_PERFORMANCE_PROFILE.

Let us look at a worked example of Performance Profiles.

In this example we have three PDBs (PDB1, PDB2 and PDB3) hosted in the container database CDB1. The PDB1 pluggable database hosts some mission-critical applications and we need to ensure that PDB1 gets a higher share of memory, I/O and CPU resources than PDB2 and PDB3.

So we will be enforcing this resource allocation via two sets of Performance Profiles – we call those TIER1 and TIER2.

Here are the steps:

 

Create a Pending Area

 

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA ();

PL/SQL procedure successfully completed.

 

 

Create a CDB Resource Plan
 

SQL> BEGIN

 DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(

   plan   => 'profile_plan',

   comment => 'Performance Profile Plan allocating highest share of resources to PDB1');

END;

/ 

PL/SQL procedure successfully completed.

 
Create the CDB resource plan directives for the PDBs

Tier 1 performance profile ensures at least 60% (3 shares) of available CPU and parallel server resources and no upper limit on CPU utilization or parallel server execution. In addition it ensures a minimum allocation of at least 50% of available memory.

 

SQL> BEGIN

DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(

plan                 => 'profile_plan',

profile              => 'Tier1',

shares               => 3,

memory_min           => 50);

END;

/

PL/SQL procedure successfully completed.

 

Tier 2 performance profile is more restrictive in the sense that it has fewer shares as compared to Tier 1 and limits the amount of CPU/Parallel server usage to 40% as well as limits the amount of memory usage at the PDB level to a maximum of 25% of available memory.

 

SQL> BEGIN

 DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(

   plan                 => 'profile_plan',

   profile              => 'Tier2',

   shares               => 2,

   utilization_limit    => 40,

   memory_limit          => 25);

END;

/   

PL/SQL procedure successfully completed.

 

Validate and Submit the Pending Area

 

SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

PL/SQL procedure successfully completed.


 
Allocate Performance Profiles to PDBs

 

TIER1 Performance Profile is allocated to PDB1 and TIER2 Performance Profile is allocated to PDB2 and PDB3.

 

SQL> alter session set container=pdb1;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER1' scope=spfile;

System altered.

SQL> alter session set container=pdb2;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

SQL> alter session set container=pdb3;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

 

Set the Resource Plan at the CDB level

 

SQL> conn / as sysdba

Connected.

SQL> alter system set resource_manager_plan='PROFILE_PLAN' scope=both;

System altered.

 

Restart the PDBs so that the Performance Profiles take effect

 

SQL> alter pluggable database all close immediate;

Pluggable database altered.

SQL> alter pluggable database all open;

Pluggable database altered.


 
Monitor memory utilization at PDB level

 

The V$RSRCPDBMETRIC view enables us to track the amount of memory used by each PDB.
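
A query along the following lines gives the per-PDB memory figures; the SGA_BYTES and PGA_BYTES columns are the 12.2 memory columns of V$RSRCPDBMETRIC as I recall them, so verify the exact column names against your release:

SQL> select p.name,
  2         round(r.sga_bytes/1024/1024) sga_mb,
  3         round(r.pga_bytes/1024/1024) pga_mb
  4  from   v$rsrcpdbmetric r, v$pdbs p
  5  where  r.con_id = p.con_id
  6  order  by p.name;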

We can see that the PDB1 belonging to the profile TIER1 has almost double the memory allocated to the other two PDBs in profile TIER2.

Oracle 12.2 has a lot of new exciting features. Learn all about these at a forthcoming online training session. Contact prosolutions@gavinsoorma.com to register interest!

Oracle Database 12.2 New Feature – PDB Lockdown Profiles


In an earlier post I had mentioned one of the new features in Oracle Database 12.2 was the ability to set SGA and PGA memory related parameters even at the individual PDB level. So it enables us to further limit or define the resources which a particular PDB can use and enable a more efficient management of resources in a multitenant environment.

Going further, in Oracle 12c Release 2 we can now limit the operations which can be performed within a particular PDB as well as restrict the features which can be used or enabled, all at the individual PDB level. We can also limit the network connectivity of a PDB by enabling or disabling the use of network-related packages such as UTL_SMTP, UTL_HTTP and UTL_TCP at the PDB level.

This is done via the new 12.2 feature called Lockdown Profiles.

We create lockdown profiles via the CREATE LOCKDOWN PROFILE statement while connected to the root CDB and after the lockdown profile has been created, we add the required restrictions or limits which we would like to enforce via the ALTER LOCKDOWN PROFILE statement.

To assign the lockdown profile to a particular PDB, we use the PDB_LOCKDOWN initialization parameter which will contain the name of the lockdown profile we have earlier created.

If we set the PDB_LOCKDOWN parameter at the CDB level, it will apply to all the PDB’s in the CDB. We can also set the PDB_LOCKDOWN parameter at the PDB level and we can maybe have different PDB_LOCKDOWN values for different PDB’s as we will see in the example below.

Let us have a look at an example of PDB Lockdown Profiles at work.

In our CDB, we have two pluggable databases PDB1 and PDB2. We want to limit some kind of operations depending on the PDB involved.

Our requirements are the following:

  • We want to ensure that in PDB1 the value for SGA_TARGET cannot be altered – so even a privileged user cannot allocate additional memory to the PDB. However if memory is available, then PGA allocation can be altered.
  • PDB1 can be shut down only when connected to the root container, not from within the pluggable database itself
  • The Partitioning feature is not available in PDB2

 

Create the Lockdown Profiles
 

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create lockdown profile pdb1_profile;

Lockdown Profile created.

SQL> create lockdown profile pdb2_profile;

Lockdown Profile created.

 
Alter Lockdown Profile pdb1_profile
 

SQL> alter lockdown profile pdb1_profile
     disable statement =('ALTER SYSTEM') 
     clause=('SET')
     OPTION = ('SGA_TARGET');

Lockdown Profile altered.



SQL> alter lockdown profile pdb1_profile 
     disable statement =('ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE');

Lockdown Profile altered.

 
Alter Lockdown Profile pdb2_profile
 

SQL> alter lockdown profile pdb2_profile 
     DISABLE OPTION = ('PARTITIONING');

Lockdown Profile altered.
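
The rules recorded so far can be reviewed from the root container via the DBA_LOCKDOWN_PROFILES view; the column list below is from memory, so adjust it if your release differs:

SQL> select profile_name, rule_type, rule, clause, status
  2  from   dba_lockdown_profiles
  3  where  profile_name in ('PDB1_PROFILE','PDB2_PROFILE');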

 
Enable the Lockdown Profiles for both PDB1 and PDB2 pluggable databases
 

SQL> conn / as sysdba
Connected.

SQL> alter session set container=PDB1;

Session altered.

SQL> alter system set PDB_LOCKDOWN='PDB1_PROFILE';

System altered.

SQL> alter session set container=PDB2;

Session altered.

SQL> alter system set PDB_LOCKDOWN='PDB2_PROFILE';

System altered.

 

Connect to PDB1 and try and increase the value of the parameter SGA_TARGET and PGA_AGGREGATE_TARGET

 
Note that we cannot alter SGA_TARGET because it is prevented by the lockdown profile in place, but we can alter PGA_AGGREGATE_TARGET because the lockdown profile clause only applies to the ALTER SYSTEM SET SGA_TARGET command.
 

SQL> alter session set container=PDB1;

Session altered.

SQL> alter system set sga_target=800m;
alter system set sga_target=800m
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set pga_aggregate_target=200m;

System altered.

 
Connect to PDB2 and try and create a partitioned table
 

SQL> CREATE TABLE testme
     (id NUMBER,
      name VARCHAR2 (60))
   PARTITION BY HASH (id)
   PARTITIONS 4    ;
CREATE TABLE testme
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

 
Connect to PDB1 and try to shutdown the pluggable database
 
Note that while we cannot shutdown PDB1, we are able to shutdown PDB2.
 

SQL> alter session set container=pdb1;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> alter session set container=pdb2;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.

 

Upgrade Grid Infrastructure 11g (11.2.0.3) to 12c (12.1.0.2)


I have recently tested the upgrade to RAC Grid Infrastructure 12.1.0.2 on my test RAC Oracle Virtualbox Linux 6.5 x86-64 environment.

The upgrade went very smoothly but we have to take a few things into account – some things have changed in 12.1.0.2 as compared to Grid Infrastructure 12.1.0.1.

The most notable change regards the Grid Infrastructure Management Repository (GIMR).

In 12.1.0.1 we had the option of installing the GIMR database (MGMTDB). In 12.1.0.2 it is mandatory, and the MGMTDB database is automatically created as part of the upgrade or initial installation of 12.1.0.2 Grid Infrastructure.

The GIMR primarily stores historical Cluster Health Monitor metric data. It runs as a container database on a single node of the RAC cluster.

The problem I found is that the datafiles for the MGMTDB database are created in the same ASM disk group that holds the OCR and Voting Disks, and there is a prerequisite of at least 4 GB of free space in that disk group; otherwise the error INS-43100 is returned, as shown in the figure below.

I had to cancel the upgrade process and add another disk to the +OCR ASM disk group to ensure that at least 4 GB of free space was available and after that the upgrade process went through very smoothly.

 

 

 

On both the nodes of the RAC cluster we will create the directory structure for the 12.1.0.2 Grid Infrastructure environment as this is an out-of-place upgrade.

It is also very important to check the health of the RAC cluster before the upgrade (via the crsctl check cluster -all command) and to run the runcluvfy.sh script to verify that all the prerequisites for the 12c GI upgrade are in place.

[oracle@rac1 bin]$ crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

[oracle@rac1 bin]$ crsctl query crs softwareversion rac2
Oracle Clusterware version on node [rac2] is [11.2.0.3.0]

[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u02/app/12.1.0/grid -dest_version 12.1.0.2.0

 

 

 

 

[oracle@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@rac1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@rac1 ~]$ ps -ef |grep pmon
oracle 1278 1 0 14:53 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 16354 1 0 14:22 ? 00:00:00 asm_pmon_+ASM1
oracle 17217 1 0 14:23 ? 00:00:00 ora_pmon_orcl1

[root@rac1 bin]# ./oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.873212089

[root@rac1 bin]# ./srvctl status mgmtdb -verbose
Database is enabled
Instance -MGMTDB is running on node rac1. Instance status: Open.

[root@rac1 bin]# ./srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: +OCR/_MGMTDB/PARAMETERFILE/spfile.268.873211787
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: rac_cluster
PDB service: rac_cluster
Cluster name: rac-cluster
Database instance: -MGMTDB
