Sunday 16 September 2012

Using RMAN Incremental Backups to Roll Forward a Physical Standby Database in 10 Steps



There are several scenarios in an Oracle Data Guard configuration that can render a physical standby database unusable or invalid. For example, an extended network failure between the primary and standby machines can cause the standby database to fall significantly behind the primary database. If the archived redo log files needed to synchronize the physical standby database with the primary database are no longer available, log gap resolution will be unable to resolve the gap in the redo stream and the standby cannot catch up. In that case, an RMAN incremental backup taken from the standby's current SCN can be used to roll the standby forward, as in the following 10 steps.

1.       On the physical standby database, stop the managed recovery process (MRP):
   
    SQL> alter database recover managed standby database cancel;

2.      On the physical standby database, find the SCN which will be used for the incremental backup at the primary database, as the backup must be created from this SCN forward.

         SQL> select current_scn from v$database;
 
                 CURRENT_SCN
                 -----------
                                      3744425

3.       Record the SCN returned from the above query for the next step.

4.       Using RMAN, connect to the primary database and create an incremental backup from the SCN derived in the previous step.
         RMAN> backup incremental from scn 3744425 database
               format '/u01/bkp/ForStandby_%U.inc' tag 'FORSTANDBY';
 
5.      Create a new standby controlfile on the primary (production) database.

     SQL> alter database create standby controlfile as '/u01/bkp/CONTROL01.CTL';

6.      Ship all the incremental backup pieces and the new standby controlfile to the DR site. It is assumed here that the '/u01/bkp/' directory has also been created at the DR site to hold the incremental backups and the new standby controlfile.

7.       After copying the incremental backup pieces to the physical standby machine, connect to the physical standby database using RMAN and catalog those incremental backup pieces. Then recover the standby database with the cataloged incremental backup pieces.

       RMAN> catalog start with '/u01/bkp/';

       RMAN> recover database noredo;

8.      Shut down the standby database and rename all the old standby controlfiles. Then replace the old standby controlfiles with the new one brought over from the production site.

RMAN> shutdown;
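
As an alternative to renaming and copying the controlfiles by hand, the shipped standby controlfile can also be restored through RMAN while the standby instance is started in NOMOUNT (a sketch only, using the path from step 6; in that case mount the database with ALTER DATABASE MOUNT instead of step 9's STARTUP MOUNT):

RMAN> startup nomount;

RMAN> restore standby controlfile from '/u01/bkp/CONTROL01.CTL';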

9.      Now we can start the physical standby database.

           RMAN> startup mount;

10.  Put the physical standby database back in managed recovery mode and perform a few manual log switches at the production site to confirm that archives are being transferred to the recovered standby and applied there. Wait and test again.

   SQL>alter database recover managed standby database disconnect from session;

   SQL>select sequence#, archived, applied from v$archived_log;
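
For a quick check, switch a log on the primary and then confirm on the standby that the latest sequence has been applied (a minimal sketch):

   On the primary:

   SQL> alter system switch logfile;

   On the standby:

   SQL> select max(sequence#) from v$archived_log where applied='YES';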



Wednesday 11 April 2012

EXTENT MANAGEMENT and SEGMENT SPACE MANAGEMENT


People often confuse segment space management with extent management. The two terms are quite different and can be understood as follows.

EXTENT MANAGEMENT - Whenever a segment is full and needs extra space, Oracle automatically allocates that space in the form of extents (collections of contiguous blocks). How a new extent is allocated and what size it gets is decided by extent management according to the storage parameters. You can define storage parameters (INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE, ...) for a dictionary managed tablespace (DMTS) but not for a locally managed tablespace (LMTS). In an LMTS we have the options AUTOALLOCATE and UNIFORM. With AUTOALLOCATE, Oracle takes default values for the storage parameters, such as 64 KB for INITIAL and MINEXTENTS 1, and sizes further extents itself. With UNIFORM, every newly allocated extent always gets the same uniform size (1 MB in the case of a temporary tablespace, which always uses UNIFORM extent sizes).

SEGMENT SPACE MANAGEMENT - SSM is related to the management of free blocks inside segments and can be AUTO (recommended) or MANUAL for an LMTS, and only MANUAL for a DMTS. With MANUAL, Oracle maintains the list of free blocks as a linked list (freelist) and has to scan blocks (which is costly) to keep this list up to date. With AUTO, Oracle maintains a bitmap to record free-block information: if a bit is 1 the block is not free, and if it is 0 the block is free. The default is AUTO. In this way AUTO is more efficient than the MANUAL method of segment space management. When SSM is AUTO we call it ASSM (automatic segment space management).
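
To see both clauses together, here is a minimal sketch of creating a locally managed tablespace with uniform extents and ASSM (the datafile path and sizes are made up):

SQL> create tablespace demo_lmt
     datafile '/u01/oradata/demo_lmt01.dbf' size 100m
     extent management local uniform size 1m
     segment space management auto;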

Friday 23 March 2012

ORACLE STREAMS REPLICATION AT SCHEMA LEVEL

Oracle Streams can be used to share data and messages between two Oracle databases at different levels. Unlike an Oracle Data Guard logical standby database, with Oracle Streams the two (or more) databases are independent of each other and are alike only at the replication level. The concept of Oracle Streams is similar to a logical standby, but Streams is more popular. Replication can be configured, for example, at the following levels:

1. Table level (one or more tables are replicated between two Oracle databases)
2. Schema level (one or more schemas are replicated between two Oracle databases)
3. Tablespace level (one or more tablespaces are replicated between two Oracle databases)
4. Database level (changes in one database are replicated in another database)

Oracle Streams is a flow of information from one database to another: information is captured at one database, staged, and then applied at the other database. Oracle uses the Advanced Queuing concept to capture, propagate and apply the data between databases.


An Oracle Streams setup can be
1. Homogeneous (between two Oracle databases)
2. Heterogeneous (between an Oracle and a non-Oracle database)
 
In this post I have replicated a schema called TEST1 between two databases:

1. Source database (SRCDB) on server DBA1
2. Target database (TGTDB) on server DBA2

On both the source database and the target database, do the following steps.

--create a tablespace dedicated to Streams

create tablespace strts datafile '/disk1/oradata/srcdir/strts.dbf' size 100m;

-- create stream admin user
create user stradm identified by stradm default tablespace strts;

-- grant dba and streams related privs to streams admin user

grant dba to stradm;
SYS>>exec dbms_streams_auth.grant_admin_privilege('STRADM');

At source and target databases

STRADM>>alter database add supplemental log data;

Database altered.


listener at source database

srclist =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dba3)(PORT = 7546))
      )
    )
  )

SID_LIST_srclist =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = src)
      (ORACLE_HOME = /oraeng/app/oracle/product/11.2.0)
    )
  )

listener at target side

tgtlist =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dba4)(PORT = 7589))
      )
    )
  )

SID_LIST_tgtlist=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = tgt)
      (ORACLE_HOME = /oraeng/app/oracle/product/11.2.0)
    )
  )

                                                         
tns at source

to_tgtlist =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST=dba4)(PORT=7589))
    )
    (CONNECT_DATA =
      (SID = tgt)
    )
  )

tns  at target side

to_srclist =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST=dba3)(PORT=7546))
    )
    (CONNECT_DATA =
      (SID = src)
    )
  )

streams parameter at source

###########source side streams parameters###################
db_unique_name=srcdb
log_archive_dest_1='location=/disk1/oradata/srcdir/arch valid_for=(online_logfiles,primary_role) mandatory'
log_archive_config='send,dg_config=(srcdb,tgtdb)'
log_archive_dest_2='service=to_tgtlist lgwr async noregister valid_for=(online_logfiles,primary_role) db_unique_name=tgtdb'
aq_tm_processes=1
shared_pool_size=400m
job_queue_interval=1
job_queue_processes=20
log_archive_dest_state_1=enable
log_archive_dest_state_2=defer
processes=150
statistics_level=typical
timed_statistics=true
streams_pool_size=500m
db_domain=DHARAM.com
log_archive_max_processes=10
global_names=true


target side parameter

############target streams parameters################
db_unique_name=tgtdb
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(online_logfiles,primary_role)'
log_archive_config='receive,dg_config=(srcdb,tgtdb)'
log_archive_dest_2='location=/disk1/oradata/tgtdir/arch valid_for=(standby_logfiles,primary_role)'
aq_tm_processes=1
shared_pool_size=400m
job_queue_interval=1
job_queue_processes=20
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
processes=150
statistics_level=typical
timed_statistics=true
streams_pool_size=500m
db_domain=DHARAM.com
log_archive_max_processes=10
global_names=true
db_recovery_file_dest=/disk1/oradata/tgtdir/flash
db_recovery_file_dest_size=500m

on the source side, create a password file and send it to the target side

[dharam@dba3 dbs]$ orapwd file=orapw$ORACLE_SID password=sys

[dharma@dba3 dbs]$ ls orapwsrc
orapwsrc

on target database

rename the password file according to the target's instance name

[dharma@dba4 dbs]$ mv orapwsrc orapwtgt

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
TGTDB

STRADM>>alter database rename global_name to tgtdb.DHARAM.com;


Database altered.

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
TGTDB.DHARAM.COM

on the source side, create a database link which points to the target database.

STRADM>>create database link TGTDB.DHARAM.COM connect to stradm identified by stradm using 'to_tgtlist';

STRADM>>select * from dual@TGTDB.DHARAM.COM;

D
-
X


STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
SRCDB

STRADM>>alter database rename global_name to srcdb.DHARAM.com;

Database altered.

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
SRCDB.DHARAM.COM

on target database

STRADM>>create database link SRCDB.DHARAM.COM connect to stradm identified by stradm using 'to_srclist';

Database link created.

STRADM>>select * from dual@SRCDB.DHARAM.COM;

D
-
X

on source

[dharam@dba3 ~]$ mkdir /disk1/oradata/srcdir/strdir

STRADM>>create directory strdir as '/disk1/oradata/srcdir/strdir';

Directory created.

on target

[dharma@dba4 ~]$ mkdir /disk1/oradata/tgtdir/strdir

STRADM>>create directory strdir as '/disk1/oradata/tgtdir/strdir';

Directory created.

STRADM>>alter database add standby logfile ('/disk1/oradata/tgtdir/str1.log','/disk1/oradata/tgtdir/str2.log','/disk1/oradata/tgtdir/str3.log') size 10m;

STRADM>>select group#,sequence#,status from v$standby_log;

    GROUP#  SEQUENCE# STATUS
---------- ---------- ----------
         3          0 UNASSIGNED


in the parameter file of the source, enable the second archive destination:

log_archive_dest_state_2=enable

SYS>>startup force
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area  966729728 bytes
Fixed Size                  1340552 bytes
Variable Size             943721336 bytes
Database Buffers           16777216 bytes
Redo Buffers                4890624 bytes
Database mounted.
Database opened.

STRADM>>alter system switch logfile;

System altered.

STRADM>>/

System altered.

STRADM>>select dest_id,status,error from v$archive_dest_status;

SYS>>grant connect,resource to test1 identified by test1;

Grant succeeded.

TEST1>>@?/sqlplus/demo/demobld
Building demonstration tables.  Please wait.
Demonstration table build is complete.

on the target side, run the following

[dharma@dba4 ~]$ vi ms.sql

 begin
      dbms_streams_adm.maintain_schemas(
      schema_names=>'test1',
      source_database=>'SRCDB.DHARAM.COM',
      source_directory_object=>'strdir',
      destination_database=>'TGTDB.DHARAM.COM',
      destination_directory_object=>'strdir',
      capture_queue_name=>'DS_STREAMS_QUEUE',
      apply_queue_name=>'DS_STREAMS_QUEUE',
      bi_directional=>false,
     include_ddl=>false);
    end;
/

[dharma@dba4 ~]$ sqlplus stradm/stradm

SQL*Plus: Release 11.2.0.1.0 Production on Thu Mar 22 14:39:33 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

STRADM>>@ms.sql

PL/SQL procedure successfully completed.
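
Once maintain_schemas completes, the capture and apply components it created can be verified (a quick sketch; in this downstream-capture style setup both should show up at the target):

STRADM>>select capture_name, status from dba_capture;

STRADM>>select apply_name, status from dba_apply;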


STRADM>>select username from dba_users;

USERNAME
----------
OUTLN
SYS
SYSTEM
TEST1
STRADM
APPQOSSYS
DBSNMP
DIP
ORACLE_OCM

9 rows selected.

TEST1>>select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
BONUS                          TABLE
DEPT                           TABLE
DUMMY                          TABLE
EMP                            TABLE
SALGRADE                       TABLE


Checking for DML replication

at source side

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        14

at target side

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        14

at source side

TEST1>>insert into emp select * from emp;

14 rows created.

TEST1>>commit;

Commit complete.

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        28

at target side

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        28







Cheers!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Oracle ASM Disk Partitioning

Oracle ASM (Automatic Storage Management) is a volume manager solution provided by Oracle Corporation.
In Oracle ASM we create ASM disk groups out of individual disks. For protection, data is mirrored across failure groups inside a disk group, using NORMAL or HIGH redundancy.

Before creating an ASM instance we should prepare the disks to be used as part of an ASM disk group.
The following is one method of preparing OS disks so that they can be used in ASM disk groups.

In the following steps, logical partitions are carved out of a single disk, /dev/sdc. All the created disks are logical only. In this demonstration I have created partitions of 1 GB each.

A NORMAL redundancy disk group needs at least 2 failure groups and a HIGH redundancy disk group needs at least 3 failure groups.

[root@dba1 ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): d
No partition is defined yet!

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1-1305, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (124-1305, default 124):
Using default value 124
Last cylinder or +size or +sizeM or +sizeK (124-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (247-1305, default 247):
Using default value 247
Last cylinder or +size or +sizeM or +sizeK (247-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (370-1305, default 370):
Using default value 370
Last cylinder or +size or +sizeM or +sizeK (370-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (493-1305, default 493):
Using default value 493
Last cylinder or +size or +sizeM or +sizeK (493-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (616-1305, default 616):
Using default value 616
Last cylinder or +size or +sizeM or +sizeK (616-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (739-1305, default 739):
Using default value 739
Last cylinder or +size or +sizeM or +sizeK (739-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (862-1305, default 862): 
Using default value 862
Last cylinder or +size or +sizeM or +sizeK (862-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (985-1305, default 985):
Using default value 985
Last cylinder or +size or +sizeM or +sizeK (985-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1108-1305, default 1108):
Using default value 1108
Last cylinder or +size or +sizeM or +sizeK (1108-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1231-1305, default 1231):
Using default value 1231
Last cylinder or +size or +sizeM or +sizeK (1231-1305, default 1305): +1g
Value out of range.
Last cylinder or +size or +sizeM or +sizeK (1231-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

After creating the above logical partitions, format them so that any existing data is erased from the disks. After doing this, all the partitions become candidates and are eligible for use in failure groups.

[root@dba1 ~]# mkfs.vfat /dev/sdc5
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc6
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc7
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc8
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc9
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc10
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc11
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc12
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc13
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc14
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc15
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc16
mkfs.vfat 2.11 (12 Mar 2005)
/dev/sdc16: No such file or directory
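
With the partitions prepared, an ASM disk group with NORMAL redundancy and two failure groups could then be created from the ASM instance along these lines (a sketch only: the disk group and failure group names are made up, and the ASM_DISKSTRING must cover /dev/sdc*):

SQL> create diskgroup DATA normal redundancy
     failgroup FG1 disk '/dev/sdc5', '/dev/sdc6', '/dev/sdc7'
     failgroup FG2 disk '/dev/sdc8', '/dev/sdc9', '/dev/sdc10';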



Saturday 17 March 2012

Generate tablespace usage trend / growth in hourly manner

A common task of a DBA is to monitor space allocation in tablespaces as they fill and to add more datafiles to accommodate new data. So how can a DBA estimate what size a new datafile should be for an entire day or week? If we have a script that shows the trend of how tablespaces fill up, it becomes easy to add a datafile of sufficient size.

When we run this script, it asks for the instance number. Give 1 if your database is a single-instance database; in the case of RAC, give the instance number. After the instance number it asks for the ID, which is nothing but the TS# in the v$tablespace view. In general the tablespace number for the SYSTEM tablespace is 0, for SYSAUX it is 1, for UNDO it is 2, and so on.

The main idea behind this script is the history view dba_hist_tbspc_space_usage, which keeps track of tablespace usage over time. To make the script more useful and specific, v$tablespace and dba_hist_snapshot are also joined in, so that you can view the space usage statistics of a particular tablespace over time.



SYS> SELECT DISTINCT DHSS.SNAP_ID, VTS.NAME,
            TO_CHAR(DHSS.END_INTERVAL_TIME, 'DD-MM HH:MI') AS SNAP_TIME,
            ROUND((DHTS.TABLESPACE_USEDSIZE*8192)/1024/1024)/&&max_instance_num AS USED_MB,
            ROUND((DHTS.TABLESPACE_SIZE*8192)/1024/1024)/&&max_instance_num AS SIZE_MB
     FROM   DBA_HIST_TBSPC_SPACE_USAGE DHTS, V$TABLESPACE VTS, DBA_HIST_SNAPSHOT DHSS
     WHERE  VTS.TS# = DHTS.TABLESPACE_ID
     AND    DHTS.SNAP_ID = DHSS.SNAP_ID
     AND    DHSS.INSTANCE_NUMBER = 1
     AND    TABLESPACE_ID = &id
     ORDER  BY 1;


  SNAP_ID NAME                           SNAP_TIME      USED_MB    SIZE_MB
--------- ------------------------------ ----------- ---------- ----------
        1 USERS                          11-03 10:07         12         13
        2 USERS                          11-03 11:30         12         13
        3 USERS                          12-03 09:45         12         13
        4 USERS                          12-03 10:31         12         13
        5 USERS                          14-03 06:48         12         13
        6 USERS                          14-03 07:30         12         13
        7 USERS                          14-03 08:30         12         13
        8 USERS                          14-03 09:31         12         14
        9 USERS                          14-03 10:30         12         14
       10 USERS                          14-03 11:30         12         14
       11 USERS                          15-03 12:30         12         14
       12 USERS                          15-03 09:09         12         14
       13 USERS                          15-03 10:30         12         14
       14 USERS                          15-03 08:23         12         14
       15 USERS                          15-03 09:30         12         14
       16 USERS                          15-03 10:30         12         14
       17 USERS                          15-03 11:30         12         14
       18 USERS                          16-03 12:30         12         14
       19 USERS                          16-03 09:02         12         14
       20 USERS                          16-03 10:30         12         14
       21 USERS                          16-03 06:15         12         14
       22 USERS                          16-03 07:30         12         14
       23 USERS                          16-03 08:30         12         14
       24 USERS                          16-03 09:30         12         14
       25 USERS                          16-03 10:30         12         14
       26 USERS                          17-03 08:55         12         14
       27 USERS                          17-03 09:30         12         14
       28 USERS                          17-03 10:30         12         14
       29 USERS                          17-03 06:13         12         14
       30 USERS                          17-03 07:30         12         14

30 rows selected.
 

Friday 16 March 2012

How to know percentage of buffers occupied by a particular segment in buffer cache

If you want to know how much of the buffer cache area a particular segment is occupying, the following SQL statements can help you.

Step 1. Find out which segments are configured to use the default cache. You can query the user_tables view like the following.

DHARAM>>select table_name,cache,buffer_pool from user_tables;

TABLE_NAME                     CACHE BUFFER_POOL
------------------------------ ----- -----------
SALGRADE                       N     DEFAULT
BONUS                          N     DEFAULT
EMP                            N     DEFAULT
DEPT                           N     DEFAULT


Step 2. Now find out the data object id so that we can uniquely identify the database object of interest. Here I want to know the object id of the EMP table. This step is important because more than one segment may have the name EMP, but each will always have a unique object id.

DHARAM>>select data_object_id,object_type from user_objects where object_name=upper('emp');

DATA_OBJECT_ID OBJECT_TYPE
-------------- -------------------
         69827 TABLE

Step 3. To find out how many buffers are occupied by the segment, query as follows. You can use the V$BH view, which keeps track of the buffers occupied by segments.

DHARAM>>select count(*) from v$bh where objd=69827;

COUNT(*)
----------
         6

Step 4. Now issue the following statement to know the number of buffers in each buffer pool.

SYS>>select name,block_size,sum(buffers) from v$buffer_pool group by name,block_
size having sum(buffers)<>0;

NAME                 BLOCK_SIZE SUM(BUFFERS)
-------------------- ---------- ------------
DEFAULT                    8192          996
KEEP                       8192         1494
RECYCLE                    8192         6474

Here the default cache has 996 buffers, out of which 6 belong to the EMP segment. You can use the following statement to calculate the percentage of buffers in the default cache that belong to the EMP segment.

SYS>>select (6/996)*100 "%of_buffer_in_buffer_cache" from dual;

%of_buffer_in_buffer_cache
--------------------------
                .602409639
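
The same calculation can also be done in a single statement (a sketch reusing the object id found in step 2):

SYS>>select round((select count(*) from v$bh where objd = 69827) /
                  (select sum(buffers) from v$buffer_pool where name = 'DEFAULT') * 100, 2)
            as pct_of_default_cache
     from dual;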


Wednesday 7 March 2012

UNDO TABLESPACE AND PENDING OFFLINE STATUS

The undo tablespace concept was introduced in Oracle 9i; an undo tablespace stores the before image of modified data in the database. Undo data provides features like

1.Rollback of transactions,
2.Read consistency and
3.Flashback features.

Before the undo tablespace concept we used rollback segments, which provided rollback and read consistency. Those rollback segments could not be used for the flashback features. They were also more complex to use: we had to create them, give them user-defined names and sizes, and bring them online manually.

With an undo tablespace everything can be automatic. When we create an undo tablespace, Oracle by default creates 10 undo segments and brings them online.

PENDING OFFLINE UNDO SEGMENTS 
At a time we can have more than one undo tablespace in the database, but only one will be active (the default) for the entire database. All undo segments of the active undo tablespace will be online, and all undo segments of the non-active undo tablespaces will be offline. When we start a transaction, it is assigned to one of the undo segments that has expired undo extents. Suppose we start a transaction and it is using the 7th undo segment of the undo tablespace UNDOTBS. While this transaction is still running, we switch the database to a new undo tablespace, UNDO1. At this point all the undo segments of UNDOTBS become offline except the one holding the current transaction, in our case the 7th undo segment, which goes into PENDING OFFLINE status. All the undo segments of UNDO1 then come online.

SYS>>create undo tablespace undo1 datafile 'h:\oradata\ibm\undo1.dbf' size 5m;

Tablespace created.

SYS>>select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS
TEMP
USERS
UNDO1

6 rows selected.

SYS>>sho parameter undo_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS

SYS>>select usn,status from v$rollstat;

       USN STATUS
---------- ---------------
         0 ONLINE
         1 ONLINE
         2 ONLINE
         3 ONLINE
         4 ONLINE
         5 ONLINE
         6 ONLINE
         7 ONLINE
         8 ONLINE
         9 ONLINE
        10 ONLINE

11 rows selected.


From some other session

U1>>insert into emp select * from emp;

14 rows created.

From SYS session

SYS>>alter system set undo_tablespace=undo1;

System altered.

SYS>>sho parameter undo_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDO1

SYS>>select usn,status from v$rollstat;

       USN STATUS
---------- ---------------
         0 ONLINE
         7 PENDING OFFLINE
        11 ONLINE
        12 ONLINE
        13 ONLINE
        14 ONLINE
        15 ONLINE
        16 ONLINE
        17 ONLINE
        18 ONLINE
        19 ONLINE

       USN STATUS
---------- ---------------
        20 ONLINE

12 rows selected.
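
To confirm which transaction is keeping undo segment 7 in PENDING OFFLINE status, the undo segment number used by the active transaction can be checked (a small sketch):

SYS>>select xidusn, xidslot, xidsqn, status from v$transaction;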





Sunday 4 March 2012

Difference among INSTANCE NAME , DATABASE NAME and DB_UNIQUE_NAME

INSTANCE NAME
The SID and the instance name are the same thing. When we start an Oracle instance, the SGA is allocated and the background processes are started. This instance is the gateway used to access the database, which is stored in the form of datafiles on disk; the name we give to this phenomenon is the instance name. Starting an instance is nothing but invoking the Oracle software for your own use. Let me give you an example: suppose you want to watch a movie. You invoke Windows Media Player first and then access the movie file with the player. Even if you do not invoke the player and directly click the movie file, the player is still invoked first, and only then can you watch the movie. Invoking Windows Media Player is like invoking the instance (the Oracle software), and watching the movie is like accessing the database. The movie is stored in the form of a movie file, and the database is stored in the form of datafiles.

DB_NAME
db_name is the name of the database that is stored in the form of OS-level datafiles. You use the invoked instance in order to access this database. Hence the SID (instance name) and the database name can be different, and the instance name for the database can be changed at any point in time, provided a database shutdown is acceptable. Whenever you want to change the instance name of your database, export a different SID and start the instance with that name, making sure the name is included in the name of your parameter file.

DB_UNIQUE_NAME
db_unique_name is just one more name (a unique one), alongside the db_name of your database, used when the database participates in a configuration with more than one database. The db_unique_name uniquely identifies the database among many databases in a network. Many databases can have the same db_name but cannot have the same db_unique_name. For example, in a Data Guard environment the primary and the standby will have the same db_name but different db_unique_name values, so that each database can be uniquely identified while being registered in RMAN as a target or while being added to a Data Guard broker configuration.
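
All three names can be checked in a running database (a quick sketch):

SQL> select instance_name from v$instance;

SQL> select name, db_unique_name from v$database;

SQL> show parameter db_unique_name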

Wednesday 29 February 2012

ORACLE DATABASE BLOCK

An Oracle database block is the smallest unit of storage in which Oracle can store data.

Different OS platforms vary in their OS block size: Linux, for example, has an OS block size of 1 KB, while Solaris has an OS block size of 512 bytes. But Oracle does not take the OS block size as the standard block size for the database; it uses a multiple of the OS block size to build one Oracle block. For example, in Oracle 11g, 8 Linux OS blocks are used to create one Oracle block.

Oracle creates bigger blocks than the OS so that it can perform I/O operations more efficiently. Suppose the multiblock read count is set to 8; then in one fetch Oracle retrieves 8 blocks from disk, which with an 8 KB block size means 64 KB of data. In a data warehouse the multiblock read count is generally set to a high value like 32, because most of the SQL statements there are SELECT commands.
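
The multiblock read count itself is just an initialization parameter; for example, with an 8 KB block size, setting it to 32 lets a single multiblock read fetch up to 256 KB (a sketch):

SQL> show parameter db_file_multiblock_read_count

SQL> alter system set db_file_multiblock_read_count=32;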


Oracle can have different block sizes in the same database (2k, 4k, 8k, 16k and 32k) depending on the requirement.

An Oracle block has different parts:

1. Variable header - this part of the block contains information about the block, such as the block address.
2. Table directory - this part contains information about the table whose data is stored in this block.
3. Row directory - this part contains the addresses of the rows stored in the data block.
4. Row data - the actual rows are stored in this part of the block.
5. Free space - this part of the block is reserved for future updates of the existing rows.


If PCTUSED is set to 40% and PCTFREE is set to 20%, then once a block has filled up to its PCTFREE limit, new rows can be inserted into it again only after its used space drops below 40%, that is, when more than 60% of the block is free.
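
PCTFREE and PCTUSED are set at the segment level; a minimal sketch (the table name is made up, and note that PCTUSED is honored only with manual segment space management, it is ignored under ASSM):

SQL> create table emp_history
     (empno number, ename varchar2(30), hiredate date)
     pctfree 20 pctused 40
     tablespace users;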

Saturday 25 February 2012

HOW TO START ORACLE INSTANCE WITHOUT SPFILE AND PFILE??

This starts what is called a dummy instance, which comes up with a default MEMORY_TARGET value (automatic memory management). In this situation you do not need to define the size of any SGA component; Oracle takes care of the SGA component sizes and the PGA size. Generally we use such an instance for restoring the spfile from autobackup.

Remove all spfiles and pfiles from the dbs location; you should also have permission on the $ORACLE_HOME/rdbms/log directory.

RMAN>startup force nomount;



DEMO

C:\Documents and Settings\amit>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 25 20:13:09 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>>startup force

ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'H:\APP\ORACLE\PRODUCT\11.1.0\DATABASE\
INITIBM.ORA'

SYS>>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Pr
oduction
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\Documents and Settings\amit>rman target /

Recovery Manager: Release 11.1.0.6.0 - Production on Sat Feb 25 20:13:51 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database (not started)

RMAN> startup force nomount

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'H:\APP\ORACLE\PRODUCT\11.1.0\DATABASE\
INITIBM.ORA'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area     159019008 bytes

Fixed Size                     1331852 bytes
Variable Size                 67112308 bytes
Database Buffers              83886080 bytes
Redo Buffers                   6688768 bytes

RMAN> exit


Recovery Manager complete.

C:\Documents and Settings\amit>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 25 20:17:31 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>>sho parameter spf

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
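
Once the dummy instance is up, the usual next step is to restore the spfile from autobackup (a sketch; the DBID below is a made-up placeholder, and the autobackup is assumed to be in the default location):

RMAN> set dbid 1234567890;

RMAN> restore spfile from autobackup;

RMAN> startup force;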


Sunday 19 February 2012

RMAN "REPORT" COMMAND

The RMAN REPORT command is very useful for finding out the names of the files which need to be backed up according to a specified retention policy.
   
RMAN> report need backup ;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of files with less than 2 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1    1     H:\FINANCE\SYSTEM01.DBF
2    0     H:\FINANCE\UNDOTBS01.DBF
3    0     H:\FINANCE\SYSAUX01.DBF
4    0     H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP
5    0     H:\FINANCE\NOLOGTS.DBF

RMAN> report need backup recovery window of 2 days;

Report of files that must be backed up to satisfy 2 days recovery window
File Days  Name
---- ----- -----------------------------------------------------
2    2322  H:\FINANCE\UNDOTBS01.DBF
3    2322  H:\FINANCE\SYSAUX01.DBF
4    2322  H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP

RMAN> report need backup recovery window of 3 days;

Report of files that must be backed up to satisfy 3 days recovery window
File Days  Name
---- ----- -----------------------------------------------------
2    2322  H:\FINANCE\UNDOTBS01.DBF
3    2322  H:\FINANCE\SYSAUX01.DBF
4    2322  H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP

RMAN> report need backup redundancy 3;

Report of files with less than 3 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1    1     H:\FINANCE\SYSTEM01.DBF
2    0     H:\FINANCE\UNDOTBS01.DBF
3    0     H:\FINANCE\SYSAUX01.DBF
4    0     H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP
5    0     H:\FINANCE\NOLOGTS.DBF

RMAN> report need backup days 3;

Report of files whose recovery needs more than 3 days of archived logs
File Days  Name
---- ----- -----------------------------------------------------
2    2322  H:\FINANCE\UNDOTBS01.DBF
3    2322  H:\FINANCE\SYSAUX01.DBF
4    2322  H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP

RMAN> report need backup incremental 1;

Report of files that need more than 1 incrementals during recovery
File Incrementals Name
---- ------------ ----------------------------------------------


RMAN> report need backup incremental 0;

Report of files that need more than 0 incrementals during recovery
File Incrementals Name
---- ------------ ----------------------------------------------

RMAN> report need backup recovery window of 3 days database skip tablespace  nologts;

Report of files that must be backed up to satisfy 3 days recovery window
File Days  Name
---- ----- -----------------------------------------------------
2    2322  H:\FINANCE\UNDOTBS01.DBF
3    2322  H:\FINANCE\SYSAUX01.DBF
4    2322  H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP


RMAN> report need backup redundancy 3 datafile 3;

Report of files with less than 3 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
3    0     H:\FINANCE\SYSAUX01.DBF

RMAN> report need backup  tablespace  nologts;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of files with less than 2 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
5    0     H:\FINANCE\NOLOGTS.DBF


RMAN> report need backup incremental 2;

Report of files that need more than 2 incrementals during recovery
File Incrementals Name
---- ------------ ----------------------------------------------

RMAN> report need backup incremental 1;

Report of files that need more than 1 incrementals during recovery
File Incrementals Name
---- ------------ ----------------------------------------------

RMAN> report need backup device type disk;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of files with less than 2 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1    1     H:\FINANCE\SYSTEM01.DBF
2    0     H:\FINANCE\UNDOTBS01.DBF
3    0     H:\FINANCE\SYSAUX01.DBF
4    0     H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP
5    0     H:\FINANCE\NOLOGTS.DBF

RMAN> report need backup device type sbt;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of files with less than 2 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1    0     H:\FINANCE\SYSTEM01.DBF
2    0     H:\FINANCE\UNDOTBS01.DBF
3    0     H:\FINANCE\SYSAUX01.DBF
4    0     H:\RMAN\DATA_D-FINANCE_I-2590818756_TS-USERS_FNO-4_1IMVMG63.BKP
5    0     H:\FINANCE\NOLOGTS.DBF

RMAN> report need backup  tablespace  nologts device type disk;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of files with less than 2 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
5    0     H:\FINANCE\NOLOGTS.DBF

We can also use REPORT commands to find out about obsolete backups (with respect to a retention policy), that is, backups which are no longer needed. Suppose you have set the retention policy to REDUNDANCY 2 and have taken 5 database backups; then the first 3 backups are obsolete.
 

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Archive Log          1      30-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_30\O1_MF_1_8_7HVVHLTV_.ARC
Archive Log          2      31-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_31\O1_MF_1_9_7HX2XNFO_.ARC
Archive Log          3      31-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_31\O1_MF_1_10_7HY178WB_.ARC
Archive Log          4      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_11_7HYTDWTC_.ARC
Archive Log          5      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_12_7HZVFGKL_.ARC
Archive Log          6      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_13_7J02VNXS_.ARC
Archive Log          7      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_14_7J0JD298_.ARC
Archive Log          8      02-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_02\O1_MF_1_15_7J2CJVC7_.ARC
Archive Log          9      02-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_02\O1_MF_1_16_7J3HK6H4_.ARC
Archive Log          10     06-JAN-12          H:\FINANCE\ARCH\ARC00027_0770941832.001
Archive Log          11     06-JAN-12          H:\FINANCE\ARCH\ARC00028_0770941832.001
Archive Log          12     06-JAN-12          H:\FINANCE\ARCH\ARC00029_0770941832.001
Archive Log          13     07-JAN-12          H:\FINANCE\ARCH\ARC00030_0770941832.001
Datafile Copy        2      08-JAN-12          H:\RMAN2\DATA_D-FINANCE_I-2590818756_TS-SYSTEM_FNO-1_
32N08GGJ.BKP
Backup Set           98     08-JAN-12
  Backup Piece       263    08-JAN-12          H:\RMAN\3MN08P58_8_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       262    08-JAN-12          H:\RMAN\3MN08P58_7_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       261    08-JAN-12          H:\RMAN\3MN08P58_6_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       260    08-JAN-12          H:\RMAN\3MN08P58_5_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       259    08-JAN-12          H:\RMAN\3MN08P58_4_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       258    08-JAN-12          H:\RMAN\3MN08P58_3_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       257    08-JAN-12          H:\RMAN\3MN08P58_2_1.BKP
Backup Set           98     08-JAN-12
  Backup Piece       256    08-JAN-12          H:\RMAN\3MN08P58_1_1.BKP
Backup Set           97     08-JAN-12
  Backup Piece       255    08-JAN-12          H:\RMAN2\3NN08P58_1_1.BKP
Backup Set           99     08-JAN-12
  Backup Piece       264    08-JAN-12          H:\RMAN\C-2590818756-20120108-14.CTL

RMAN> report obsolete redundancy 3;

Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Archive Log          1      30-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_30\O1_MF_1_8_7HVVHLTV_.ARC
Archive Log          2      31-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_31\O1_MF_1_9_7HX2XNFO_.ARC
Archive Log          3      31-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_31\O1_MF_1_10_7HY178WB_.ARC
Archive Log          4      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_11_7HYTDWTC_.ARC
Archive Log          5      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_12_7HZVFGKL_.ARC
Archive Log          6      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_13_7J02VNXS_.ARC
Archive Log          7      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_14_7J0JD298_.ARC
Archive Log          8      02-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_02\O1_MF_1_15_7J2CJVC7_.ARC
Archive Log          9      02-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_02\O1_MF_1_16_7J3HK6H4_.ARC
Archive Log          10     06-JAN-12          H:\FINANCE\ARCH\ARC00027_0770941832.001
Archive Log          11     06-JAN-12          H:\FINANCE\ARCH\ARC00028_0770941832.001
Archive Log          12     06-JAN-12          H:\FINANCE\ARCH\ARC00029_0770941832.001
Archive Log          13     07-JAN-12          H:\FINANCE\ARCH\ARC00030_0770941832.001
Datafile Copy        2      08-JAN-12          H:\RMAN2\DATA_D-FINANCE_I-2590818756_TS-SYSTEM_FNO-1_
32N08GGJ.BKP

RMAN> report obsolete redundancy 5;

no obsolete backups found

RMAN> report obsolete redundancy 4;

no obsolete backups found

RMAN> show retention policy;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

RMAN> report obsolete redundancy 3;

Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Archive Log          1      30-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_30\O1_MF_1_8_7HVVHLTV_.ARC
Archive Log          2      31-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_31\O1_MF_1_9_7HX2XNFO_.ARC
Archive Log          3      31-DEC-11          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2011_12_31\O1_MF_1_10_7HY178WB_.ARC
Archive Log          4      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_11_7HYTDWTC_.ARC
Archive Log          5      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_12_7HZVFGKL_.ARC
Archive Log          6      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_13_7J02VNXS_.ARC
Archive Log          7      01-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_01\O1_MF_1_14_7J0JD298_.ARC
Archive Log          8      02-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_02\O1_MF_1_15_7J2CJVC7_.ARC
Archive Log          9      02-JAN-12          H:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\FINANCE\ARCHIVE
LOG\2012_01_02\O1_MF_1_16_7J3HK6H4_.ARC
Archive Log          10     06-JAN-12          H:\FINANCE\ARCH\ARC00027_0770941832.001
Archive Log          11     06-JAN-12          H:\FINANCE\ARCH\ARC00028_0770941832.001
Archive Log          12     06-JAN-12          H:\FINANCE\ARCH\ARC00029_0770941832.001
Archive Log          13     07-JAN-12          H:\FINANCE\ARCH\ARC00030_0770941832.001
Datafile Copy        2      08-JAN-12          H:\RMAN2\DATA_D-FINANCE_I-2590818756_TS-SYSTEM_FNO-1_
32N08GGJ.BKP

RMAN> report obsolete recovery window of 3 days;

no obsolete backups found

RMAN> report obsolete recovery window of 2 days;

no obsolete backups found

RMAN> report obsolete recovery window of 1 days;

no obsolete backups found
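
Once the report has been reviewed, the listed backups can be removed with the matching DELETE command (a short sketch; DELETE OBSOLETE uses the configured retention policy and asks for confirmation unless NOPROMPT is given):

RMAN> delete obsolete;

RMAN> delete noprompt obsolete redundancy 3;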