Friday 23 March 2012

ORACLE STREAMS REPLICATION AT SCHEMA LEVEL

Oracle Streams can be used to share data and messages between two Oracle databases at different levels. Unlike an Oracle Data Guard logical standby database, the databases in an Oracle Streams setup are independent of each other and match only for the replicated objects. The concept of Oracle Streams is similar to a logical standby, but Streams is the more popular choice for this kind of selective replication. Replication can be configured, for example, at the following levels (a sketch mapping each level to a DBMS_STREAMS_ADM procedure follows the list):

1. Table level (one or more tables are replicated between two Oracle databases)
2. Schema level (one or more schemas are replicated between two Oracle databases)
3. Tablespace level (one or more tablespaces are replicated between two Oracle databases)
4. Database level (changes in one database are replicated in the other database)
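
For reference, each of these levels corresponds roughly to a procedure in the DBMS_STREAMS_ADM package. The mapping below is only a hedged sketch (parameters omitted); this post uses MAINTAIN_SCHEMAS:

-- Table level      : dbms_streams_adm.maintain_tables
-- Schema level     : dbms_streams_adm.maintain_schemas   (used later in this post)
-- Tablespace level : dbms_streams_adm.maintain_tablespaces
-- Database level   : dbms_streams_adm.maintain_global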

Oracle Streams is a flow of information from one database to another: changes are captured at one database, staged in a queue, propagated, and applied at the other database. Oracle uses its Advanced Queuing (AQ) infrastructure to capture, propagate and apply the data between the databases.


An Oracle Streams setup can be
1. Homogeneous (between two Oracle databases)
2. Heterogeneous (between an Oracle and a non-Oracle database)
 
In this post I have replicated a schema called TEST1 between two databases:

1. Source database (SRCDB) on server dba3
2. Target database (TGTDB) on server dba4

On both the source and target databases do the following steps:

-- create a tablespace dedicated to Streams (the datafile path shown is for the source; adjust it for the target)

create tablespace strts datafile '/disk1/oradata/srcdir/strts.dbf' size 100m;

-- create stream admin user
create user stradm identified by stradm default tablespace strts;

-- grant dba and streams related privs to streams admin user

grant dba to stradm;
SYS>>exec dbms_streams_auth.grant_admin_privilege('STRADM');
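
As a quick sanity check (a hedged example; the DBA_STREAMS_ADMINISTRATOR view should be available in 11g), you can confirm that the Streams administrator was registered:

SYS>>select username, local_privileges, access_from_remote from dba_streams_administrator;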

At source and target databases

STRADM>>alter database add supplemental log data;

Database altered.
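
To verify that minimal supplemental logging is actually enabled, a query like the following can be used (column names as in V$DATABASE):

STRADM>>select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;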


Listener entry (listener.ora) at the source database:

srclist =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dba3)(PORT = 7546))
      )
    )
  )

SID_LIST_srclist =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = src)
      (ORACLE_HOME = /oraeng/app/oracle/product/11.2.0)
    )
  )

Listener entry (listener.ora) at the target side:

tgtlist =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dba4)(PORT = 7589))
      )
    )
  )

SID_LIST_tgtlist=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = tgt)
      (ORACLE_HOME = /oraeng/app/oracle/product/11.2.0)
    )
  )

                                                         
TNS entry (tnsnames.ora) at the source:

to_tgtlist =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST=dba4)(PORT=7589))
    )
    (CONNECT_DATA =
      (SID = tgt)
    )
  )

TNS entry (tnsnames.ora) at the target side:

to_srclist =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST=dba3)(PORT=7546))
    )
    (CONNECT_DATA =
      (SID = src)
    )
  )

Streams-related parameters at the source:

###########source side streams parameters###################
db_unique_name=srcdb
log_archive_dest_1='location=/disk1/oradata/srcdir/arch valid_for=(online_logfiles,primary_role) mandatory'
log_archive_config='send,dg_config=(srcdb,tgtdb)'
log_archive_dest_2='service=to_tgtlist lgwr async noregister valid_for=(online_logfiles,primary_role) db_unique_name=tgtdb'
aq_tm_processes=1
shared_pool_size=400m
job_queue_interval=1
job_queue_processes=20
log_archive_dest_state_1=enable
log_archive_dest_state_2=defer
processes=150
statistics_level=typical
timed_statistics=true
streams_pool_size=500m
db_domain=DHARAM.com
log_archive_max_processes=10
global_names=true
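
If you prefer not to edit the pfile by hand, most of these parameters can also be set with ALTER SYSTEM. The two statements below are only a hedged illustration (an spfile is assumed, and a restart is still needed for the static parameters):

SYS>>alter system set streams_pool_size=500m scope=spfile;
SYS>>alter system set log_archive_config='send,dg_config=(srcdb,tgtdb)' scope=both;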


Streams-related parameters at the target:

############target streams parameters################
db_unique_name=tgtdb
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(online_logfiles,primary_role)'
log_archive_config='receive,dg_config=(srcdb,tgtdb)'
log_archive_dest_2='location=/disk1/oradata/tgtdir/arch valid_for=(standby_logfiles,primary_role)'
aq_tm_processes=1
shared_pool_size=400m
job_queue_interval=1
job_queue_processes=20
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
processes=150
statistics_level=typical
timed_statistics=true
streams_pool_size=500m
db_domain=DHARAM.com
log_archive_max_processes=10
global_names=true
db_recovery_file_dest=/disk1/oradata/tgtdir/flash
db_recovery_file_dest_size=500m

On the source side create the password file and copy it to the target side:

[dharam@dba3 dbs]$ orapwd file=orapw$ORACLE_SID password=sys

[dharma@dba3 dbs]$ ls orapwsrc
orapwsrc

On the target database, rename the password file according to the instance name of the target:

[dharma@dba4 dbs]$ mv orapwsrc orapwtgt

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
TGTDB

STRADM>>alter database rename global_name to tgtdb.DHARAM.com;


Database altered.

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
TGTDB.DHARAM.COM

Because global_names=true, each database link name must match the global name of the remote database. On the source side create a database link that points to the target database:

STRADM>>create database link TGTDB.DHARAM.COM connect to stradm identified by stradm using 'to_tgtlist';

STRADM>>select * from dual@TGTDB.DHARAM.COM;

D
-
X


Now check and set the global name of the source database as well:

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
SRCDB

STRADM>>alter database rename global_name to srcdb.DHARAM.com;

Database altered.

STRADM>>select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
SRCDB.DHARAM.COM

On the target database, create a database link that points to the source database:

STRADM>>create database link SRCDB.DHARAM.COM connect to stradm identified by stradm using 'to_srclist';

Database link created.

STRADM>>select * from dual@SRCDB.DHARAM.COM;

D
-
X

On the source, create the directory object that will be passed to MAINTAIN_SCHEMAS later:

[dharam@dba3 ~]$ mkdir /disk1/oradata/srcdir/strdir

STRADM>>create directory strdir as '/disk1/oradata/srcdir/strdir';

Directory created.

On the target, create the corresponding directory object:

[dharma@dba4 ~]$ mkdir /disk1/oradata/tgtdir/strdir

STRADM>>create directory strdir as '/disk1/oradata/tgtdir/strdir';

Directory created.

STRADM>>alter database add standby logfile ('/disk1/oradata/tgtdir/str1.log','/disk1/oradata/tgtdir/str2.log','/disk1/oradata/tgtdir/str3.log') size 10m;

STRADM>>select group#,sequence#,status from v$standby_log;

    GROUP#  SEQUENCE# STATUS
---------- ---------- ----------
         3          0 UNASSIGNED


In the parameter file of the source, enable the second archive destination:

log_archive_dest_state_2=enable
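
This destination state can also be enabled dynamically, since LOG_ARCHIVE_DEST_STATE_2 is modifiable with ALTER SYSTEM (shown here only as an alternative; the instance is restarted below anyway):

SYS>>alter system set log_archive_dest_state_2=enable scope=both;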

SYS>>startup force
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area  966729728 bytes
Fixed Size                  1340552 bytes
Variable Size             943721336 bytes
Database Buffers           16777216 bytes
Redo Buffers                4890624 bytes
Database mounted.
Database opened.

STRADM>>alter system switch logfile;

System altered.

STRADM>>/

System altered.

STRADM>>select dest_id,status,error from v$archive_dest_status;

On the source, create the TEST1 user and build the demo tables:

SYS>>grant connect,resource to test1 identified by test1;

Grant succeeded.

TEST1>>@?/sqlplus/demo/demobld
Building demonstration tables.  Please wait.
Demonstration table build is complete.

On the target side run the following script:

[dharma@dba4 ~]$ vi ms.sql

begin
  dbms_streams_adm.maintain_schemas(
    schema_names                 => 'test1',
    source_database              => 'SRCDB.DHARAM.COM',
    source_directory_object      => 'strdir',
    destination_database         => 'TGTDB.DHARAM.COM',
    destination_directory_object => 'strdir',
    capture_queue_name           => 'DS_STREAMS_QUEUE',
    apply_queue_name             => 'DS_STREAMS_QUEUE',
    bi_directional               => false,
    include_ddl                  => false);
end;
/

[dharma@dba4 ~]$ sqlplus stradm/stradm

SQL*Plus: Release 11.2.0.1.0 Production on Thu Mar 22 14:39:33 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

STRADM>>@ms.sql

PL/SQL procedure successfully completed.
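
MAINTAIN_SCHEMAS creates and starts the capture, propagation and apply components. A hedged way to confirm them is to query the Streams dictionary views; run each query at the database that hosts the component (DBA_CAPTURE at the capture database, DBA_APPLY at the destination):

STRADM>>select capture_name, status from dba_capture;
STRADM>>select propagation_name, status from dba_propagation;
STRADM>>select apply_name, status from dba_apply;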


STRADM>>select username from dba_users;

USERNAME
----------
OUTLN
SYS
SYSTEM
TEST1
STRADM
APPQOSSYS
DBSNMP
DIP
ORACLE_OCM

9 rows selected.

TEST1>>select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
BONUS                          TABLE
DEPT                           TABLE
DUMMY                          TABLE
EMP                            TABLE
SALGRADE                       TABLE


Checking for DML replication

at source side

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        14

at target side

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        14

at source side

TEST1>>insert into emp select * from emp;

14 rows created.

TEST1>>commit;

Commit complete.

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        28

at target side

TEST1>>select count(*) from emp;

  COUNT(*)
----------
        28
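
If a change does not show up at the target, a first place to look (a hedged suggestion) is the apply error queue on the target side:

STRADM>>select apply_name, local_transaction_id, error_number, error_message from dba_apply_error;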







Cheers!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Oracle ASM Disk Partitioning

Oracle ASM (Automatic Storage Management) is a volume manager solution provided by Oracle Corporation.
In Oracle ASM we create disk groups, and data can be mirrored within a disk group by using NORMAL or HIGH redundancy. A disk group is a combination of individual disks (organised into failure groups) that ASM manages as one unit of storage.

Before creating an ASM instance we should prepare the disks that will be used as part of an ASM disk group.
The following is one method you can follow to prepare OS disk partitions for use in ASM disk groups.

In the following steps, logical partitions are created out of one disk, /dev/sdc; all the created partitions are logical partitions inside one extended partition. In this demonstration I have created partitions of about 1 GB each.

A NORMAL redundancy disk group needs at least two failure groups and a HIGH redundancy disk group needs at least three failure groups (a hedged CREATE DISKGROUP sketch follows the fdisk and mkfs output below).

[root@dba1 ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): d
No partition is defined yet!

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1-1305, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (124-1305, default 124):
Using default value 124
Last cylinder or +size or +sizeM or +sizeK (124-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (247-1305, default 247):
Using default value 247
Last cylinder or +size or +sizeM or +sizeK (247-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (370-1305, default 370):
Using default value 370
Last cylinder or +size or +sizeM or +sizeK (370-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (493-1305, default 493):
Using default value 493
Last cylinder or +size or +sizeM or +sizeK (493-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (616-1305, default 616):
Using default value 616
Last cylinder or +size or +sizeM or +sizeK (616-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (739-1305, default 739):
Using default value 739
Last cylinder or +size or +sizeM or +sizeK (739-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (862-1305, default 862): 
Using default value 862
Last cylinder or +size or +sizeM or +sizeK (862-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (985-1305, default 985):
Using default value 985
Last cylinder or +size or +sizeM or +sizeK (985-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1108-1305, default 1108):
Using default value 1108
Last cylinder or +size or +sizeM or +sizeK (1108-1305, default 1305): +1g

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1231-1305, default 1231):
Using default value 1231
Last cylinder or +size or +sizeM or +sizeK (1231-1305, default 1305): +1g
Value out of range.
Last cylinder or +size or +sizeM or +sizeK (1231-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

After creating the above logical partitions, format them so that any existing data on them is erased. After doing this, all the partitions become candidates eligible for use in failure groups.

[root@dba1 ~]# mkfs.vfat /dev/sdc5
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc6
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc7
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc8
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc9
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc10
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc11
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc12
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc13
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc14
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc15
mkfs.vfat 2.11 (12 Mar 2005)
[root@dba1 ~]# mkfs.vfat /dev/sdc16
mkfs.vfat 2.11 (12 Mar 2005)
/dev/sdc16: No such file or directory
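
Once the partitions are prepared, they can be given to ASM as failure groups. The statement below is only a sketch run from an ASM instance: the diskgroup name DATA, the choice of partitions in each failgroup, and the assumption that ASM_DISKSTRING and the device permissions already let ASM see /dev/sdc* are all illustrative.

create diskgroup data normal redundancy
  failgroup fg1 disk '/dev/sdc5', '/dev/sdc6', '/dev/sdc7'
  failgroup fg2 disk '/dev/sdc8', '/dev/sdc9', '/dev/sdc10';

A HIGH redundancy diskgroup would need a third failgroup in the same way.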



Saturday 17 March 2012

Generate tablespace usage trend / growth on an hourly basis

A common task of a DBA is to monitor space allocation in tablespaces: as they fill up, more datafiles need to be added to accommodate new data. So how can a DBA estimate what size of new datafile will be enough for an entire day or week? If we have a script that can show the trend of tablespace growth, it becomes easy to add a datafile of sufficient size.

When we run this script it asks for the instance number. Give 1 if your database is a single-instance database; in the case of RAC, give the instance number. After the instance number it asks for the ID, which is nothing but the TS# in the V$TABLESPACE view. In general the tablespace number for the SYSTEM tablespace is 0, for SYSAUX it is 1, for UNDO it is 2, and so on.

The main concept behind this script is the view DBA_HIST_TBSPC_SPACE_USAGE, which keeps track of tablespace usage over time. To make the script more useful and specific, V$TABLESPACE and DBA_HIST_SNAPSHOT are also joined in, so that you can view the space-usage statistics of a particular tablespace over time.



SYS> SELECT DISTINCT DHSS.SNAP_ID, VTS.NAME, TO_CHAR(DHSS.END_INTERVAL_TIME,'DD-MM HH:MI') AS SNAP_TIME,
            ROUND((DHTS.TABLESPACE_USEDSIZE*8192)/1024/1024)/&&max_instance_num AS USED_MB,
            ROUND((DHTS.TABLESPACE_SIZE*8192)/1024/1024)/&&max_instance_num AS SIZE_MB
     FROM   DBA_HIST_TBSPC_SPACE_USAGE DHTS, V$TABLESPACE VTS, DBA_HIST_SNAPSHOT DHSS
     WHERE  VTS.TS# = DHTS.TABLESPACE_ID AND DHTS.SNAP_ID = DHSS.SNAP_ID
     AND    DHSS.INSTANCE_NUMBER = 1 AND TABLESPACE_ID = &id
     ORDER  BY 1;


  SNAP_ID NAME                           SNAP_TIME      USED_MB    SIZE_MB
--------- ------------------------------ ----------- ---------- ----------
        1 USERS                          11-03 10:07         12         13
        2 USERS                          11-03 11:30         12         13
        3 USERS                          12-03 09:45         12         13
        4 USERS                          12-03 10:31         12         13
        5 USERS                          14-03 06:48         12         13
        6 USERS                          14-03 07:30         12         13
        7 USERS                          14-03 08:30         12         13
        8 USERS                          14-03 09:31         12         14
        9 USERS                          14-03 10:30         12         14
       10 USERS                          14-03 11:30         12         14
       11 USERS                          15-03 12:30         12         14
       12 USERS                          15-03 09:09         12         14
       13 USERS                          15-03 10:30         12         14
       14 USERS                          15-03 08:23         12         14
       15 USERS                          15-03 09:30         12         14
       16 USERS                          15-03 10:30         12         14
       17 USERS                          15-03 11:30         12         14
       18 USERS                          16-03 12:30         12         14
       19 USERS                          16-03 09:02         12         14
       20 USERS                          16-03 10:30         12         14
       21 USERS                          16-03 06:15         12         14
       22 USERS                          16-03 07:30         12         14
       23 USERS                          16-03 08:30         12         14
       24 USERS                          16-03 09:30         12         14
       25 USERS                          16-03 10:30         12         14
       26 USERS                          17-03 08:55         12         14
       27 USERS                          17-03 09:30         12         14
       28 USERS                          17-03 10:30         12         14
       29 USERS                          17-03 06:13         12         14
       30 USERS                          17-03 07:30         12         14

30 rows selected.
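
A possible extension of the same idea (a sketch only, assuming an 8 KB block size as in the query above) is to let the database compute the hourly growth itself with the LAG analytic function:

SYS> SELECT snap_id, name, snap_time, used_mb,
            used_mb - LAG(used_mb) OVER (PARTITION BY name ORDER BY snap_id) AS growth_mb
     FROM (SELECT DISTINCT dhss.snap_id, vts.name,
                  TO_CHAR(dhss.end_interval_time,'DD-MM HH24:MI') AS snap_time,
                  ROUND((dhts.tablespace_usedsize*8192)/1024/1024) AS used_mb
           FROM   dba_hist_tbspc_space_usage dhts, v$tablespace vts, dba_hist_snapshot dhss
           WHERE  vts.ts# = dhts.tablespace_id
           AND    dhts.snap_id = dhss.snap_id
           AND    dhss.instance_number = 1
           AND    dhts.tablespace_id = &id)
     ORDER BY snap_id;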
 

Friday 16 March 2012

How to find the percentage of buffers occupied by a particular segment in the buffer cache

If you want to know how much of the buffer cache a particular segment is occupying, the following SQL statements can help you.

Step 1. Find out which segments are configured to use the DEFAULT cache. You can query the USER_TABLES view as follows.

DHARAM>>select table_name,cache,buffer_pool from user_tables;

TABLE_NAME                     CACHE BUFFER_
------------------------------ ----- -------
SALGRADE                       N     DEFAULT
BONUS                          N     DEFAULT
EMP                            N     DEFAULT
DEPT                           N     DEFAULT


Step 2. Now find out the data object id so that we can uniquely identify the database object of interest. Here I want the object id of the EMP table. This step is important because more than one segment may have the name EMP, but each will always have a unique object id.

DHARAM>>select data_object_id,object_type from user_objects where object_name=upper('emp');

DATA_OBJECT_ID OBJECT_TYPE
-------------- -------------------
         69827 TABLE

Step 3. To see how many buffers are occupied by this segment, query the V$BH view, which keeps track of the buffers occupied by each object.

DHARAM>>select count(*) from v$bh where objd=69827;

COUNT(*)
----------
         6

Step 4. Now issue the following statement to see the number of buffers in each cache.

SYS>>select name,block_size,sum(buffers) from v$buffer_pool group by name,block_size having sum(buffers)<>0;

NAME                 BLOCK_SIZE SUM(BUFFERS)
-------------------- ---------- ------------
DEFAULT                    8192          996
KEEP                       8192         1494
RECYCLE                    8192         6474

Here the DEFAULT cache has 996 buffers, of which 6 belong to the EMP segment. You can use the following statement to calculate the percentage of buffers in the DEFAULT cache that belong to the EMP segment.

SYS>>select (6/996)*100 "%of_buffer_in_buffer_cache" from dual;

%of_buffer_in_buffer_cache
--------------------------
                .602409639
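
The two numbers can also be combined into a single statement; this is just a convenience sketch, with 69827 being the EMP object id found in step 2:

SYS>>select round(100 * count(*) /
            (select sum(buffers) from v$buffer_pool where name = 'DEFAULT'), 2) as pct_of_default_cache
     from v$bh
     where objd = 69827;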


Wednesday 7 March 2012

UNDO TABLESPACE AND PENDING OFFLINE STATUS

The undo tablespace concept was introduced in Oracle 9i; it stores the before image of modified data in the database. Undo data provides features like

1.Rollback of transactions,
2.Read consistency and
3.Flashback features.

Before the undo tablespace concept we used rollback segments, which could provide rollback and read consistency. Those rollback segments could not be used for the flashback features, and they were complex to use: we had to create them, give them user-defined names and sizes, and bring them online manually.

With an undo tablespace everything can be automatic. When we create an undo tablespace, by default Oracle creates 10 undo segments and brings them online.

PENDING OFFLINE UNDO SEGMENTS 
At a time we can have more than one undo tablespace in our database, but only one will be the active (default) undo tablespace for the entire database. All undo segments of the active undo tablespace will be online, and all undo segments of the non-active undo tablespaces will be offline. When we start a transaction, it is assigned to one of the undo segments that has expired undo extents.

Suppose we start a transaction and it is using the 7th undo segment of undo tablespace UNDOTBS. While the transaction is still running, we switch the database to a new undo tablespace, UNDO1. At this point all the undo segments of UNDOTBS go offline except the one holding the current transaction, in our case the 7th undo segment, which shows the status PENDING OFFLINE until the transaction completes. All the undo segments of UNDO1 come online.

SYS>>create undo tablespace undo1 datafile 'h:\oradata\ibm\undo1.dbf' size 5m;

Tablespace created.

SYS>>select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS
TEMP
USERS
UNDO1

6 rows selected.

SYS>>sho parameter undo_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS

SYS>>select usn,status from v$rollstat;

       USN STATUS
---------- ---------------
         0 ONLINE
         1 ONLINE
         2 ONLINE
         3 ONLINE
         4 ONLINE
         5 ONLINE
         6 ONLINE
         7 ONLINE
         8 ONLINE
         9 ONLINE
        10 ONLINE

11 rows selected.


From some other session

U1>>insert into emp select * from emp;

14 rows created.

From SYS session

SYS>>alter system set undo_tablespace=undo1;

System altered.

SYS>>sho parameter undo_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDO1

SYS>>select usn,status from v$rollstat;

       USN STATUS
---------- ---------------
         0 ONLINE
         7 PENDING OFFLINE
        11 ONLINE
        12 ONLINE
        13 ONLINE
        14 ONLINE
        15 ONLINE
        16 ONLINE
        17 ONLINE
        18 ONLINE
        19 ONLINE

       USN STATUS
---------- ---------------
        20 ONLINE

12 rows selected.
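
To see which transaction is still holding the PENDING OFFLINE undo segment, a join like the following can be used (a hedged sketch using V$ROLLNAME, V$TRANSACTION and V$SESSION):

SYS>>select r.usn, r.name, s.sid, s.username, t.status
     from v$rollname r, v$transaction t, v$session s
     where r.usn = t.xidusn
     and   t.ses_addr = s.saddr;

Once that transaction commits or rolls back, the segment goes fully offline.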





Sunday 4 March 2012

Difference among INSTANCE NAME, DATABASE NAME and DB_UNIQUE_NAME

INSTANCE NAME
The SID and the instance name are the same thing. When we start an Oracle instance, the SGA is allocated and the background processes are started. This instance is the gateway used to access the database, which is stored on disk in the form of datafiles, and the name we give this phenomenon is the instance name. Starting an instance is nothing but invoking the Oracle software for your own use.

Let me give you an example: suppose you want to watch a movie. You will invoke Windows Media Player first and then access the movie file with that player; even if you do not invoke the player and click the movie file directly, the player is still invoked first, and only then can you watch the movie. Invoking Windows Media Player is like starting the instance (the Oracle software), and watching the movie is like accessing the database. The movie is stored in the form of a movie file, and the database is stored in the form of datafiles.

DB_NAME
DB_NAME is the name of the database that is stored in the form of OS-level datafiles. You use the invoked instance in order to access this database. Hence the SID and the database name can be different, and the instance name for a database can be changed at any point in time if a database shutdown is allowed. Whenever you want to change the instance name of your database, export a different SID, start the instance with that name, and make sure this name is included in the name of your parameter file.

DB_UNIQUE_NAME
DB_UNIQUE_NAME is just one more (unique) name, alongside DB_NAME, used when the database participates in a configuration with more than one database. DB_UNIQUE_NAME uniquely identifies the database among many databases in a network: many databases can have the same DB_NAME, but they cannot have the same DB_UNIQUE_NAME. For example, in a Data Guard environment the primary and the standby have the same DB_NAME but different DB_UNIQUE_NAME values, so each database can be uniquely identified while being registered in RMAN as a target or while being added to a Data Guard broker configuration.
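
To check all three names on your own database (a quick illustration; DB_UNIQUE_NAME is exposed both in V$DATABASE and as an initialization parameter):

SYS>>select instance_name from v$instance;
SYS>>select name, db_unique_name from v$database;
SYS>>show parameter db_unique_name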