No great genius has ever existed without some touch of madness.

Monday 31 October 2011

TNSnames and LISTENER.ora

Oracle Networking – TNSnames, listener.ora

        UNDERSTANDING NETWORK CONFIGURATION

Oracle Net is a software component that resides on both the client and the Oracle database server. It is responsible for establishing and maintaining the connection between the client application and the server, as well as for exchanging messages between them.
For the client application and a database to communicate, the client application must specify location details for the database it wants to connect to and the database must provide some sort of identification, or address.

        WHAT IS AN ORACLE DATABASE LISTENER?

On the database server side, an Oracle Database Listener is an Oracle database process that "listens" for users (clients) connecting to the database. For each incoming connection, the listener either creates a dedicated server process or hands the connection to a shared server process that serves many users.

It is configured in a file named listener.ora, with a protocol address that identifies the database.
(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=my-server) (PORT=1521)))
This example shows a TCP/IP protocol address that specifies the host machine of the listener and a port number.


Here is an example of $ORACLE_HOME/network/admin/listener.ora:


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\Oracle_10g)
(PROGRAM = extproc)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = cata1)(PORT = 1521))
)
)
LISTENER2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = cata1)(PORT = 1522))
)
)
Here, two listeners are configured for this server.

        CONFIGURE THE CLIENT
When a client configured with the same protocol address broadcasts a request for a connection, the listener on the server machine brokers the request and forwards it to the Oracle database.

The client uses a CONNECT DESCRIPTOR to specify the database it wants to connect to. This connect descriptor contains a protocol address and a database service name.
A database can have multiple services defined, so a specific one must be specified for the connection. In the case of the preconfigured database that you installed, there is only one service, the name of which defaults to the global database name.
The following example shows a CONNECT DESCRIPTOR that enables a client to connect to a database service called live.

SCOTT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cata1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = live)
    )
  )
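A connect descriptor like the one above is just a nested set of (KEY = value) pairs. As an illustration only (this is not an Oracle tool; real clients parse these files with Oracle Net), a small Python sketch can pull the scalar values out of such an entry:

```python
import re

# The SCOTT entry from above, as it would appear in tnsnames.ora
entry = """
SCOTT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cata1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = live)
    )
  )
"""

def extract(key, text):
    """Return the scalar value of a (KEY = value) pair, or None if absent."""
    m = re.search(r'\(\s*%s\s*=\s*([^()\s]+)\s*\)' % key, text, re.IGNORECASE)
    return m.group(1) if m else None

print(extract('HOST', entry), extract('PORT', entry), extract('SERVICE_NAME', entry))
# cata1 1521 live
```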



Connection Requests:-

Users initiate a connection request by providing a connect string. A CONNECT STRING includes a username and password, along with a connect identifier.

This connect identifier can be the connect descriptor itself, or a name that resolves to the connect descriptor. One of the most common connect identifiers is a net service name, a simple name for a service.

The following example shows a connect string that uses the net service name live as the connect identifier:

CONNECT sagar/verma@live
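The structure of that connect string can be sketched in a few lines of Python (a simplified illustration; it ignores quoted passwords and clauses such as AS SYSDBA):

```python
def parse_connect_string(connect_string):
    """Split 'username/password@net_service_name' into its three parts."""
    credentials, _, service = connect_string.partition('@')
    username, _, password = credentials.partition('/')
    return username, password, service

print(parse_connect_string('sagar/verma@live'))
# ('sagar', 'verma', 'live')
```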




        SOME QUESTIONS:-

Q) If I have configured many listeners on my server, how can I see which one is being used?

Ans) Use the show current_listener command at the LSNRCTL prompt.
       Other useful commands:-
       lsnrctl status
       lsnrctl start
       lsnrctl stop


Q) How can I see whether the listener is running?

Ans) Run lsnrctl status; it reports the listener's status, its uptime, and the services it is listening for.

SWITCHOVER AND FAILOVER


·        Switchover   -   Role reversal between the primary and standby databases. The old primary becomes a standby. Used for hardware upgrades, OS upgrades, etc.

·        Failover   -   The primary is down with little hope of quick recovery (i.e. hardware failure on the primary machine). Once failover has been initiated, the old primary database is no longer usable.

STEPS OF SWITCHOVER

On primary database:

SQL> select switchover_status from v$database;

Initiate switchover on primary database :

SQL> alter database commit to switchover to physical standby;

After the above statement completes, the old primary is now a standby database.

While still on the old primary, shut down the database and restart it as a standby:

SQL> shutdown immediate;

SQL> startup nomount;

SQL> alter database mount standby database;

On the old standby system (new primary):

SQL> select switchover_status from v$database;

On the new primary, complete the switchover, then restart the database:

SQL> alter database commit to switchover to primary;

SQL> shutdown immediate;

SQL> startup;

On the new standby, start managed recovery:

SQL> alter database recover managed standby database disconnect from session;

On the new primary, begin redo transport:

SQL> alter system archive log current;

Switchover complete.
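As a toy model only (no Oracle involved), the role reversal performed by the steps above can be expressed like this, using the prod/prods database names from the Data Guard setup later in this article:

```python
def switchover(roles, old_primary, old_standby):
    """Swap the PRIMARY and PHYSICAL STANDBY roles between two databases."""
    assert roles[old_primary] == 'PRIMARY'
    assert roles[old_standby] == 'PHYSICAL STANDBY'
    roles[old_primary] = 'PHYSICAL STANDBY'
    roles[old_standby] = 'PRIMARY'
    return roles

roles = {'prod': 'PRIMARY', 'prods': 'PHYSICAL STANDBY'}
print(switchover(roles, 'prod', 'prods'))
# {'prod': 'PHYSICAL STANDBY', 'prods': 'PRIMARY'}
```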




STEPS OF FAILOVER

Manually register with the standby any archived redo logs that are available and have not yet been applied.

If using standby redo logs, initiate failover with:

SQL> alter database recover managed standby database finish;

If not using standby redo logs:

SQL> alter database recover managed standby database finish skip standby logfile;

Convert the physical standby to primary (modify the pfile before restart):

Initialization Parameters:

...
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prods'
log_archive_dest_2='service=prod valid_for=(online_logfiles,primary_role) db_unique_name=prod'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='DEFER'
fal_server='prod','prods'
fal_client='prods'
...

        
SQL> alter database commit to switchover to primary;

SQL> shutdown immediate;

SQL> startup;

Failover complete.

Configuring DATA GUARD (without a single error)


 Database Steps


First, your primary database needs to be in ARCHIVELOG mode.
Both the PRIMARY and the STANDBY need to have the same OS version and Oracle database version.


Here, in my example:
---PRIMARY DATABASE : prod
---STANDBY DATABASE : prods
---C:\prod_archives -> location of prod archives
---C:\prods_archives -> location of prods archives on another machine
---C:\rman_prod_backup -> location of backup sets and controlfiles on the primary
      database machine.
---Install Oracle on the standby machine without a starter database.


1.CONFIGURING PARAMETER FILE OF PRIMARY  DATABASE (prod)


SQL> create pfile='C:\prod.ora' from spfile;


My pfile is :-


prod.__db_cache_size=176160768
prod.__java_pool_size=4194304
prod.__large_pool_size=4194304
prod.__shared_pool_size=96468992
prod.__streams_pool_size=0
audit_file_dest='D:\oracle\product\10.2.0\admin\prod\adump'
background_dump_dest='D:\oracle\product\10.2.0\admin\prod\bdump'
compatible='10.2.0.1.0'
control_files='D:\oracle\product\10.2.0\oradata\prod\control01.ctl','D:\oracle\product\10.2.0\oradata\prod\control02.ctl','D:\oracle\product\10.2.0\oradata\prod\control03.ctl'
core_dump_dest='D:\oracle\product\10.2.0\admin\prod\cdump'
db_block_size=8192
db_cache_size=176160768
db_domain=''
db_file_multiblock_read_count=16
DB_FILE_NAME_CONVERT='C:\oracle\product\10.2.0\oradata\prods\','D:\oracle\product\10.2.0\oradata\prod\'
db_name='prod'
db_recovery_file_dest='D:\oracle\product\10.2.0\flash_recovery_area'
db_recovery_file_dest_size=2147483648
DB_UNIQUE_NAME='prod'
FAL_CLIENT='prod'
FAL_SERVER='prods'
INSTANCE_NAME='prod'
java_pool_size=4194304
job_queue_processes=10
large_pool_size=4194304
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod,prods)'
log_archive_dest_1='location=C:\prod_archives VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod'
LOG_ARCHIVE_DEST_2='SERVICE=prods LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prods'
LOG_ARCHIVE_DEST_STATE_1='ENABLE'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_FORMAT='log%t_%s_%r.arc'
LOG_FILE_NAME_CONVERT='C:\oracle\product\10.2.0\oradata\prods\','D:\oracle\product\10.2.0\oradata\prod\'
open_cursors=300
pga_aggregate_target=95420416
processes=150
REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
SERVICE_NAMES='prod'


POINTS TO BE NOTED :-


---Use DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT if locations on  primary  database (prod) and standby database (prods) differ.


---LOG_ARCHIVE_DEST_1 declares the archiving location on primary database and LOG_ARCHIVE_DEST_2 declares where the logs have to be transferred.


---LOG_ARCHIVE_DEST_STATE_1 and LOG_ARCHIVE_DEST_STATE_2 need to be enabled so that logs can be transported from the primary to the standby.


2.CREATE AN INSTANCE ON STANDBY WITH SAME PASSWORD AS THAT OF PRIMARY DATABASE


C:\> oradim -new -sid prods -intpwd sagar -startmode auto

3.CREATE DESIRED LOCATIONS(FOLDERS) ON STANDBY

4.CREATE PFILE FOR STANDBY DATABASE


prods.__db_cache_size=176160768
prods.__java_pool_size=4194304
prods.__large_pool_size=4194304
prods.__shared_pool_size=96468992
prods.__streams_pool_size=0
audit_file_dest='C:\oracle\product\10.2.0\admin\prods\adump'
background_dump_dest='C:\oracle\product\10.2.0\admin\prods\bdump'
compatible='10.2.0.1.0'
control_files='C:\oracle\product\10.2.0\oradata\prods\control01.ctl','C:\oracle\product\10.2.0\oradata\prods\control02.ctl','C:\oracle\product\10.2.0\oradata\prods\control03.ctl'
core_dump_dest='C:\oracle\product\10.2.0\admin\prods\cdump'
db_block_size=8192
db_cache_size=176160768
db_domain=''
db_file_multiblock_read_count=16
DB_FILE_NAME_CONVERT='D:\oracle\product\10.2.0\oradata\prod\','C:\oracle\product\10.2.0\oradata\prods\'
db_name='prod'
db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
db_recovery_file_dest_size=2147483648
DB_UNIQUE_NAME='prods'
FAL_CLIENT='prods'
FAL_SERVER='prod'
INSTANCE_NAME='prods'
java_pool_size=4194304
job_queue_processes=10
large_pool_size=4194304
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod,prods)'
log_archive_dest_1='location=C:\prods_archives VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prods'
LOG_ARCHIVE_DEST_2='SERVICE=prod LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod'
LOG_ARCHIVE_DEST_STATE_1='ENABLE'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_FORMAT='log%t_%s_%r.arc'
LOG_FILE_NAME_CONVERT='D:\oracle\product\10.2.0\oradata\prod\','C:\oracle\product\10.2.0\oradata\prods\'
open_cursors=300
pga_aggregate_target=95420416
processes=150
REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
SERVICE_NAMES='prods'
sga_target=287309824
shared_pool_size=96468992
STANDBY_FILE_MANAGEMENT='AUTO'
streams_pool_size=0
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='C:\oracle\product\10.2.0\admin\prods\udump'

5.CONFIGURE tnsnames.ora AND listener.ora ON PRIMARY DATABASE AND STANDBY DATABASE

6.TAKING BACKUP OF PRIMARY DATABASE USING RMAN


ON PRIMARY :


C:\> set ORACLE_SID=prod
C:\> rman
RMAN> connect target /
RMAN> backup database include current controlfile for standby plus archivelog;

7.COPY ALL BACKUP AND ARCHIVES FROM PRIMARY DATABASE TO STANDBY DATABASE


--Copy C:\rman_prod_backup and C:\prod_archives from the primary to the standby at the same locations with the same names, i.e. C:\rman_prod_backup and C:\prod_archives.

8.CREATING STANDBY DATABASE FROM BACKUP TAKEN AT PRIMARY DATABASE


Before doing this, the primary database must be open and the standby instance must be started but not mounted.


At primary :


SQL> startup mount pfile='C:\prod.ora';
SQL> alter database open;
SQL> create spfile from pfile='C:\prod.ora';


At standby :


SQL> startup nomount pfile='C:\prods.ora';


At the standby, open another cmd window:
C:\> set ORACLE_SID=prods
C:\> rman
RMAN> connect target sys/sagar@prod auxiliary sys/sagar@prods
RMAN> duplicate target database for standby nofilenamecheck dorecover;

SOME USEFUL COMMANDS :-

SQL> select sequence#, first_time, next_time from v$archived_log;
SQL> select sequence#, applied from v$archived_log order by sequence#;
SQL> select current_scn from v$database;
SQL> select archived_thread#, archived_seq#, applied_thread#, applied_seq# from v$archive_dest_status;
SQL> select max(sequence#) from v$log_history;

NOTE: if logs are not being applied on the standby database, then use:
SQL> alter database recover managed standby database disconnect from session;


Data Pump


The Data Pump utility is useful when you need to clone a database, refresh a development database, or move databases between different operating systems.
Another big advantage is the network mode, with which you can move the data directly to the target server, applying data filters and metadata filters, without having to create dump files.
The Data Pump export utility can be invoked via two methods:
By using the expdp command from the command prompt
By using "Export to Export Files" under the Data Movement section of OEM.

There are four export modes available in Data Pump:
Database – exports the entire database
Schemas – you can selectively export the required schemas rather than everything
Tables – if you are looking to restore only a few sets of tables
Tablespace – similar to the Tables mode, but exports all of the tables in the specified tablespaces.



Parallel Full Export and Import:
Assuming you have created the DIR1 and DIR2 directory objects and you want each file to be 2 GB in size:
$ expdp FULL=y PARALLEL=2 DUMPFILE=DIR1:exp1%U.dmp,DIR2:exp2%U.dmp FILESIZE=2G
The %U substitution variable implies that multiple files may be generated, numbered with a two-digit value starting at 01.
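To make the numbering concrete, here is a small Python sketch of how %U expands (simplified; the real Data Pump rules for wrapping past 99 and per-process numbering are not modelled):

```python
def expand_dumpfile_template(template, count):
    """Expand a Data Pump-style %U variable into two-digit numbered names."""
    return [template.replace('%U', '%02d' % n) for n in range(1, count + 1)]

print(expand_dumpfile_template('exp1%U.dmp', 3))
# ['exp101.dmp', 'exp102.dmp', 'exp103.dmp']
```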

Now, you can import from a different directory (REMOTE):
$ impdp DIRECTORY=remote PARALLEL=2 DUMPFILE=exp1%U.dmp,exp2%U.dmp


You can also move objects from one tablespace to another by using the REMAP_TABLESPACE option.
Example:
$ impdp SCHEMAS=iself REMAP_TABLESPACE=iself_tablespace:urself_tablespace


Full database export and import:
expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log
Note: increase the PARALLEL option based on the number of CPUs you have.
impdp system/password full=y dumpfile=data.dmp nologfile=y
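The note about sizing PARALLEL to the CPU count can be sketched as follows (a rule-of-thumb illustration; the cap of 8 is an arbitrary assumption for this sketch, not an Oracle limit):

```python
import os

def suggested_parallel(max_parallel=8):
    """Pick a PARALLEL degree from the CPU count, capped at max_parallel."""
    return min(os.cpu_count() or 1, max_parallel)

# Build an expdp command line like the one above with the suggested degree
cmd = ("expdp vallep/password directory=datapump full=y "
       "dumpfile=data%%U.dmp filesize=2G parallel=%d logfile=full.log"
       % suggested_parallel())
print(cmd)
```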


Using Data Pump Export from Command Prompt
expdp username/password@servicename DIRECTORY=dpump DUMPFILE=dumpfile.DMP
FULL=y LOGFILE=logfile.LOG

AWR and ADDM - best explanation


The AUTOMATIC WORKLOAD REPOSITORY (AWR) is a source of information for several other Oracle 10g features, including:
-Automatic Database Diagnostic Monitor
-SQL Tuning Advisor
-Undo Advisor
-Segment Advisor

ADDM analyzes the snapshots taken by AWR.

Automatic database diagnostic monitoring is enabled by default and is controlled by the CONTROL_MANAGEMENT_PACK_ACCESS and the STATISTICS_LEVEL initialization parameters.

The CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter should be set to DIAGNOSTIC+TUNING

The STATISTICS_LEVEL initialization parameter should be set to TYPICAL (the default) or ALL to enable automatic database diagnostic monitoring.

Setting the DBIO_EXPECTED Parameter:
ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED.
The value of DBIO_EXPECTED is the average time it takes to read a single database block, in microseconds.

Set the value once for all subsequent ADDM executions:
SQL> EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 8000);
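As a quick sanity check of the units (microseconds, not milliseconds), the value 8000 used above corresponds to an average single-block read time of 8 ms:

```python
def dbio_expected_us(avg_block_read_ms):
    """Convert an average single-block read time in ms to microseconds."""
    return int(avg_block_read_ms * 1000)

print(dbio_expected_us(8))
# 8000
```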

Managing AWR Snapshots:
By default, the Automatic Workload Repository (AWR) generates snapshots of performance data once every hour
and retains the statistics in the workload repository for 8 days.
You can change the default values for both the snapshot interval and the retention period.
The data in each snapshot interval is analyzed by ADDM, which compares the differences between snapshots.

Modifying Snapshot Settings:
By default, AWR generates snapshots of performance data once every hour. Alternatively,
you can modify the default values of both the interval between snapshots and their retention period.

EXAMPLE:-
-- This causes the repository to refresh every 15 minutes
-- and retain all data for 2 weeks, i.e. 20160 minutes.
SQL> exec dbms_workload_repository.modify_snapshot_settings(retention=>20160, interval=>15);
Now, see:
SQL> select DBID, SNAP_INTERVAL, RETENTION from dba_hist_wr_control;
SQL> @E:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\awrrpti.sql
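The arithmetic behind those settings is worth checking once (plain Python, nothing Oracle-specific):

```python
# Two weeks of history, expressed in minutes (the unit retention=> expects)
retention_minutes = 14 * 24 * 60
assert retention_minutes == 20160

# With a 15-minute interval, AWR takes this many snapshots per day
snapshots_per_day = 24 * 60 // 15
print(retention_minutes, snapshots_per_day)
# 20160 96
```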

An ADDM report has the following sections:

     Problem: Findings that describe the root cause of a database performance issue.

     Symptom: Findings that contain information that often leads to one or more problem findings.

     Information: Findings that report non-problem areas of the system.

     Recommendations: Composed of actions and rationales.

SQL> Execute DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM','DBIO_EXPECTED',8000);
SQL> @E:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\addmrpt.sql
SQL> desc DBA_ADVISOR_FINDINGS
This view displays all the findings and symptoms that the diagnostic monitor encountered, along with the specific recommendations.

DBA_ADVISOR_RECOMMENDATIONS:
This view displays the results of completed diagnostic tasks, with recommendations for the problems identified in each run. The recommendations should be reviewed in order of the RANK column, as it reflects the magnitude of the problem the recommendation addresses. The BENEFIT column gives the benefit to the system that you can expect after the recommendation is carried out.