Thursday, June 30, 2011

Know Backups in Oracle

What is a traditional export?



I shall use "traditional export" to refer to an export created with the exp utility. This
is a user process that connects to the database instance through a server process over Oracle
Net. This is perfectly normal client-server processing: the exp user process generates SQL,
the server process executes it. The SQL generated is SELECT statements against the data
dictionary to extract the definitions of database objects, and SELECTs against tables to read
the rows they contain. This data is sent back to the exp user process, to be formatted as DDL
statements that can be used to create objects, and INSERT statements that can insert rows
into tables. These statements make up the export dump file, written by the exp utility to the
client machine.
For example, consider an export accomplished with this command:

exp scott/tiger@orcl tables=dept file=dept.dmp

Sending the generated file through a utility that will remove any non-printable characters
will give this (partial output only):

EXPORT:V11.02.00
USCOTT
RTABLES
2048
                                      Thu May 27 20:12:51 2010dept.dmp
#G##
#G##
+00:00
BYTE
UNUSED
INTERPRETED
DISABLE:ALL
METRICST
TABLE "DEPT"
CREATE TABLE "DEPT" ("DEPTNO" NUMBER(2, 0), "DNAME" VARCHAR2(14), "LOC" VARCHAR2(13))  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS
INSERT INTO "DEPT" ("DEPTNO", "DNAME", "LOC") VALUES (:1, :2, :3)
ACCOUNTING
NEW YORK
RESEARCH
DALLAS
SALES
CHICAGO
OPERATIONS
BOSTON
CREATE UNIQUE INDEX "PK_DEPT" ON "DEPT" ("DEPTNO" )  PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING

The critical lines are the CREATE TABLE statement, and the INSERT statement with its bind
variables followed by the values for the DNAME and LOC columns (note that the DEPTNO values
are not visible, because the nature of Oracle's encoding for number data types means that
they are not usually printable.) Examining this file makes it clear what a traditional import
will do: it will execute the statements it finds in the dump file, to create the objects and
insert rows into them.
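
For example, a dump file like the one above could be loaded back with the imp utility. This is a sketch; ignore=y makes imp skip the CREATE TABLE error and just insert the rows if the table already exists:

imp scott/tiger@orcl tables=dept file=dept.dmp ignore=y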




What is a Data Pump export?



Functionally, a Data Pump export is similar to a traditional export. The output is a dump file
of information extracted from the data dictionary that can be used to create objects, followed
by rows to be inserted into tables. However, the implementation is different. The DDL commands
are not extracted by querying the data dictionary directly, but by using the DBMS_METADATA
package. The row data is not (as a rule) extracted with normal SELECT statements that copy
blocks into the database buffer cache, but rather by direct path reads of the datafiles.
And most importantly, the work of a Data Pump export is not accomplished through a client-server
session but by server background processes: Oracle Net is not involved, and the dump file is
written to the server machine, not to the client. The expdp and impdp utilities are user
processes that connect over Oracle Net, but they don't do any work: they merely define and
control the Data Pump job.
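
For comparison, a minimal Data Pump export of the same table might look like this. Note that this is a sketch: it assumes a directory object (here called DPUMP_DIR) has already been created on the server and granted to the user:

expdp scott/tiger@orcl tables=dept directory=DPUMP_DIR dumpfile=dept.dmp

The dump file appears in the server directory that DPUMP_DIR points to, not on the client machine.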


For the purposes of this paper, traditional and Data Pump exports are the same: neither is a backup.


What is a backup?



A backup is a copy of the database files. It may be made with operating system utilities and
SQL*Plus (a user managed backup) or by RMAN, the Recovery Manager (a server managed backup.)
A backup may be open or closed (accomplished while the database is open, or while it is closed),
and partial or whole (meaning either a subset of the files that make up the database, or the
entire file set). A further variation possible only with RMAN is full or incremental (all
blocks of a file, or only those blocks changed since a previous backup).


A user managed closed whole backup is three steps:
Copy the controlfile; copy the online logs; copy the datafiles.
A user managed open whole backup is also three steps:
ALTER DATABASE BACKUP CONTROLFILE; copy the datafiles, while their tablespace is in backup
mode; archive the online logs.
The need for an ALTER DATABASE command to back up the controlfile is to ensure read consistency:
as the controlfile is continually being updated, we must rely on Oracle to take a read consistent
snapshot of it.
The tablespaces must be in backup mode while being copied in order that read consistent copies
of changed blocks are written to the redo log, from where they can be extracted to replace any
blocks in the backup that are inconsistent due to their being over-written by the database
writer while being copied.
Archiving of the online redo logs is necessary to ensure that all the change vectors applied
to the database during the course of the backup are available, in order that all the files
can be brought to a consistent state.
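
As a concrete sketch of the open backup steps above (the paths and tablespace name are illustrative only, and the database must be in archivelog mode):

SQL> alter tablespace users begin backup;
SQL> host cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
SQL> alter tablespace users end backup;
SQL> alter database backup controlfile to '/backup/control.bkp';
SQL> alter system archive log current;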


A server managed closed whole backup is identical to a user managed one, except that RMAN will
never back up online logs: they will never be needed, because RMAN either ensures that the
datafiles it backs up are consistent or does not back them up at all.
A server managed open backup is identical to a user managed one, except that RMAN does not place
tablespaces into backup mode. Because it is aware of Oracle block formatting, RMAN can detect
whether a block was updated while being copied, and will re-try until it reads a consistent version.
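
A server managed whole backup can be as simple as this sketch, run from the RMAN prompt against a configured target connection:

RMAN> backup database;
RMAN> backup archivelog all;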



So why is an export not a backup?



The preceding descriptions of export and backup make this clear. While both export and backup
will write out the contents of the database, they do this in different ways. An export is table
oriented: individual objects and their rows are written out, one by one. The granularity of the
export is the table. A backup is file oriented: the granularity of the backup is the file. Why
does one need a backup? To protect the database against damage to files. A means is needed to
replace damaged files: extracting the file from a backup will do exactly that. An export can
never do this. Any one exported table may have had extents in many files, and any one file may
have extents of many tables; there is no way an import can replace one file, because of this
many-to-many relationship between tables and files.
So, you may say, what about a full export? Is that not a backup, since one can create a new
database and import every table? No, that is not good enough. First, there is the issue of
transactional consistency. Oracle's default isolation level ensures that every SELECT statement
will give consistent results, but it does not ensure that a series of SELECTs will give consistent
results. You can use CONSISTENT=Y in an attempt to get a consistent export, but you may need an
undo tablespace the size of Jupiter for this to succeed, and in any case CONSISTENT=Y is not
supported for SYS. But much worse is recovery: it cannot be done. There is no way to apply redo
to an imported database, because redo change vectors are block and file oriented: after a full
import, the block and file references will be meaningless.
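
For illustration, a consistent full export would be attempted along these lines (system/manager is used here only as an example of a suitably privileged account):

exp system/manager full=y consistent=y file=full.dmp

Even when this succeeds, the dump file can only ever be imported; there is no way to apply redo to the result.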


So to conclude, export and import cannot restore and recover following damage to datafiles.
Backups are the only way to do this.

About Oracle Blockers

http://www.orafaq.com/node/854

Create a blocking lock

To begin, create a situation where one user is actively blocking another. Open two sessions. Issue the following commands in Session 1 to build the test table:




SQL> create table tstlock (foo varchar2(1), bar varchar2(1));

Table created.

SQL> insert into tstlock values (1,'a'); 

1 row created.

SQL> insert into tstlock values (2, 'b');

1 row created.

SQL> select * from tstlock ;

FOO BAR
--- ---
1   a
2   b

2 rows selected.

SQL> commit ;

Commit complete.




Now grab a lock on the whole table, still in Session 1:


SQL> select * from tstlock for update ;


And in Session 2, try to update a row:




SQL> update tstlock set bar=
  2  'a' where bar='a' ;




This statement will hang, blocked by the lock that Session 1 is holding on the entire table.




Identify the blocking session





Oracle provides a view, DBA_BLOCKERS, which lists the SIDs of all blocking sessions. But this view is often, in my experience, a good bit slower than simply querying V$LOCK, and it doesn't offer any information beyond the SIDs of any sessions that are blocking other sessions. The V$LOCK view is faster to query, makes it easy to identify the blocking session, and has a lot more information.




SQL> select * from v$lock ;

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60        479 TX     131078      16739          0          6        685          0
ADDF7EC8 ADDF7EE0        422 TM      88519          0          3          0        697          0
ADDF7F74 ADDF7F8C        479 TM      88519          0          3          0        685          0
ADEBEA20 ADEBEB3C        422 TX     131078      16739          6          0        697          1
....     ....            ... ...      ....       ....       ....       ....        ....      ....




Note the BLOCK column. If a session holds a lock that's blocking another session, BLOCK=1. Further, you can tell which session is being blocked by comparing the values in ID1 and ID2. The blocked session will have the same values in ID1 and ID2 as the blocking session, and, since it is requesting a lock it's unable to get, it will have REQUEST > 0.


In the query above, we can see that SID 422 is blocking SID 479. SID 422 corresponds to Session 1 in our example, and SID 479 is our blocked Session 2.


To avoid having to stare at the table and cross-compare ID1's and ID2's, put this in a query:




SQL> select l1.sid, ' IS BLOCKING ', l2.sid
  2  from v$lock l1, v$lock l2
  3  where l1.block =1 and l2.request > 0
  4  and l1.id1=l2.id1
  5  and l1.id2=l2.id2
SQL> /

       SID 'ISBLOCKING'         SID
---------- ------------- ----------
       422  IS BLOCKING         479

1 row selected.




Even better, if we throw a little v$session into the mix, the results are highly readable:




SQL> select s1.username || '@' || s1.machine
  2  || ' ( SID=' || s1.sid || ' )  is blocking '
  3  || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
  4  from v$lock l1, v$session s1, v$lock l2, v$session s2
  5  where s1.sid=l1.sid and s2.sid=l2.sid
  6  and l1.BLOCK=1 and l2.request > 0
  7  and l1.id1 = l2.id1
  8  and l1.id2 = l2.id2 ;


BLOCKING_STATUS
----------------------------------------------------------------------------------------------------
BULKLOAD@yttrium ( SID=422 )  is blocking BULKLOAD@yttrium ( SID=479 )

1 row selected.




There's still more information in the v$lock table, but in order to read that information, we need to understand a bit more about lock types and the cryptically-named ID1 and ID2 columns.




Lock type and the ID1 / ID2 columns





In this case, we already know that the blocking lock is an exclusive DML lock, since we're the ones who issued the locking statement. But most of the time, you won't be so lucky. Fortunately, you can read this information from the v$lock table with little effort.


The first place to look is the TYPE column. There are dozens of lock types, but the vast majority are system types. System locks are normally only held for a very brief amount of time, and it's not generally helpful to try to tune your library cache, undo logs, etc. by looking in v$lock! (See the V$LOCK chapter in the Oracle Database Reference for a list of system lock types.)


There are only three types of user locks, TX, TM and UL. UL is a user-defined lock -- a lock defined with the DBMS_LOCK package. The TX lock is a row transaction lock; it's acquired once for every transaction that changes data, no matter how many objects you change in that transaction. The ID1 and ID2 columns point to the rollback segment and transaction table entries for that transaction.


The TM lock is a DML lock. It's acquired once for each object that's being changed. The ID1 column identifies the object being modified.




Lock Modes





You can see more information on TM and TX locks just by looking at the lock modes. The LMODE and REQUEST columns both use the same numbering for lock modes, in order of increasing exclusivity: from 0 for no lock, to 6 for exclusive lock. A session must obtain an exclusive TX lock in order to change data; LMODE will be 6. If it can't obtain an exclusive lock because some of the rows it wants to change are locked by another session, then it will request a TX in exclusive mode; LMODE will be 0 since it does not have the lock, and REQUEST will be 6. You can see this interaction in the rows we selected earlier from v$lock:




ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60        479 TX     131078      16739          0          6        685          0
ADEBEA20 ADEBEB3C        422 TX     131078      16739          6          0        697          1




Note that ID1 and ID2 in Session 2, which is requesting the TX lock (LMODE=0, REQUEST=6), point back to the rollback and transaction entries for Session 1. That's what lets us determine the blocking session for Session 2.


You may also see TX locks in mode 4, Shared mode. If a block containing rows to be changed doesn't have any interested transaction list (ITL) entries left, then the session acquires a TX lock in mode 4 while waiting for an ITL entry. If you see contention for TX-4 locks on an object, you probably need to increase INITRANS for the object.
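
If TX mode 4 contention on ITL entries is diagnosed, a hedged sketch of the fix follows (the table name and value are illustrative):

SQL> alter table tstlock initrans 4;
SQL> alter table tstlock move;    -- reformats existing blocks; rebuild the table's indexes afterwards

Raising INITRANS alone only affects newly formatted blocks, which is why the MOVE is usually needed.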


TM locks are generally requested and acquired in modes 3, aka Row Exclusive, and 6, Exclusive. DDL requires a TM Exclusive lock. (Note that CREATE TABLE doesn't require a TM lock -- it doesn't need to lock any objects, because the object in question doesn't exist yet!) DML requires a Row Exclusive lock. So, in the rows we selected earlier from v$lock, you can see from the TM locking levels that these are DML locks:




ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
ADDF7EC8 ADDF7EE0        422 TM      88519          0          3          0        697          0
ADDF7F74 ADDF7F8C        479 TM      88519          0          3          0        685          0






Identifying the locked object





Now that we know that each TM row points to a locked object, we can use ID1 to identify the object.




SQL> select object_name from dba_objects where object_id=88519 ;

OBJECT_NAME
--------------
TSTLOCK




Sometimes just knowing the object is enough information; but we can dig even deeper. We can identify not just the object, but the block and even the row in the block that Session 2 is waiting on.




Identifying the locked row





We can get this information by looking at the v$session entry for the blocked session:




SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
  2  from v$session where sid=479 ;

ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
        88519             16          171309             0




This gives us the object ID, the relative file number, the block in the datafile, and the row in the block that the session is waiting on. If that list of data sounds familiar, it's because those are the four components of an extended ROWID. We can build the row's actual extended ROWID from these components using the DBMS_ROWID package. The ROWID_CREATE function takes these arguments and returns the ROWID:




SQL> select do.object_name,
  2  row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
  3  dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
  4  from v$session s, dba_objects do
  5  where sid=479
  6  and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;

OBJECT_NAME     ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW# DBMS_ROWID.ROWID_C
--------------- ------------- -------------- --------------- ------------- ------------------
TSTLOCK                 88519             16          171309             0 AAAVnHAAQAAAp0tAAA




And, of course, this lets us inspect the row directly.




SQL> select * from tstlock where rowid='AAAVnHAAQAAAp0tAAA' ;

FOO BAR
--- ---
1   a






Conclusion





We've seen how to identify a blocking session, and how to inspect the very row that the waiting session is waiting for. Along the way, I hope, we've also learned a bit about v$lock.

Tuesday, June 21, 2011

Oracle V$ views

v$controlfile: Displays the location and status of each controlfile in the database

Column                 Datatype        Description
STATUS                 VARCHAR2(7)     INVALID if the name cannot be determined (which should not occur); null if the name can be determined
NAME                   VARCHAR2(513)   Name of the control file
IS_RECOVERY_DEST_FILE  VARCHAR2(3)     Indicates whether the file was created in the flash recovery area (YES) or not (NO)
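
For example, to list the controlfile copies and check whether any live in the flash recovery area:

SQL> select status, name, is_recovery_dest_file from v$controlfile;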


v$database: Displays information about the database from the control file.

For example, you can check (using log_mode) whether or not the database is in archivelog mode:

SQL>select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG
checkpoint_change# records the SCN of the last checkpoint.
switchover_status: can be used to determine if it is possible to perform a switchover operation. Only available for physical standby databases. Can be:
  • NOT ALLOWED,
  • SESSIONS ACTIVE,
  • SWITCHOVER PENDING,
  • SWITCHOVER LATENT,
  • TO PRIMARY,
  • TO STANDBY or
  • RECOVERY NEEDED.
See protection modes in data guard for the columns protection_mode and protection_level.
database_role determines whether a database is a primary, a logical standby, or a physical standby database.
force_logging tells whether or not the database is in force logging mode.
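
Pulling a few of these columns together gives a quick health check of the database (a sketch):

SQL> select name, log_mode, database_role, force_logging from v$database;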

 

Oracle Script 1: gives me access to all of the v_$* views


Here, for example, is a script that I use to "recreate me" -- recreate my account
for testing purposes.  It gives me access to all of the v_$* views:

@connect "/ as sysdba"
set echo on
drop user ops$tkyte cascade;
create user ops$tkyte identified externally;
grant connect, dba to ops$tkyte;
alter user ops$tkyte default tablespace users;

begin
    for x in ( select object_name from user_objects where object_type = 'VIEW' and
               object_name like 'V\_$%' escape '\' )
    loop
        execute immediate 'grant select on ' || x.object_name || ' to ops$tkyte';
    end loop;
end;
/                                                 
@connect /
 
Source: asktom.oracle.com

Monday, June 20, 2011

Oracle DBMS_Profiler

PL/SQL developers are always trying to optimize their code to perform more efficiently.  As of Oracle 8.1.5, a utility exists that assists developers with this process: dbms_profiler, one of the most under-utilized utilities within Oracle.

The basic idea behind profiling is for the developer to understand where their code is spending the most time, so they can detect and optimize it.  The profiling utility allows Oracle to collect data in memory structures and then dump it into tables as application code is executed.  dbms_profiler is to PL/SQL what tkprof and Explain Plan are to SQL.
The profiling tools are not part of the base installation of Oracle, so some setup is required on the part of the developer.  Three tables and a sequence need to be created, along with the Oracle supplied PL/SQL package.

In the $ORACLE_HOME/rdbms/admin directory, two files exist that create the environment needed for the profiler to execute. 

·     proftab.sql - Creates three tables and a sequence and must be executed before the profload.sql file.
·     profload.sql - Creates the package header and package body for DBMS_PROFILER.  This script must be executed as the SYS user.
Once the environment is established, the three tables created by proftab.sql contain the vital information needed to benchmark PL/SQL performance.   Queries against these tables will provide the insight needed to optimize the PL/SQL code. 

The plsql_profiler_runs table contains information related to a profiling session.  Things such as when the run was started, who started it, and how long the run lasted are contained in this table.  This table has the following important columns:

·     runid - This is the unique run identifier given to each profiler execution.
·     related_run - Runid of related run that can be called by the programmer.
·     run_owner - User who started the run.
·     run_date - Timestamp of the date of the run.
·     run_comment – User provided text concerning anything about this run that they wish to specify.  This is used mainly for documentation, since run_id is hard to remember.
·     run_total_time – Total elapsed time for this run.
The plsql_profiler_units table defines each PL/SQL component (unit) that was executed during a profiler run.  Benchmarks for each of the units are stored in this table in the following columns:

·     runid - References plsql_profiler_runs(runid).
·     unit_number - Internally generated library unit number.
·     unit_type - Library unit type (PACKAGE, PROCEDURE, etc).
·     unit_owner - Library unit owner name (the owner of the object).
·     unit_name - Library unit name (the name of the object as defined in the user_objects view).
·     unit_timestamp - Time when the unit was created.  The "unit" is the procedural object (procedure, function, package).  This column holds the same data as the created column in the user_objects view.
·     total_time - Total time used by this unit for the given run.
The primary key for this table is runid, unit_number.

The plsql_profiler_data table is where the real performance benchmarks are stored.  This table contains the execution statistics for each line of code contained in our PL/SQL unit.  This table can be joined to the user_source view and can extract the actual line of code for each benchmark.  The primary key includes runid, unit_number, and line#.

The plsql_profiler_data table has the following important columns as indicated by the results of the following query:

select runid, unit_number, line#, total_occur, total_time,   
       min_time, max_time
from plsql_profiler_data;


     RUNID UNIT_NUMBER      LINE# TOTAL_OCCUR TOTAL_TIME   MIN_TIME   MAX_TIME
---------- ----------- ---------- ----------- ---------- ---------- ----------
         1           1          8           3   33284677     539733   28918759
         1           1         80           2    1134222     516266     617955
         1           1         89           0          0          0          0
         1           1         90           0          0          0          0
         1           1         92           0          0          0          0
         1           1         95           0          0          0          0
         1           1        103           0          0          0          0
         1           1        111           0          0          0          0
         1           1        112           0          0          0          0
         1           1        116           1    1441523    1441523    1441523
         1           1        119           0          0          0          0
         1           1        121           1    1431466    1431466    1431466
         1           1        123           1     136330     136330     136330
         1           1        132           1     978895     978895     978895
         1           1        140           0          0          0          0
         1           1        141           0          0          0          0
         1           1        143           0          0          0          0
         1           1        146           1    2905397    2905397    2905397
         1           1        152           2    1622552     574374    1048177
         1           1        153           0          0          0          0
         1           1        157           1     204495     204495     204495
         1           1        160           0          0          0          0

The line# above is used to tie these execution benchmarks back to a line of source in the user_source view.

The profload.sql file contains calls to two other files:

·     dbmspbp.sql - Creates the actual sys.dbms_profiler package.  This must be created as the SYS user, which is the main drawback of this utility.
·     prvtpbp.plb - Creates the sys.dbms_profiler_lib library object (wrapped).  Again, this must be executed as the SYS user.
Figure 8.1 depicts the relationships between the three profiler tables, as well as the indirect relationship to the dba_source or user_source view (Source).  Note that everything begins with a RUN and drills down to the real performance data for a particular PL/SQL line of code. 


Figure 8.1 – Relationships between dbms_profiler tables
The environment is now configured, and the profiling utility is ready to be put to work.    
The profiler does not begin capturing performance information until the call to start_profiler is executed. 

SQL> exec dbms_profiler.start_profiler('Test of raise procedure by Scott');

PL/SQL procedure successfully completed.

The profiler captures data on a session-by-session basis.  This means that if the user SCOTT started the profiler by executing the command above, only PL/SQL objects that were executed and owned by SCOTT will be profiled, and consequently have data in the profiler tables described earlier.  The SCOTT user is only used as an example; it could be any database user.
Flushing Data during a Profiling Session
The flush command enables the developer to dump statistics during program execution without stopping the profiling utility.  The only other time Oracle saves data to the underlying tables is when the profiling session is stopped, as shown below: 

SQL> exec dbms_profiler.flush_data();

PL/SQL procedure successfully completed.

A developer could use the flush procedure with dbms_debug and step, line by line, through a procedure, flushing performance benchmarks along the way.   Or, if you have a very long running PL/SQL program, flushing data can be very useful in the performance tuning process.
Stopping a profiler execution is done after an adequate period of gathering performance benchmarks, as determined by the developer.  Once the developer stops the profiler, all the remaining (unflushed) data is loaded into the profiler tables.

SQL> exec dbms_profiler.stop_profiler();

PL/SQL procedure successfully completed.

The dbms_profiler package also provides procedures that suspend and resume profiling (pause_profiler(), resume_profiler()).
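
For example, a sketch of leaving a section of code out of the collection:

SQL> exec dbms_profiler.pause_profiler();
SQL> -- run the code that should not be profiled here
SQL> exec dbms_profiler.resume_profiler();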

Now that the profiler has stopped, the data is available for diagnostics from within Oracle, and we can begin working with it.
The profiler utility populates three tables with information, plsql_profiler_runs, plsql_profiler_units, and plsql_profiler_data.  Each “run” is initiated by a user and contains zero or more “units”.  Each unit contains “data” about its execution – the guts of the performance data benchmarks. 

The performance information for a line in a unit needs to be tied back to the line source in user_source.  Once that join is made, the developer will have all of the information that they need to optimize, enhance, and tune their application code, as well as the SQL.
To extract high-level data, including the length of a particular run, the script (profiler_runs.sql) below can be executed:

< profiler_runs.sql

column runid format 990
column unit_number format 990
column type format a15
column object_name format a20

select a.runid,
     a.unit_number,
     decode(a.unit_name, '', '',
           substr(a.unit_name,1, 20)) as object_name,
     substr(a.unit_type, 1, 15) as type,
     TO_CHAR(a.total_time/1000000000, '99999.99') as sec,
     TO_CHAR(100*a.total_time/b.run_total_time, '999.9') as pct
     from plsql_profiler_units a, plsql_profiler_runs b
     where a.runid=b.runid
     order by a.runid asc;


RUNID UNIT_NUMBER OBJECT_NAME          TYPE            SEC       PCT
----- ----------- -------------------- --------------- --------- ------
    1           1                                 .00     .0
    1           2                                1.01     .0
    1           3 BMC$PKKPKG           PACKAGE BODY      6921.55   18.2
    1           4                                 .02     .0
    2           1                                 .00     .0
    2           2                                 .01     .0

6 rows selected.


Note that anonymous PL/SQL blocks are also included in the profiler tables.  Anonymous blocks are less useful from a tuning perspective since they cannot be tied back to a source object in user_source.  Anonymous PL/SQL blocks are simply runtime source objects and do not have a corresponding dictionary object (package, procedure, function).  For this reason, the anonymous blocks should be eliminated from most reports.
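
One hedged way to keep anonymous blocks out of such reports is to filter on the unit type, since dbms_profiler records them with unit_type = 'ANONYMOUS BLOCK'. For example, add this to the where clause of the query above:

and a.unit_type <> 'ANONYMOUS BLOCK'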

From the data displayed above, the next step is to focus on the lines within unit number 3, the package body, that are taking the longest.  The script (profiler_top10_lines.sql) below displays the line numbers and performance benchmarks of the 10 worst performing lines of code.


< profiler_top10_lines.sql

select line#, total_occur,
  decode(total_occur, null, 0, 0, 0, total_time/total_occur/1000) as avg,
  decode(total_time, null, 0, total_time/1000) as total_time,
  decode(min_time, null, 0, min_time/1000) as min,
  decode(max_time, null, 0, max_time/1000) as max
  from (select * from plsql_profiler_data
        where runid = 1
        and unit_number = 3        -- the unit identified above
        order by total_time desc)  -- order first, then take the Top 10
  where rownum < 11 ;


     LINE# TOTAL_OCCUR        AVG TOTAL_TIME        MIN        MAX
---------- ----------- ---------- ---------- ---------- ----------
       156           1              5008.457   5008.457   5008.457
        27           1               721.879    721.879    721.879
      2113           1               282.717    282.717    282.717
        89           1               138.565    138.565    138.565
      2002           1               112.863    112.863    112.863
      1233           1                94.984     94.984     94.984
        61           1                94.984     94.984     94.984
       866           1                94.984     94.984     94.984
       481           1                92.749     92.749     92.749
       990           1                90.514     90.514     90.514

10 rows selected.

Taking it one step further, the query below (profiler_line_source.sql) will extract the actual source code for the top 10 worst performing lines. 

< profiler_line_source.sql

select *
  from (select a.line#,
          decode(a.total_occur, null, 0, 0, 0,
                 a.total_time/a.total_occur/1000) as avg,
          substr(c.text, 1, 60) as source
        from plsql_profiler_data a, plsql_profiler_units b, user_source c
        where a.runid       = 1
        and a.unit_number   = 3
        and a.runid         = b.runid
        and a.unit_number   = b.unit_number
        and b.unit_name     = c.name
        and b.unit_type     = c.type
        and a.line#         = c.line
        order by a.total_time desc)  -- order first, then take the Top 10
  where rownum < 11 ;




     LINE#        AVG SOURCE
---------- ---------- --------------------
       156   5008.457   select sum(bytes) into reusable_var from dba_free_space;
        27    721.879   execute immediate dml_str USING  current_time
      2113    282.717   select OBJ#, TYPE# from SYS.OBJ$;
        89    138.565   OBJ_TYPES(BOBJ(I)) := BTYP(I);
      2002    112.863   select count(*) into reusable_var from dba_objects
      1233     94.984   delete from pkk_daily_activity
        61     94.984   update_stats_table(33, reusable_var, null);
       866     94.984   latest_executions := reusable_var - total_executions;
       481     92.749   time_number := hours + round(minutes * 100/60/100,2);
       990     90.514   update_stats_table(45, LOBS, null); 

10 rows selected.


Notice from the output above that most of the information needed to diagnose and fix PL/SQL performance issues is provided.  For lines containing SQL statements, the tuner can optimize the SQL perhaps by adding optimizer hints, eliminating full table scans, etc.  Consult Chapter 5 for more details on using tkprof utility to diagnose SQL issues.

Other useful scripts that are hidden within the Oracle directory structure ($ORACLE_HOME/PLSQL/DEMO) include a few gems that help report and analyze profiler information.  

·     profdemo.sql -A demo script for collecting PL/SQL profiler data.
·     profsum.sql - A collection of useful SQL scripts that are executed against profiler tables. 
·     profrep.sql – Creates views and a package (unwrapped) that populates the views based on the three underlying profiler tables. 
Best Practices for Using dbms_profiler
·     Wrap only for production - Wrapping code is desired for production environments but not for profiling.  It is much easier to see the unencrypted form of the text in our reports than it is to connect line numbers to source versions.  Use dbms_profiler on unwrapped code in a test environment, wrap the code, and then put it in production.
·     Eliminate system packages most of the time - Knowing the performance data for internal Oracle processing does not buy you much since you cannot change anything.  However, knowing the performance problem is within the system packages will save you some time of trying to tune your own code when the problem is elsewhere.
·     When analyzing lines of code, it is best to concentrate on the following:

·     Lines of code that are frequently executed - For example, a loop that executes 5000 times is a great candidate for tuning.  Guru Oracle tuners typically look for that “low hanging fruit” in which one line or a group of lines of code are executed much more than others.  The benefits of tuning one line of code that is executed often far outweigh tuning those lines that may cost more yet are executed infrequently in comparison.
·     Lines of code with a high value for average time executed – The minimum and maximum values of execution time are interesting although not as useful as the average execution time.  Min and max only tell us how much the execution time varies depending on database activity.  Line by line, a PL/SQL developer should focus on those lines that cost the most on an average execution basis.  dbms_profiler does not provide the average, but it does provide enough data to allow it to be computed (Total Execution Time / # Times Executed).
·     Lines of code that contain SQL syntax - The main resource consumers are those lines that execute SQL.  Once the data is sorted by average execution time, the statements that are the worst usually contain SQL.  Optimize and tune the SQL through utilities, such as Explain Plan, tkprof, and third party software.

Oracle Conditional Execution


CASE Expressions

From Oracle 8i one can use CASE statements in SQL. Look at this example:
SELECT ename, CASE WHEN sal = 1000 THEN 'Minimum wage'
                   WHEN sal > 1000 THEN 'Over paid'
                   ELSE 'Under paid'
              END AS "Salary Status"
FROM   emp;

DECODE() Function

The Oracle decode function acts like a procedural statement inside an 
SQL statement to return different values or columns based on the values of
other columns in the select statement. Examples:
select decode(sex, 'M', 'Male', 'F', 'Female', 'Unknown')
from   employees;
select a, b, decode( abs(a-b), 0,   'a = b',
                               a-b, 'a > b',
                                    'a < b') from  tableX;
Note: The decode function is not ANSI SQL and is rarely implemented
in other RDBMS offerings. It is one of the good things about Oracle,
but use it sparingly if portability is required.
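
Where portability is a concern, the same comparison can be rewritten with a standard CASE expression, for example:

select a, b, CASE WHEN a > b THEN 'a > b'
                  WHEN a = b THEN 'a = b'
                  ELSE 'a < b'
             END AS comparison
from   tableX;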

GREATEST() and LEAST() Functions

select decode( GREATEST(A,B), A, 'A is greater OR EQUAL than B', 
                                 'B is greater than A')...


select decode( GREATEST(A,B), 
               A, decode(A, B, 'A NOT GREATER THAN B', 'A GREATER THAN B'), 
               'A NOT GREATER THAN B')...

NVL() and NVL2() Functions

NVL and NVL2 can be used to test for NULL values.
NVL(a,b) == if 'a' is null then return 'b'.
SELECT nvl(ename, 'No Name') 
  FROM emp;
NVL2(a,b,c) == if 'a' is not null then return 'b' else return 'c'.
SELECT nvl2(ename, 'Do have a name', 'No Name') 
  FROM emp;

COALESCE() Function

COALESCE() returns the first expression that is not null. Example:
SELECT 'Dear '||COALESCE(preferred_name, first_name, 'Sir or Madam')
  FROM emp2;

NULLIF() Function

NULLIF() returns a NULL value if both parameters are equal in value. The following query would return NULL:
SELECT NULLIF(ename, ename)
  FROM emp;


Source: orafaq.com website.

Friday, June 17, 2011

Oracle Architecture Diagram

Oracle routine tasks

--> Clear Shared Pool
To clear the shared pool, issue the following:
Alter system flush shared_pool;
This does not clear shared SQL and PL/SQL areas for SQL statements, stored procedures, functions, packages or triggers that are currently being executed or for SQL SELECT statements for which all rows have not yet been fetched.
It may be desirable to clear the shared pool before beginning performance analysis.

--> Determine the amount of free space in the shared pool
To determine the amount of memory remaining in the shared pool:
Select s.name, s.bytes "Free Bytes",
Round((s.bytes/p.value)*100,2) "Perc Free",
p.value / (1024 * 1024) "SP Size MB"
from sys.v_$parameter p, sys.v_$sgastat s
where s.name = 'free memory'
and s.pool = 'shared pool'
and p.name = 'shared_pool_size';
This shows the total free space remaining in the shared pool, the percentage free and the total space. Alternately, the following shows how memory is being used in the shared pool:
select pool, name, sum(bytes) from v$sgastat
where pool like '%pool%'
group by rollup (pool, name);

Obtaining Information about Index in Oracle

General details about the index can also be found by:

Analyze index <index_name> compute statistics;
Select * from user_indexes
    where index_name = '<INDEX_NAME>';
To obtain further detail about an index:
Analyze index <index_name> validate structure;
The command:
Validate index <index_name>;
performs the same function.
This places detailed information about the index in the table INDEX_STATS. This table can only contain one row, describing only the one index. This SQL also verifies the integrity of each data block in the index and checks for block corruption.
For example, to get the size of an index:
validate index <index_name>;
select name "INDEX NAME", blocks * 8192 "BYTES ALLOCATED",
    btree_space "BYTES USED",
    (btree_space / (blocks * 8192))*100 "PERCENT USED"
    from index_stats;
This assumes a block size of 8K (i.e. 8192 bytes). It shows the number of bytes allocated to the index and the number of bytes actually used.
Note that it does not confirm that each row in the table has an index entry or that each index entry points to a row in the table. To check this:
Analyze table <table_name> validate structure cascade;