These steps provide guidelines for dealing with query tuning problems. Queries can run slowly for any number of reasons.
1. Identify the query causing the problem
We can identify the query in various ways (e.g. OEM, STATSPACK, a SQL query).
a) Find resource-hungry SQL interactively as follows:

SELECT address, SUBSTR(sql_text,1,20) text, buffer_gets, executions,
       buffer_gets/executions avg
FROM   v$sqlarea
WHERE  executions  > 0
AND    buffer_gets > 100000
ORDER BY 5;
Note: the buffer_gets threshold of 100,000 should be adjusted for the individual system being tuned.
Sample output:

ADDRESS          TEXT                 BUFFER_GETS EXECUTIONS        AVG
---------------- -------------------- ----------- ---------- ----------
000000010514C048 BEGIN DBMS_SCHEDULER     4135073          9 459452.556
000000016641A1E8 DELETE FROM PLANT_SC     6513536         14 465252.571
000000017D292A08 call Sync_Notificati    34439016         65 529831.015
00000000EA47E060 BEGIN DBMS_SCHEDULER    15370793         29 530027.345
000000013872CB68 SELECT T.COI,           67165717         78 861098.936
00000000DCDEB490 /* MV_REFRESH (INS)     15187589         15 1012505.93
00000000EA38CD78 MERGE INTO PLANT_SCH    17829484         17 1048793.18
0000000105821DF0 call xxsap.Dashboard    25739712         17 1514100.71
000000017D8952B0 SELECT T.COI,           28207138         14 2014795.57
0000000157FD77B8 select      st.coi,     57612995         28 2057606.96
00000000EA233F18 select st.coi, p.cci     4117702          2    2058851
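Buffer gets measure logical I/O; a similar hedged variant (the 10,000 threshold here is illustrative, not from the original) ranks cached statements by physical reads instead, which highlights disk-heavy rather than logical-read-heavy SQL:

```sql
-- Rank cached SQL by physical reads per execution; adjust the
-- disk_reads threshold for the individual system being tuned.
SELECT address, SUBSTR(sql_text,1,20) text, disk_reads, executions,
       disk_reads/executions avg_reads
FROM   v$sqlarea
WHERE  executions > 0
AND    disk_reads > 10000
ORDER BY 5;
```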
The ADDRESS value retrieved above can then be used to look up the whole statement in the v$sqltext view:

SELECT sql_text
FROM   v$sqltext
WHERE  address = '00000000EA233F18'
ORDER BY piece;
Sample output:

SQL_TEXT
----------------------------------------------------------------
select st.coi, p.ccid, sts.code AS STATUS, d.last_name || ', ' |
| d.first_name || ' ' || d.middle_name AS PROVIDER, aph.name AS
APH_SITE,    inf.name AS  INF_SITE, plant.name As PLANT, plant.i
d AS PLANT_ID, CAST(aac.scheduled_start AS DATE) AS APH_COL_SCH_
START,    CAST(aac.actual_start AS DATE) AS APH_COL_ACT_START, a
ac.start_tzd AS APH_COL_START_TZD, CAST(ar.scheduled_finish AS D
ATE) AS APH_RINSEBACK_SCH_FINISH,    CAST(ar.actual_finish AS DA
TE) AS APH_RINSEBACK_ACT_FINISH, ar.finish_tzd AS APH_RINSEBACK_
FINISH_TZD,    CAST(ait.scheduled_start AS DATE) AS IN_TRANS_SCH
_START, CAST(ait.actual_start AS DATE) AS IN_TRANS_ACT_START,
 ait.start_tzd AS IN_TRANS_START_TZD, CAST(ait.scheduled_finish
Note: once the query is identified, it can be tuned to reduce resource usage.
b) If the problem relates to CPU-bound applications, then CPU usage for each session can be examined to determine the culprits:

SELECT name, statistic#
FROM   v$statname
WHERE  name LIKE '%CPU%session';
Sample output:

NAME                                                             STATISTIC#
---------------------------------------------------------------- ----------
CPU used by this session                                                 13
IPC CPU used by this session                                             48
global enqueue CPU used by this session                                  51
gc CPU used by this session                                             169
Then determine which sessions are using most of the CPU:

SELECT * FROM v$sesstat WHERE statistic# = 13;
Sample output:

       SID STATISTIC#      VALUE
---------- ---------- ----------
      1489         13          0
      1490         13       3595
      1493         13          0
      1494         13          0
      1496         13        717
      1505         13       8138
      1507         13       1989
      1510         13       1278
      1516         13        577
      1517         13       1228
      1518         13          0
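Rather than hard-coding statistic# 13 (its value can differ between versions), the two lookups can be combined; the sketch below joins v$sesstat, v$statname and v$session, using only standard dictionary columns, to list the top CPU-consuming sessions by name:

```sql
-- Top sessions by 'CPU used by this session', resolved by statistic
-- name rather than by a hard-coded statistic#.
SELECT s.sid, s.username, st.value cpu_used
FROM   v$sesstat st, v$statname n, v$session s
WHERE  st.statistic# = n.statistic#
AND    n.name = 'CPU used by this session'
AND    st.sid = s.sid
ORDER BY st.value DESC;
```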
Look up the SQL details for a given session:

SELECT a.address, SUBSTR(a.sql_text,1,20) text, a.buffer_gets,
       a.executions, a.buffer_gets/a.executions avg
FROM   v$sqlarea a, v$session s
WHERE  s.sid = 16
AND    s.sql_address = a.address
AND    a.executions > 0
ORDER BY 5;
Use v$sqltext to extract the whole SQL text.
Use AUTOTRACE to determine access paths.
 
The AUTOTRACE system variable:

SET AUTOTRACE OFF           - No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN    - The report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS - The report shows only the SQL statement execution statistics.
SET AUTOTRACE ON            - The report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY     - Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.
Sample output:

SQL> SET AUTOTRACE ON
SQL> /

DNAME          ENAME             SAL JOB
-------------- ---------- ---------- ---------
ACCOUNTING     CLARK            2450 MANAGER
ACCOUNTING     KING             5000 PRESIDENT
ACCOUNTING     MILLER           1300 CLERK
RESEARCH       SMITH             800 CLERK
RESEARCH       ADAMS            1100 CLERK
RESEARCH       FORD             3000 ANALYST
RESEARCH       SCOTT            3000 ANALYST
RESEARCH       JONES            2975 MANAGER
SALES          ALLEN            1600 SALESMAN
SALES          BLAKE            2850 MANAGER
SALES          MARTIN           1250 SALESMAN
SALES          JAMES             950 CLERK
SALES          TURNER           1500 SALESMAN
SALES          WARD             1250 SALESMAN

14 rows selected.

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE
1    0   MERGE JOIN
2    1     SORT (JOIN)
3    2       TABLE ACCESS (FULL) OF 'DEPT'
4    1     SORT (JOIN)
5    4       TABLE ACCESS (FULL) OF 'EMP'

Statistics
----------------------------------------------------------
148  recursive calls
  4  db block gets
 24  consistent gets
  6  physical reads
 43  redo size
591  bytes sent via SQL*Net to client
256  bytes received via SQL*Net from client
 33  SQL*Net roundtrips to/from client
  2  sorts (memory)
  0  sorts (disk)
 14  rows processed
c) Poor disk contention/performance
Use STATSPACK/AWR, focusing on the tablespace and file I/O statistics sections, to capture the statement activity.
We can generate an AWR report from OEM (or with $ORACLE_HOME/rdbms/admin/awrrpt.sql) for selected snapshots if we know the time frame in which performance lagged.
We can monitor I/O by setting a 10046 level 8 trace to capture all the waits for a particular session:
 
ALTER SESSION SET events '10046 trace name context forever, level 8';
Excessive I/O can be found by examining the resultant trace file and looking for I/O-related waits such as:

'db file sequential read' (single-block I/O - index, rollback segment or sort)
'db file scattered read'  (multi-block I/O - full table scan)
Note: set TIMED_STATISTICS = TRUE to capture timing information, otherwise comparisons will be meaningless.
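The raw 10046 trace file is easier to read after formatting with tkprof; a typical invocation (the trace and output file names here are placeholders) sorts statements by elapsed execution time:

```shell
# Format the raw trace file: sort=exeela puts the slowest statements
# first, sys=no suppresses recursive SYS-owned statements.
tkprof orcl_ora_12345.trc orcl_ora_12345.txt sys=no sort=exeela
```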
 
d) Unnecessary sorting
The first question to ask is "Does the data really need to be sorted?" If sorting really is needed, try to allocate enough memory to prevent the sorts from spilling to disk and causing I/O problems. Sorting is a very expensive operation:
- High CPU usage
- Potentially large disk usage
Notes:
- Indexes may be used to provide presorted data
- Sort merge joins inherently need to sort
- Some operations do not actually need a sort; in this case the explain plan should show NOSORT for the operation
- Increase the sort area size to promote in-memory sorts
- Modify the query to process fewer rows (less to sort)
- Use sort_direct_writes to avoid flooding the buffer cache with sort blocks
- If using Pro*C, use release_cursor=yes to free up temporary segments held open
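Whether sorts are actually spilling to disk can be checked from v$sysstat; if 'sorts (disk)' is more than a tiny fraction of 'sorts (memory)', the sort area is probably too small:

```sql
-- Instance-wide sort activity since startup.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');
```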
e) Over parsing
Over parsing implies that cursors are not being shared - if statements are referenced multiple times, share them rather than fill up the shared pool with multiple copies of essentially the same statement.
There are two types of parsing:
o Hard parsing - if a new SQL statement is issued which does not exist in the shared pool, it has to be parsed fully: Oracle must allocate memory for the statement from the shared pool and check it syntactically and semantically. This is expensive in terms of CPU usage and the number of latch gets.
o Soft parsing - if a session issues a SQL statement which is already in the shared pool, it can use the existing parsed version of that statement.
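The hard/soft parse balance can be measured from v$sysstat; if 'parse count (hard)' is a large fraction of 'parse count (total)', cursors are not being shared:

```sql
-- Parse activity since instance startup; a healthy OLTP system shows
-- hard parses as a small fraction of total parses.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)',
                'execute count');
```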
Shared pool:
Oracle keeps SQL statements, packages, object information and many other items in an area of the SGA called the shared pool; this sharable area of memory is managed as a sophisticated cache and heap manager rolled into one. It has three fundamental problems to overcome:
o The unit of memory allocation is not constant - allocations from the pool can range from bytes to kilobytes
o Not all memory can be freed when a user finishes with it, since the aim of the shared pool is to maximize the shareability of information - the information in memory may be useful to another session, and Oracle cannot know in advance whether an item will be of use to anyone else or not
o There is no disk area to page out to, so it is not like a traditional cache with a file backing store; only re-creatable information can be discarded from the cache, and it has to be re-created when next needed
We can reduce the load on the shared pool as follows:

- Parse once, execute many: the approach to use in OLTP-type applications is to parse a statement only once and hold the cursor open
- Eliminate literal SQL
- Avoid invalidations
- Set the CURSOR_SHARING parameter to FORCE
- SESSION_CACHED_CURSORS=nnn (can be set at instance or session level)
- CURSOR_SPACE_FOR_TIME parameter
- CLOSE_CACHED_OPEN_CURSORS
- SHARED_POOL_RESERVED_SIZE
- SHARED_POOL_RESERVED_MIN_ALLOC
- SHARED_POOL_SIZE
- _SQLEXEC_PROGRESSION_COST (set it to 0 to avoid SQL sharing issues)
- Precompiler HOLD_CURSOR and RELEASE_CURSOR options
- Pinning cursors in the shared pool
- DBMS_SHARED_POOL.KEEP (objects should be kept immediately after instance startup)
- Flushing the shared pool
- DBMS_SHARED_POOL.PURGE
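As an illustration of the last few items, a frequently used package can be pinned with DBMS_SHARED_POOL.KEEP (the package name below is hypothetical), and the pool can be flushed as a last resort:

```sql
-- Pin a heavily used package so it is never aged out of the shared
-- pool; best done immediately after instance startup.
EXEC DBMS_SHARED_POOL.KEEP('APPS.PKG_ORDER_ENTRY', 'P');

-- Last resort: flush unpinned objects from the shared pool.
ALTER SYSTEM FLUSH SHARED_POOL;
```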
Diagnostics:
Once the statement has been determined, the next stage is to find the reason for the query running slowly.
Gather the explain plan for both the slow query and a comparable fast one; the best way of getting an explain plan is to use AUTOTRACE.
Obtaining an explain plan
- We can collect the explain plan manually or by using SQLT (SQLTXPLAIN).

Manual methods of displaying the plan:
Note: Oracle does not support EXPLAIN PLAN for statements performing implicit type conversion of date bind variables; with bind variables in general, the EXPLAIN PLAN output might not represent the real execution plan.
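The simplest manual method, subject to the bind-variable caveat above, is EXPLAIN PLAN itself followed by DBMS_XPLAN.DISPLAY (the statement shown is just a placeholder):

```sql
-- Load the predicted plan into PLAN_TABLE without executing the SQL...
EXPLAIN PLAN FOR
SELECT * FROM dual;

-- ...then format and display it.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```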
a) Get the SQL_ID and HASH_VALUE of the SQL (we can find the SQL_ID of a statement from an AWR or ASH report, or by selecting from the V$SQL view):

SELECT /* TARGET SQL */ * FROM dual;

SELECT sql_id, hash_value, SUBSTR(sql_text,1,40) sql_text
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* TARGET SQL */%';

Example:

SQL_ID        SQL_TEXT
------------- ----------------------------------------
0xzhrtn5gkpjs SELECT /* TARGET SQL */ * FROM dual
b) Last executed SQL: we can pull execution plans from the library cache if the SQL has already been executed. Get the plan of the last executed SQL as follows:

SQL> set linesize 150
SQL> set pagesize 2000
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALL'));
c) GATHER_PLAN_STATISTICS hint

SQL> select /*+ gather_plan_statistics */ col1, col2 ...
SQL> set linesize 150
SQL> set pagesize 2000
SQL> select * from table(dbms_xplan.display_cursor('&SQL_ID', &CHILD, 'ALL IOSTATS LAST'));
d) Plan from memory

For SQL ID:
select * from table(dbms_xplan.display_cursor('&sql_id'));
select * from table(dbms_xplan.display_cursor('&sql_id', NULL, 'ALL'));

For SQL ID, child cursor:
select * from table(dbms_xplan.display_cursor('&sql_id', &child_number, 'ALL'));

For SQL text:
select t.*
from   v$sql s,
       table(dbms_xplan.display_cursor(s.sql_id, s.child_number, 'ALL')) t
where  s.sql_text like '%&querystring%';
e) From AWR

For SQL ID:
select * from table(dbms_xplan.display_awr('&sql_id'));
select * from table(dbms_xplan.display_awr('&sql_id', NULL, NULL, 'ALL'));

For SQL ID, plan hash value in the current database:
select * from table(dbms_xplan.display_awr('&sql_id', '&plan_hash_value', NULL, 'ALL'));

For SQL ID, plan hash value in a different database ID:
select * from table(dbms_xplan.display_awr('&sql_id', '&plan_hash_value', &dbid, 'ALL'));

For a SQL ID that was executed during a certain period:
select t.*
from   (select distinct sql_id, plan_hash_value, dbid
        from   dba_hist_sqlstat
        where  sql_id = '&sql_id'
        and    snap_id between &begin_snap and &end_snap) s,
       table(dbms_xplan.display_awr(s.sql_id, s.plan_hash_value, s.dbid, 'ALL')) t;

For SQL text:
select t.*
from   (select distinct r.sql_id sql_id, r.plan_hash_value plan_hash_value, r.dbid dbid
        from   dba_hist_sqltext q, dba_hist_sqlstat r
        where  q.sql_id = r.sql_id
        and    q.sql_text like '%&querystring%') s,
       table(dbms_xplan.display_awr(s.sql_id, s.plan_hash_value, s.dbid, 'ALL')) t;
f) From a SQL plan baseline

For SQL handle:
select * from table(dbms_xplan.display_sql_plan_baseline('&sql_handle'));

For SQL handle, plan name:
select * from table(dbms_xplan.display_sql_plan_baseline('&sql_handle', '&plan_name', 'ALL'));

For SQL text:
select t.*
from   (select distinct sql_handle, plan_name
        from   dba_sql_plan_baselines
        where  sql_text like '%&querystring%') s,
       table(dbms_xplan.display_sql_plan_baseline(s.sql_handle, s.plan_name, 'ALL')) t;
Using the SQLT diagnostic tool
There are four main ways to use SQLT:

XPLAIN - generates the explain plan; the SQL is not executed
XTRACT - generates the execution plan for SQL that is already in memory or has been captured by AWR
XECUTE - the SQL is executed, then the execution plan is obtained
XTRXEC - combines the features of XTRACT and XECUTE
Using the enhanced explain plan script
See Enhanced Explain Plan Script (Doc ID 104824.1).
Once we have the explain plan, review it alongside the query text, looking for anything unusual. The goal is to identify any problem areas or anything that could be modified to run more efficiently.
- Identify operations that may cause problems in your type of system; these are general hints:
 
Online Transaction Processing (OLTP) system:
- Full table scans
- Hash or sort merge joins
- Index fast full scans or (large) index range scans
- Parallel query
- Nested loop joins with a high-cardinality outer table

Batch/warehouse system:
- Nested loop joins
- Index lookups
- Identify expensive operations: if the explain plan has steps showing large cardinality or cost figures, these may be the areas where the biggest tuning benefits can be reaped.
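After an EXPLAIN PLAN, the most expensive steps can be pulled straight from PLAN_TABLE; the sketch below uses only standard PLAN_TABLE columns and lists steps by estimated cost:

```sql
-- Show the costliest steps of the most recent EXPLAIN PLAN.
SELECT id, operation, options, object_name, cardinality, cost
FROM   plan_table
ORDER BY cost DESC NULLS LAST;
```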
2. Statistics and analysis
The CBO requires accurate statistics to work efficiently; the NUM_ROWS and LAST_ANALYZED columns in DBA_TABLES can be used to determine the statistics information on objects.
Recommendations for gathering CBO statistics (different systems need different levels of statistical analysis due to differences in their data):
o The accuracy of the statistics depends on the sample size
o How frequently to gather statistics depends on how frequently the objects involved change and how quickly the statistics become inaccurate
o Determine the best sample size by gathering statistics using different sample sizes (a 5% sample is generally sufficient for most tables, although 100% is suggested where feasible)
o Gather statistics on indexes with COMPUTE only; since index data is already sorted this is cheap, whereas COMPUTE on a table is often not acceptable
o When deciding when to gather statistics, choose as quiet a period as possible; although gathering will work on objects that are actively being updated and inserted into, you are much more likely to encounter contention
o It is recommended to store critical statistics so you can revert to a working configuration if a statistics change significantly affects application performance (from 10g onwards the database retains statistics history for 31 days by default, so if gathering statistics causes critical statements to perform badly you can revert to the pre-analyze statistics)
o In a DSS/warehouse environment, queries are generally not predictable, so there is no stable environment/query set to upset by gathering statistics
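These recommendations map onto the DBMS_STATS package; a sketch (the schema and table names are placeholders) gathers with a 5% sample and shows how to revert using the retained statistics history:

```sql
-- Gather with a 5% sample; cascade => TRUE also gathers index stats.
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP', -
       estimate_percent => 5, cascade => TRUE);

-- If the new statistics hurt performance, restore the pre-gather
-- statistics (history is retained 31 days by default from 10g onwards).
EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT', 'EMP', -
       as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
```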
3. Query text
Review the query, looking for any constructs which may cause problems. These are a few areas to look for:
o Large IN-lists / statements
o Outer joins
o Set operators (UNION etc.) - incorrect usage of set operations can be very inefficient
o No WHERE clause or missing join predicates
o Aggregate functions - CPU-intensive functions
o Sorting issues
o Implicit type conversion
o Any other strange or unexpected constructs
o Views, inline views or subqueries
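As an example of the implicit type conversion item, comparing a VARCHAR2 column to a number forces Oracle to apply TO_NUMBER to the column, which defeats any index on it (the table and column names here are illustrative):

```sql
-- Implicit conversion: this is evaluated as TO_NUMBER(order_ref) = 123,
-- so an index on order_ref cannot be used for the lookup.
SELECT * FROM orders WHERE order_ref = 123;

-- Explicit, sargable form: the index on order_ref remains usable.
SELECT * FROM orders WHERE order_ref = '123';
```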
4. Finding an acceptable plan
Examination of an explain plan for the query can only give information on the query as it currently runs.
Trace statistics:
I/O and timing statistics (elapsed and CPU) can be very useful in determining which part of a query is the root cause of the problem.
Below are some potential causes of excessive I/O:
o Full table scans and index fast full scans
o Index range or full scans
o Join order
o Excessive sort areas
Below are potential areas which cause high CPU:
o Index range/full scans
o Join order
o Excessive sort/hash areas
o Nested loops
5. Break the query down into its component parts
Break the query into its constituent parts and tune each part separately.
6. Application knowledge
It is useful to gather information about how a particular query should run, based on the application design.

Ref: Diagnostics for Query Tuning Problems (Doc ID 68735.1)