Understanding wait events helps during performance analysis

Identifying and understanding wait events in Oracle during performance analysis

The main component of a performance issue is response time: how quickly the Oracle database returns a response to the user. For example, when a user fires a SELECT query, the response time is the time in which the user gets the result of that query.

Response time = Processing time + wait time

Two main types of contention in Oracle:

1. Contention for a transaction lock on a table row.
2. Contention from simultaneous requests for areas of the shared pool (latch contention).

Find the percentage of wait time versus processing time:

col metric_name for a30
select metric_name, value from v$sysmetric where metric_name in ('Database CPU Time Ratio','Database Wait Time Ratio')
and intsize_csec =(select max(INTSIZE_CSEC) from V$SYSMETRIC);

METRIC_NAME                    VALUE
------------------------------ ----------
Database Wait Time Ratio       0
Database CPU Time Ratio        100.265895

On another Server:

METRIC_NAME                    VALUE
------------------------------ ----------
Database Wait Time Ratio       56.1708528
Database CPU Time Ratio        43.8291472

Note: The first server looks fine, with zero wait time. On the second server, however, the wait time exceeds the processing time. When the query shows a Database Wait Time Ratio much greater than the Database CPU Time Ratio, the database is spending more time waiting than processing, and you must dig deeper into the Oracle wait events to identify the specific wait events causing this.

Oracle also tracks time model statistics.

DB time is CPU time plus wait time for all sessions, as reported in AWR/Statspack.

Note: In the time model statistics, if DB CPU consumes most of the DB time for the instance, it shows the database was actively processing most of the time.
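The time model statistics themselves can be queried from V$SYS_TIME_MODEL (values are in microseconds); comparing DB CPU against DB time shows how much of the time was active processing:

```sql
-- Instance-wide time model statistics, in microseconds.
select stat_name, value
from v$sys_time_model
where stat_name in ('DB time', 'DB CPU', 'sql execute elapsed time')
order by value desc;
```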

Identify the SQL statements responsible for the most waits in the database:

This query ranks the queries run in the past 30 minutes by the total time waited by each query:

-- for the last 30 minutes (sysdate - 60/2880 = 30 minutes)
select ash.user_id, u.username, s.sql_text, sum(ash.wait_time + ash.time_waited) ttl_wait_time
from v$active_session_history ash, v$sqlarea s, dba_users u
where ash.sample_time between sysdate - 60/2880 and sysdate
and ash.sql_id = s.sql_id and ash.user_id = u.user_id
group by ash.user_id, s.sql_text, u.username
order by ttl_wait_time desc;

-- for the last 24 hours
select ash.user_id, u.username, s.sql_text, sum(ash.wait_time + ash.time_waited) ttl_wait_time
from v$active_session_history ash, v$sqlarea s, dba_users u
where ash.sample_time between sysdate - 1 and sysdate
and ash.sql_id = s.sql_id and ash.user_id = u.user_id
group by ash.user_id, s.sql_text, u.username
order by ttl_wait_time desc;

Find the wait events in a session:

select event, count(*) from v$session_wait group by event;

select event, state, seconds_in_wait from v$session_wait where sid = 10;

Check the wait class times:

-- the third column is the average time per wait (time_waited/total_waits)
select wait_class, sum(time_waited), sum(time_waited)/sum(total_waits) sum_waits
from v$system_wait_class
group by wait_class
order by 3 desc;

WAIT_CLASS           SUM(TIME_WAITED)  SUM_WAITS
-------------------- ---------------- ----------
Idle                       1.5858E+10 78.9074156
Administrative                   6729 55.6115702
Configuration                    5896 17.6526946
Scheduler                          87 2.63636364

Note: If most of the accumulated wait time is in the Idle class, the database is generally healthy.
If the database has accumulated a very large wait time for the Application wait class or the User I/O wait class, drill down into the individual wait events in those classes.

Further investigate the wait class categories:

select a.event, a.total_waits, a.time_waited, a.average_wait
from v$system_event a, v$event_name b, v$system_wait_class c
where a.event_id=b.event_id
and b.wait_class#=c.wait_class#
and c.wait_class in ('Application','Concurrency')
order by average_wait desc;

The db file sequential read event (which indicates indexed reads) often appears at the top of the wait event list. If high values for the db file sequential read wait event are due to a very large number of small indexed reads, it is not really a problem; this is natural in a database.
The db file scattered read wait event is caused by full table scans of large tables. If you experience this wait event, investigate the possibility of adding indexes to the table or tables involved.
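The two read events can be compared directly from V$SYSTEM_EVENT:

```sql
-- Compare indexed reads vs. full-scan reads at the instance level.
select event, total_waits, time_waited, average_wait
from v$system_event
where event in ('db file sequential read', 'db file scattered read')
order by time_waited desc;
```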

Drill down into the row lock contention and log file sync wait events:

select a.sid, a.event, a.total_waits, a.time_waited, a.average_wait
from v$session_event a, v$session b
where time_waited > 0
and a.sid=b.sid
and b.username is not NULL
and a.event='enq: TX - row lock contention';

The log file sync wait event can also be caused by too large a setting for the LOG_BUFFER initialization parameter.
Too large a value for LOG_BUFFER leads the LGWR process to write data less frequently to the redo log files.
The database automatically calculates the value of the _log_io_size parameter; by default, it is set to one third of the log buffer size.
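Since _log_io_size is a hidden parameter, a quick way to see what its default would be is to derive one third of LOG_BUFFER from V$PARAMETER (a sketch):

```sql
-- Derive the default _log_io_size as one third of the log buffer.
-- (v$parameter.value is a string, so convert it to a number first.)
select to_number(value)/3 as default_log_io_size_bytes
from v$parameter
where name = 'log_buffer';
```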

alter system set "_log_io_size"=1024000 scope=spfile;

Check the size of the log buffer with the LOG_BUFFER parameter:
show parameter log_buffer

Resolving Buffer Busy Waits

Buffer busy waits occur when more than one session simultaneously accesses the same data block. A high number of buffer busy waits usually means an inefficient query is reading
too many data blocks into the buffer cache, keeping other sessions that want to access one or more of those same blocks waiting.
Not only that, a query that reads too much data into the buffer cache may lead to the aging out of necessary blocks from the cache.
You must investigate queries that involve the segment causing the buffer busy waits, with a view to reducing the number of data blocks they read into the buffer cache.

1. Get the object ID that is causing the buffer busy waits:
select row_wait_obj# from v$session where event = 'buffer busy waits';

2. Get the object name and type:
select owner, object_name, subobject_name, object_type from dba_objects where data_object_id = &row_wait_obj;

3. Find the queries related to the objects and tune them.
— If you have enough storage, you can use the KEEP pool in the buffer cache.
— Try to avoid indexes of low cardinality (a low number of unique values), because they cause too many block reads.
— Try to reduce the number of low-cardinality indexes in the database.

Reduce the read by other session wait event

The read by other session wait event is sometimes responsible for the highest number of waits in an AWR/Statspack report.
Read by other session means the same block (of a table or an index) needs to be read by multiple sessions.

1. You can find the data blocks a session is waiting for with the following query:

select p1 "file#", p2 "block#", p3 "class#" from v$session_wait where event = 'read by other session';

2. Find the exact segment (table or index)

select relative_fno, owner, segment_name, segment_type
from dba_extents
where file_id = &file
and &block between block_id and block_id + blocks - 1;

3. Once you find the segment name:
— Identify the queries using the object and tune them.
— Try to create a new tablespace with a smaller block size and move the segment to that tablespace.
— Check whether any low-cardinality indexes are being used, because this type of index makes the database read a large number of data blocks into the buffer cache.
— Try to replace low-cardinality indexes with an index on a column with high cardinality.
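The smaller-block-size suggestion can be sketched as follows (tablespace, table, and index names here are hypothetical); note that a non-default block size requires a matching buffer cache to be configured first:

```sql
-- Configure a 4K buffer cache before creating a 4K tablespace.
alter system set db_4k_cache_size = 64m;

-- Create the smaller-block tablespace and move the hot segment into it.
create tablespace small_blk
  datafile 'small_blk01.dbf' size 100m
  blocksize 4k;

alter table ic.hot_table move tablespace small_blk;
-- Moving a table invalidates its indexes, so rebuild them.
alter index ic.hot_table_idx rebuild tablespace small_blk;
```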

For more information regarding wait events and solutions, see: Wait Events and Suggestions


DWM process consuming CPU in Windows

DWM Process consuming CPU in Windows Platform

Desktop Window Manager (dwm.exe) is a compositing window manager that renders all those pretty effects in Windows: transparent windows, live taskbar thumbnails, Flip3D, and even high resolution monitor support.

While working on a Windows server, I found that the dwm.exe process was consuming a lot of CPU and had put the system in a hung state.

On searching the web, it appeared that a service named Desktop Window Manager needs to be disabled, but I tested on my machine first before doing it on the production server. In Windows 7, Desktop Window Manager is present in the service list. On stopping the service, the monitor display refreshes, but it does not have much effect on resolution.

But when I tried on Windows Server 2012, the service is not present.

For Windows 7, the steps to disable it are as follows:
1. Click the Start button to open the Start menu.
2. In the Windows search box, type services and click the Services link.
3. In the Services window, search for and double-click Desktop Window Manager Session Manager.
4. In its properties window, click Stop to stop the service.
5. If you want this process never to start when Windows loads, change the Startup type option from Automatic to Disabled.
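The same steps can be done from an elevated command prompt; on Windows 7 the short name of the Desktop Window Manager Session Manager service is UxSms (a sketch; verify the service name on your system first):

```shell
sc stop UxSms
sc config UxSms start= disabled
```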

On Windows Server 2012 the service is not present; you need to apply a bug fix as follows:


Constraints in database

Explanation of Constraints:

Not Null
The NOT NULL constraint enforces a column to NOT accept NULL values.
This enforces a field to always contain a value, which means that you cannot insert a new record or update a record without adding a value to this field.


Notnull Constraint
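A minimal sketch (table and column names are hypothetical):

```sql
-- ID must always have a value; NAME may be NULL.
create table emp_nn (
  id   number not null,
  name varchar2(50)
);
```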

Primary Key
The PRIMARY KEY constraint uniquely identifies each record in a database table.
Primary keys must contain UNIQUE values and cannot contain NULL values. A table can have only one primary key, which may consist of a single field or multiple fields.


Primary Key
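A minimal sketch (names are hypothetical):

```sql
-- DEPT_ID is unique and NOT NULL; a table can have only one primary key.
create table dept_pk (
  dept_id   number primary key,
  dept_name varchar2(50)
);
```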

Unique Key
The UNIQUE constraint ensures that all values in a column are different.
Both the UNIQUE and PRIMARY KEY constraints provide a guarantee for uniqueness for a column or set of columns.
A PRIMARY KEY constraint automatically has a UNIQUE constraint.
However, you can have many UNIQUE constraints per table, but only one PRIMARY KEY constraint per table.


Unique Key
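A minimal sketch (names are hypothetical):

```sql
-- EMAIL must be unique across rows, but unlike a primary key it may be NULL.
create table acct_uq (
  acct_id number primary key,
  email   varchar2(100) unique
);
```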

Check
The CHECK constraint is used to limit the value range that can be placed in a column.
If you define a CHECK constraint on a single column it allows only certain values for this column.
If you define a CHECK constraint on a table it can limit the values in certain columns based on values in other columns in the row.


Check Constraint
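A minimal sketch (names are hypothetical):

```sql
-- SALARY must be positive; an insert that violates the check
-- fails with ORA-02290 (check constraint violated).
create table sal_ck (
  emp_id number,
  salary number check (salary > 0)
);
```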

Foreign Key
A FOREIGN KEY is a key used to link two tables together.
A FOREIGN KEY is a field (or collection of fields) in one table that refers to the PRIMARY KEY in another table. The table containing the foreign key is called the child table, and the table containing the candidate key is called the referenced or parent table


Foreign Key
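A minimal sketch of a parent/child pair (names are hypothetical):

```sql
-- Parent table with the referenced primary key.
create table dept_parent (
  dept_id   number primary key,
  dept_name varchar2(50)
);

-- Child table: inserting a DEPT_ID not present in the parent
-- fails with ORA-02291 (integrity constraint violated).
create table emp_child (
  emp_id  number primary key,
  dept_id number references dept_parent (dept_id)
);
```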

Package DBMS_SQL_MONITOR used for SQL monitoring in Oracle

Package DBMS_SQL_MONITOR used for SQL monitoring of performance issues

DBMS_SQL_MONITOR is used for real-time monitoring of SQL statements.
You get the real-time execution details for the query.
To use DBMS_SQL_MONITOR, you need the STATISTICS_LEVEL parameter set to TYPICAL or ALL, and the CONTROL_MANAGEMENT_PACK_ACCESS parameter set to DIAGNOSTIC+TUNING.
It is available only in Enterprise Edition.

Steps to use the DBMS_SQL_MONITOR package for monitoring the SQL Statements:

1. Start the monitoring operation with the following package call:

variable v_sql_mon_beg number;
exec :v_sql_mon_beg := dbms_sql_monitor.begin_operation(dbop_name => 'QUERY1', dbop_eid => null, forced_tracking => 'Y');
print :v_sql_mon_beg

2. Execute the SQL statement/load which needs to be checked:

Select count(*) from ic.tran;

3. End the operation with the following package call:

exec dbms_sql_monitor.end_operation(dbop_name => 'QUERY1', dbop_eid => :v_sql_mon_beg);

4. Fetch the monitoring report:

SET LONG 1000000
SPOOL E:\report1.html
SELECT DBMS_SQL_MONITOR.report_sql_monitor(
         dbop_name    => 'QUERY1',
         type         => 'HTML',
         report_level => 'ALL') AS report
FROM dual;
SPOOL OFF

Pin object in Buffer Cache Memory of SGA in Oracle


The buffer cache is a part of the SGA. It keeps recently used blocks of data in memory to reduce disk I/O.

The buffer cache can be configured in three ways:
Default: used to store data blocks retrieved from data files. The DB_CACHE_SIZE parameter defines the default buffer cache.
Keep: holds frequently accessed blocks without aging them out. It is managed by the user, who decides which objects to place in the keep pool. Defined by the DB_KEEP_CACHE_SIZE parameter.
Recycle: for objects that you do not want to keep in memory; segments that are scanned rarely or are not used frequently. Defined by the DB_RECYCLE_CACHE_SIZE parameter.

With Automatic Memory Management in Oracle, the following views let you know the memory utilization of the different components:

select component ,current_size from v$memory_dynamic_components where current_size != 0;

select component, current_size from v$memory_dynamic_components where component = 'DEFAULT buffer cache';

For DB_KEEP_CACHE_SIZE you can allocate a minimum value for use (it is the keep buffer cache parameter):

alter system set db_keep_cache_size=100m scope=spfile;

1. Pin the object in Keep buffer pool:

-- Define the buffer pool at creation time
create table ic.test1 (id number) tablespace users storage (buffer_pool keep);

create index ic.test_idx on ic.test1(id) tablespace users storage (buffer_pool keep);

-- Pin the object in the keep buffer pool later
alter table ic.test1 storage (buffer_pool keep);

alter index ic.test_idx storage (buffer_pool keep);

2. Check whether the buffer pool is KEEP, RECYCLE, or DEFAULT:

select BUFFER_POOL from dba_tables where OWNER='IC' AND TABLE_NAME='TEST1';

3. Bring it back to DEFAULT from the KEEP pool:

alter table ic.test1 storage(buffer_pool default);

4. Check the buffer pool again:

select BUFFER_POOL from dba_tables where OWNER='IC' AND TABLE_NAME='TEST1';

Instance Parameter used in hints for particular SQL


Suppose you want to test an SQL query by changing instance-level parameters with the ALTER SESSION command.
You find that the particular query works fine when a parameter is changed, but you cannot change the parameter at the system level in production because you do not know the impact on other queries.

In that case you can use the OPT_PARAM hint in the SQL query to set the parameter for that particular query only and improve its performance.

You can change the following session parameters, or equivalently set them as hints for a particular query:

alter session set optimizer_features_enable = '';
alter session set optimizer_index_caching = 20;
alter session set optimizer_index_cost_adj = 20;
alter session set "_optimizer_cost_model" = 'io';
alter session set optimizer_mode = first_rows_10;


select /*+ opt_param('optimizer_mode','first_rows_10') */
select /*+ opt_param('_optimizer_cost_model','io') */
select /*+ opt_param('optimizer_index_cost_adj',20) */
select /*+ opt_param('optimizer_index_caching',20) */
select /*+ opt_param('optimizer_features_enable','')*/

Example for using the parameter as Hints in SQL query as follows:

select /*+ opt_param('optimizer_mode','first_rows_10') */ * from fin.tran where entity_no = '10044' and data_transaction = sysdate;

Resize operation completed for file# tempfile alert log



While checking an issue on an Oracle database server, I found the alert log showing that file 1001 had been resized.
But on my server only 10 files exist in the ORADATA folder.

Resize operation completed for file# 1001, old size 1024K, new size 2048K
Resize operation completed for file# 1001, old size 2048K, new size 3072K
Resize operation completed for file# 1001, old size 3072K, new size 4096K

On checking on the net, I found that the alert log records resize operations for both data files and temp files.

SQL> select count(*) from v$datafile;


SQL> select file# from v$tempfile;


Then, on further research, I found that the alert log numbers tempfiles according to the db_files parameter of the database.

show parameter db_files

NAME     TYPE     VALUE
-------- -------- --------
db_files integer  1000

So in the alert log, the tempfile number is the db_files parameter value plus the temp file ID (file#).
Suppose db_files is 500; then the tempfile with file# 1 is reported by the alert log as 501.

That's why the alert log shows the value 1001 in the above example of the resize operation:
Resize operation completed for file# 1001, old size 1024K, new size 2048K