DB Name | DB Id | Instance | Inst num | Release | RAC | Host
ICCI | 1314098396 | ICCI1 | 1 | 10.2.0.3.0 | YES | HPGICCI1
 | Snap Id | Snap Time | Sessions | Cursors/Session
Begin Snap: | 2678 | 25-Dec-08 14:04:50 | 24 | 1.5
End Snap: | 2680 | 25-Dec-08 15:23:37 | 26 | 1.5
Elapsed: | | 78.79 (mins) | |
DB Time: | | 11.05 (mins) | |
DB Time does not include the time consumed by Oracle background processes. If DB Time is far smaller than the Elapsed time, the database is fairly idle.
Over this 79-minute window (during which three snapshots were taken), the database consumed 11 minutes of DB Time. The RDA data shows the system has 8 logical CPUs (4 physical CPUs), so on average each CPU was busy for about 1.4 minutes, and CPU utilisation was only around 2% (1.4/79). The system was under very little load.
For a batch system, however, the database workload is usually concentrated in a specific period. If the snapshot interval does not cover that period, or if it spans so long that it includes large stretches of idle time, the analysis is meaningless. Choosing the analysis window is therefore critical: pick a period that is representative of the performance problem.
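As a quick sanity check, the utilisation quoted above can be reproduced from the report's own numbers (11.05 minutes of DB Time, 78.79 minutes elapsed, 8 logical CPUs):

select round(100 * (11.05 / 8) / 78.79, 1) as approx_cpu_util_pct from dual;
-- returns about 1.8, i.e. roughly the 2% mentioned above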
 | Begin | End | |
Buffer Cache: | 3,344M | 3,344M | Std Block Size: | 8K
Shared Pool Size: | 704M | 704M | Log Buffer: | 14,352K
This shows the size of each SGA region (after any changes made by automatic memory management), which can be compared against the initialisation parameter values.
The shared pool consists mainly of the library cache and the dictionary cache. The library cache stores recently parsed (or compiled) SQL, PL/SQL and Java classes; the dictionary cache stores recently referenced data dictionary information. A cache miss in the library cache or dictionary cache is far more expensive than a miss in the buffer cache, so the shared pool should be sized so that recently used data stays cached.
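Library cache and dictionary cache reuse can also be checked live with the standard v$librarycache and v$rowcache views (not part of the report itself); a minimal sketch:

-- Library cache hit ratios by namespace
select namespace, gets, gethitratio, pins, pinhitratio, reloads
  from v$librarycache;

-- Dictionary (row) cache hit percentage per parameter
select parameter, gets, getmisses,
       round(100 * (1 - getmisses / nullif(gets, 0)), 2) as hit_pct
  from v$rowcache
 where gets > 0;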
 | Per Second | Per Transaction
Redo size: | 918,805.72 | 775,912.72
Logical reads: | 3,521.77 | 2,974.06
Block changes: | 1,817.95 | 1,535.22
Physical reads: | 68.26 | 57.64
Physical writes: | 362.59 | 306.20
User calls: | 326.69 | 275.88
Parses: | 38.66 | 32.65
Hard parses: | 0.03 | 0.03
Sorts: | 0.61 | 0.51
Logons: | 0.01 | 0.01
Executes: | 354.34 | 299.23
Transactions: | 1.18 |
% Blocks changed per Read: | 51.62 | Recursive Call %: | 51.72
Rollback per transaction %: | 85.49 | Rows per Sort: | ########
This section shows the database load profile. It is most meaningful when compared with baseline data: if the per-second and per-transaction figures change little between reports, the application workload is stable. On its own, the report only describes the load, and most of these statistics have no universally "correct" value. That said, more than 1-2 logons per second, more than 100 hard parses per second, or more than 300 total parses per second suggest possible contention. (A sketch of the underlying counters follows this list.)
Redo size: redo generated per second / per transaction, in bytes; a measure of how heavily the database is working.
Logical reads: blocks read logically per second / per transaction.
Block changes: blocks changed per second / per transaction.
Physical reads: blocks read from disk per second / per transaction.
Physical writes: blocks written to disk per second / per transaction.
User calls: user calls per second / per transaction.
Parses: number of SQL parses.
Hard parses: number of hard parses among them; too many hard parses means poor SQL reuse.
Sorts: sorts per second / per transaction.
Logons: logons per second / per transaction.
Executes: SQL executions per second / per transaction.
Transactions: transactions per second.
% Blocks changed per Read: proportion of logical reads that were for blocks being changed.
Recursive Call %: proportion of all calls that are recursive calls.
Rollback per transaction %: rollback rate per transaction.
Rows per Sort: rows per sort.
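For reference, the Load Profile figures are the deltas of cumulative system statistics between the two snapshots, divided by elapsed seconds or by the number of transactions. A minimal sketch of the underlying counters (values from v$sysstat are cumulative since instance startup, so they are not directly the per-second figures above):

select name, value
  from v$sysstat
 where name in ('redo size', 'session logical reads', 'db block changes',
                'physical reads', 'physical writes', 'user calls',
                'parse count (total)', 'parse count (hard)',
                'sorts (memory)', 'sorts (disk)', 'logons cumulative',
                'execute count', 'user commits', 'user rollbacks');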
Note:
Hard parses and soft parses in Oracle
To understand soft parses and hard parses, it helps to look at how Oracle processes a SQL statement. When you submit a statement, Oracle goes through several steps before executing it and returning results:
1. Syntax check: verify that the statement is syntactically well formed.
2. Semantic check: verify, for example, that the objects referenced in the statement exist and that the user has the required privileges.
3. Parse: use internal algorithms to parse the statement, producing a parse tree and an execution plan.
4. Execute the statement and return the results.
Soft and hard parsing happen in step 3.
Oracle computes a hash value for the SQL text using an internal hash algorithm and looks for that value in the library cache.
If it is found, the statement is compared with the cached one.
If the two are "identical", the existing parse tree and execution plan are reused and the optimizer's work is skipped. This is a soft parse.
If either of those two conditions fails, the optimizer must build the parse tree and generate the execution plan from scratch. This is a hard parse.
Building a parse tree and generating an execution plan are expensive operations, so hard parses should be avoided as far as possible in favour of soft parses.
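A small SQL*Plus illustration of the difference. The table name is borrowed loosely from the SQL seen later in this report, and the column name ACCT_NO is assumed, so treat this purely as a sketch:

-- Literal values make every statement text unique, so each one is hard parsed:
select * from iccifnsact where acct_no = '1001';
select * from iccifnsact where acct_no = '1002';

-- With a bind variable the statement text stays identical, so after the first
-- execution the cached parse tree and plan are reused (soft parse):
variable acct_no varchar2(20)
exec :acct_no := '1001'
select * from iccifnsact where acct_no = :acct_no;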
Buffer Nowait %: | 100.00 | Redo NoWait %: | 100.00
Buffer Hit %: | 98.72 | In-memory Sort %: | 99.86
Library Hit %: | 99.97 | Soft Parse %: | 99.92
Execute to Parse %: | 89.09 | Latch Hit %: | 99.99
Parse CPU to Parse Elapsd %: | 7.99 | % Non-Parse CPU: | 99.95
This section covers Oracle's key memory hit ratios and other measures of instance efficiency. Buffer Hit Ratio is also known as Cache Hit Ratio, and Library Hit Ratio as Library Cache Hit Ratio. As with the Load Profile, there are no universally "correct" values; whether a value is acceptable depends on the application. In a DSS environment running large parallel queries with direct reads, a Buffer Hit Ratio of 20% may be acceptable, while that would be completely unacceptable for an OLTP system. As a rule of thumb, for OLTP systems the Buffer Hit Ratio should ideally be above 90%. (A sketch recomputing two of these ratios follows this list.)
Buffer Nowait: percentage of buffer gets satisfied without waiting.
Buffer Hit: percentage of block requests satisfied from memory. Watching this value for significant changes matters more than the value itself. For a typical OLTP system, if it drops below 80%, consider giving the database more memory.
Redo NoWait: percentage of requests for space in the log buffer satisfied without waiting. If it is low (90% is a common threshold), consider increasing LOG_BUFFER.
Library Hit: percentage of times Oracle found an already parsed SQL or PL/SQL statement in the library cache. When an application issues a SQL statement or calls a stored procedure, Oracle checks the library cache for a parsed version; if one exists, it executes the statement immediately, otherwise it parses the statement and allocates a shared SQL area for it. A low library hit ratio causes excessive parsing, increases CPU consumption and degrades performance. If it is below 90%, the shared pool may need to be enlarged.
Latch Hit: a latch is a lock that protects a memory structure; think of it as permission for a server process to access an in-memory data structure. Latch Hit should stay above 99%; otherwise it indicates shared pool latch contention, usually caused by unshared SQL or a library cache that is too small. Using bind variables or enlarging the shared pool can help.
Parse CPU to Parse Elapsd: actual parse (CPU) time / (actual parse time + time spent waiting for resources during parsing); the higher the better.
% Non-Parse CPU: actual SQL execution time / (actual SQL execution time + SQL parse time); if it is low, too much time is being spent on parsing.
Execute to Parse: ratio of executions to parses. If SQL reuse is high, this ratio is high; the higher it is, the more often a statement is re-executed after a single parse.
In-memory Sort: percentage of sorts done in memory. If it is low, many sorts are spilling to the temporary tablespace; consider increasing the PGA.
Soft Parse: percentage of soft parses (softs / (softs + hards)); roughly the hit ratio of SQL in the shared pool. If it is low, the application should be changed to use bind variables.
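Two of these ratios can be recomputed from the cumulative counters in v$sysstat (values since instance startup rather than for the snapshot interval); a minimal sketch:

select round(100 * (1 - hard.value / nullif(total.value, 0)), 2) as soft_parse_pct,
       round(100 * (1 - total.value / nullif(execs.value, 0)), 2) as execute_to_parse_pct
  from (select value from v$sysstat where name = 'parse count (total)') total,
       (select value from v$sysstat where name = 'parse count (hard)')  hard,
       (select value from v$sysstat where name = 'execute count')       execs;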
 | Begin | End
Memory Usage %: | 47.19 | 47.50
% SQL with executions>1: | 88.48 | 79.81
% Memory for SQL w/exec>1: | 79.99 | 73.52
Memory Usage %: for a database that has been running for a while, shared pool memory usage should stabilise between 75% and 90%. If it is lower, shared pool memory is being wasted; if it is above 90%, there is contention in the shared pool and memory is insufficient. (A v$sgastat sketch follows this list.)
% SQL with executions>1: percentage of SQL statements executed more than once. If it is low, the application should make more use of bind variables to avoid excessive SQL parsing.
% Memory for SQL w/exec>1: percentage of SQL memory consumed by statements executed more than once.
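Outside the report, shared pool usage can be eyeballed with the standard v$sgastat view; a sketch:

select round(sum(bytes) / 1024 / 1024)                                          as shared_pool_mb,
       round(sum(case when name = 'free memory' then bytes end) / 1024 / 1024)  as free_mb
  from v$sgastat
 where pool = 'shared pool';
-- Memory Usage % is roughly 100 * (1 - free_mb / shared_pool_mb).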
Event | Waits | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class
CPU time | | 515 | | 77.6 |
SQL*Net more data from client | 27,319 | 64 | 2 | 9.7 | Network
log file parallel write | 5,497 | 47 | 9 | 7.1 | System I/O
db file sequential read | 7,900 | 35 | 4 | 5.3 | User I/O
db file parallel write | 4,806 | 34 | 7 | 5.1 | System I/O
This is the last part of the report summary: the five most significant waits in the system, in descending order of their share of total call time. When tuning we want the most visible payoff, so this is where to decide what to do next. For example, if 'buffer busy waits' were a major event, we would go on to the Buffer Wait and File/Tablespace IO sections of the report to identify which files are causing the problem. If the top event is an I/O event, we should look at the SQL ordered by Physical Reads section to identify the statements doing heavy I/O, and at the Tablespace and I/O sections to find files with slow response times. If latch waits are high, the detailed latch statistics will show which latches are involved.
Here, log file parallel write is a relatively large wait, accounting for about 7% of total call time.
Normally, in a database with no problems, CPU time is listed first.
For more wait events, see the Wait Events section of this report.
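Outside AWR, a similar "top non-idle waits since instance startup" list can be pulled from v$system_event; a sketch (10g syntax, idle wait class excluded):

select *
  from (select event, total_waits, round(time_waited_micro / 1e6) as time_waited_s
          from v$system_event
         where wait_class <> 'Idle'
         order by time_waited_micro desc)
 where rownum <= 5;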
 | Begin | End
Number of Instances: | 2 | 2
 | Per Second | Per Transaction
Global Cache blocks received: | 4.16 | 3.51
Global Cache blocks served: | 5.97 | 5.04
GCS/GES messages received: | 408.47 | 344.95
GCS/GES messages sent: | 258.03 | 217.90
DBWR Fusion writes: | 0.05 | 0.05
Estd Interconnect traffic (KB) | 211.16 |
Buffer access - local cache %: | 98.60
Buffer access - remote cache %: | 0.12
Buffer access - disk %: | 1.28
Avg global enqueue get time (ms): | 0.1
Avg global cache cr block receive time (ms): | 1.1
Avg global cache current block receive time (ms): | 0.8
Avg global cache cr block build time (ms): | 0.0
Avg global cache cr block send time (ms): | 0.0
Global cache log flushes for cr blocks served %: | 3.5
Avg global cache cr block flush time (ms): | 3.9
Avg global cache current block pin time (ms): | 0.0
Avg global cache current block send time (ms): | 0.0
Global cache log flushes for current blocks served %: | 0.4
Avg global cache current block flush time (ms): | 3.0
Avg message sent queue time (ms): | 0.0
Avg message sent queue time on ksxp (ms): | 0.3
Avg message received queue time (ms): | 0.5
Avg GCS message process time (ms): | 0.0
Avg GES message process time (ms): | 0.0
% of direct sent messages: | 14.40
% of indirect sent messages: | 77.04
% of flow controlled messages: | 8.56
Statistic Name | Time (s) | % of DB Time
DB CPU | 514.50 | 77.61
sql execute elapsed time | 482.27 | 72.74
parse time elapsed | 3.76 | 0.57
PL/SQL execution elapsed time | 0.50 | 0.08
hard parse elapsed time | 0.34 | 0.05
connection management call elapsed time | 0.08 | 0.01
hard parse (sharing criteria) elapsed time | 0.00 | 0.00
repeated bind elapsed time | 0.00 | 0.00
PL/SQL compilation elapsed time | 0.00 | 0.00
failed parse elapsed time | 0.00 | 0.00
DB time | 662.97 |
background elapsed time | 185.19 |
background cpu time | 67.48 |
This section (the time model statistics) shows how much time each type of database operation consumed.
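The same breakdown is available live from v$sys_time_model (values are cumulative microseconds since instance startup); a sketch:

select stat_name, round(value / 1e6, 2) as seconds
  from v$sys_time_model
 order by value desc;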
Wait Class | Waits | %Time -outs | Total Wait Time (s) | Avg wait (ms) | Waits /txn
User I/O | 66,837 | 0.00 | 120 | 2 | 11.94
System I/O | 28,295 | 0.00 | 93 | 3 | 5.05
Network | 1,571,450 | 0.00 | 66 | 0 | 280.72
Cluster | 210,548 | 0.00 | 29 | 0 | 37.61
Other | 81,783 | 71.82 | 28 | 0 | 14.61
Application | 333,155 | 0.00 | 16 | 0 | 59.51
Concurrency | 5,182 | 0.04 | 5 | 1 | 0.93
Commit | 919 | 0.00 | 4 | 4 | 0.16
Configuration | 25,427 | 99.46 | 1 | 0 | 4.54
Event | Waits | %Time -outs | Total Wait Time (s) | Avg wait (ms) | Waits /txn
SQL*Net more data from client | 27,319 | 0.00 | 64 | 2 | 4.88
log file parallel write | 5,497 | 0.00 | 47 | 9 | 0.98
db file sequential read | 7,900 | 0.00 | 35 | 4 | 1.41
db file parallel write | 4,806 | 0.00 | 34 | 7 | 0.86
db file scattered read | 10,310 | 0.00 | 31 | 3 | 1.84
direct path write | 42,724 | 0.00 | 30 | 1 | 7.63
reliable message | 355 | 2.82 | 18 | 49 | 0.06
SQL*Net break/reset to client | 333,084 | 0.00 | 16 | 0 | 59.50
db file parallel read | 3,732 | 0.00 | 13 | 4 | 0.67
gc current multi block request | 175,710 | 0.00 | 10 | 0 | 31.39
control file sequential read | 15,974 | 0.00 | 10 | 1 | 2.85
direct path read temp | 1,873 | 0.00 | 9 | 5 | 0.33
gc cr multi block request | 20,877 | 0.00 | 8 | 0 | 3.73
log file sync | 919 | 0.00 | 4 | 4 | 0.16
gc cr block busy | 526 | 0.00 | 3 | 6 | 0.09
enq: FB - contention | 10,384 | 0.00 | 3 | 0 | 1.85
DFS lock handle | 3,517 | 0.00 | 3 | 1 | 0.63
control file parallel write | 1,946 | 0.00 | 3 | 1 | 0.35
gc current block 2-way | 4,165 | 0.00 | 2 | 0 | 0.74
library cache lock | 432 | 0.00 | 2 | 4 | 0.08
name-service call wait | 22 | 0.00 | 2 | 76 | 0.00
row cache lock | 3,894 | 0.00 | 2 | 0 | 0.70
gcs log flush sync | 1,259 | 42.02 | 2 | 1 | 0.22
os thread startup | 18 | 5.56 | 2 | 89 | 0.00
gc cr block 2-way | 3,671 | 0.00 | 2 | 0 | 0.66
gc current block busy | 113 | 0.00 | 1 | 12 | 0.02
SQL*Net message to client | 1,544,115 | 0.00 | 1 | 0 | 275.83
gc buffer busy | 15 | 6.67 | 1 | 70 | 0.00
gc cr disk read | 3,272 | 0.00 | 1 | 0 | 0.58
direct path write temp | 159 | 0.00 | 1 | 5 | 0.03
gc current grant busy | 898 | 0.00 | 1 | 1 | 0.16
log file switch completion | 29 | 0.00 | 1 | 17 | 0.01
CGS wait for IPC msg | 48,739 | 99.87 | 0 | 0 | 8.71
gc current grant 2-way | 1,142 | 0.00 | 0 | 0 | 0.20
kjbdrmcvtq lmon drm quiesce: ping completion | 9 | 0.00 | 0 | 19 | 0.00
enq: US - contention | 567 | 0.00 | 0 | 0 | 0.10
direct path read | 138 | 0.00 | 0 | 1 | 0.02
enq: WF - contention | 14 | 0.00 | 0 | 9 | 0.00
ksxr poll remote instances | 13,291 | 58.45 | 0 | 0 | 2.37
library cache pin | 211 | 0.00 | 0 | 1 | 0.04
ges global resource directory to be frozen | 9 | 100.00 | 0 | 10 | 0.00
wait for scn ack | 583 | 0.00 | 0 | 0 | 0.10
log file sequential read | 36 | 0.00 | 0 | 2 | 0.01
undo segment extension | 25,342 | 99.79 | 0 | 0 | 4.53
rdbms ipc reply | 279 | 0.00 | 0 | 0 | 0.05
ktfbtgex | 6 | 100.00 | 0 | 10 | 0.00
enq: HW - contention | 44 | 0.00 | 0 | 1 | 0.01
gc cr grant 2-way | 158 | 0.00 | 0 | 0 | 0.03
enq: TX - index contention | 1 | 0.00 | 0 | 34 | 0.00
enq: CF - contention | 64 | 0.00 | 0 | 1 | 0.01
PX Deq: Signal ACK | 37 | 21.62 | 0 | 1 | 0.01
latch free | 3 | 0.00 | 0 | 10 | 0.00
buffer busy waits | 625 | 0.16 | 0 | 0 | 0.11
KJC: Wait for msg sends to complete | 154 | 0.00 | 0 | 0 | 0.03
log buffer space | 11 | 0.00 | 0 | 2 | 0.00
enq: PS - contention | 46 | 0.00 | 0 | 1 | 0.01
enq: TM - contention | 70 | 0.00 | 0 | 0 | 0.01
IPC send completion sync | 40 | 100.00 | 0 | 0 | 0.01
PX Deq: reap credit | 1,544 | 99.81 | 0 | 0 | 0.28
log file single write | 36 | 0.00 | 0 | 0 | 0.01
enq: TT - contention | 46 | 0.00 | 0 | 0 | 0.01
enq: TD - KTF dump entries | 12 | 0.00 | 0 | 1 | 0.00
read by other session | 1 | 0.00 | 0 | 12 | 0.00
LGWR wait for redo copy | 540 | 0.00 | 0 | 0 | 0.10
PX Deq Credit: send blkd | 17 | 5.88 | 0 | 0 | 0.00
enq: TA - contention | 14 | 0.00 | 0 | 0 | 0.00
latch: ges resource hash list | 44 | 0.00 | 0 | 0 | 0.01
enq: PI - contention | 8 | 0.00 | 0 | 0 | 0.00
write complete waits | 1 | 0.00 | 0 | 2 | 0.00
enq: DR - contention | 3 | 0.00 | 0 | 0 | 0.00
enq: MW - contention | 3 | 0.00 | 0 | 0 | 0.00
enq: TS - contention | 3 | 0.00 | 0 | 0 | 0.00
PX qref latch | 150 | 100.00 | 0 | 0 | 0.03
enq: MD - contention | 2 | 0.00 | 0 | 0 | 0.00
latch: KCL gc element parent latch | 11 | 0.00 | 0 | 0 | 0.00
enq: JS - job run lock - synchronize | 1 | 0.00 | 0 | 1 | 0.00
SQL*Net more data to client | 16 | 0.00 | 0 | 0 | 0.00
latch: cache buffers lru chain | 1 | 0.00 | 0 | 0 | 0.00
enq: UL - contention | 1 | 0.00 | 0 | 0 | 0.00
gc current split | 1 | 0.00 | 0 | 0 | 0.00
enq: AF - task serialization | 1 | 0.00 | 0 | 0 | 0.00
latch: object queue header operation | 3 | 0.00 | 0 | 0 | 0.00
latch: cache buffers chains | 1 | 0.00 | 0 | 0 | 0.00
latch: enqueue hash chains | 2 | 0.00 | 0 | 0 | 0.00
SQL*Net message from client | 1,544,113 | 0.00 | 12,626 | 8 | 275.83
gcs remote message | 634,884 | 98.64 | 9,203 | 14 | 113.41
DIAG idle wait | 23,628 | 0.00 | 4,616 | 195 | 4.22
ges remote message | 149,591 | 93.45 | 4,612 | 31 | 26.72
Streams AQ: qmn slave idle wait | 167 | 0.00 | 4,611 | 27611 | 0.03
Streams AQ: qmn coordinator idle wait | 351 | 47.86 | 4,611 | 13137 | 0.06
Streams AQ: waiting for messages in the queue | 488 | 100.00 | 4,605 | 9436 | 0.09
virtual circuit status | 157 | 100.00 | 4,596 | 29272 | 0.03
PX Idle Wait | 1,072 | 97.11 | 2,581 | 2407 | 0.19
jobq slave wait | 145 | 97.93 | 420 | 2896 | 0.03
Streams AQ: waiting for time management or cleanup tasks | 1 | 100.00 | 270 | 269747 | 0.00
PX Deq: Parse Reply | 40 | 40.00 | 0 | 3 | 0.01
PX Deq: Execution Msg | 121 | 26.45 | 0 | 0 | 0.02
PX Deq: Join ACK | 38 | 42.11 | 0 | 1 | 0.01
PX Deq: Execute Reply | 34 | 32.35 | 0 | 0 | 0.01
PX Deq: Msg Fragment | 16 | 0.00 | 0 | 0 | 0.00
Streams AQ: RAC qmn coordinator idle wait | 351 | 100.00 | 0 | 0 | 0.06
class slave wait | 2 | 0.00 | 0 | 0 | 0.00
The db file scattered read wait event occurs when a session waits for multi-block I/O, typically caused by full table scans or index fast full scans. Segments with heavy read activity can be identified in the "Segments by Physical Reads" and "SQL ordered by Reads" sections (the names may differ in other report versions). In an OLTP application there should not be many full-scan operations; selective index access should be used instead.
The db file sequential read wait indicates sequential read I/O (usually single-block reads into a contiguous memory area). If this wait is severe, use the approach above to find the hot segments being read, then reduce I/O by partitioning large tables, or tune the execution plans (using stored outlines or by gathering statistics) to avoid the single-block reads that cause sequential read waits. In batch applications especially, db file sequential read hurts performance and should always be minimised.
The log file parallel write event occurs while waiting for LGWR to write redo records from the log buffer to the online redo log files. Although the writes may be issued in parallel, LGWR must wait for the last I/O to reach disk before the parallel write is complete, so the wait time depends on how long the OS takes to complete all requests. If this wait is significant, it can be reduced by moving the log files to faster disks or striping them across disks (to reduce contention).
Buffer busy waits occur when a session needs a block in the buffer cache but cannot access it. A buffer is "busy" for one of two reasons: 1) another session is reading the block into the buffer; 2) another session holds the requested buffer in an incompatible (exclusive) mode. The segments experiencing these waits can be found in the "Segments by Buffer Busy Waits" section; reverse-key indexes and partitioning of hot tables can reduce this wait.
The log file sync event occurs when a user session completes a transaction (COMMIT or ROLLBACK): the session asks LGWR to write all required redo from the log buffer to the log file and waits until LGWR confirms the write has safely reached disk. The remedies are the same as for log file parallel write.
Enqueue waits are locks that serialise access to local resources; they indicate waiting for a resource held in an incompatible (exclusive) mode by one or more other sessions. How to reduce them depends on the lock type involved. The main lock types behind enqueue waits are TX (transaction lock), TM (DML lock) and ST (space management lock).
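To locate the objects behind I/O and buffer busy waits, or the enqueue types behind enqueue waits, the segment- and enqueue-level views can be queried directly; a sketch using standard views (counters are cumulative since instance startup):

-- Segments with the most buffer busy waits
select *
  from (select owner, object_name, object_type, value
          from v$segment_statistics
         where statistic_name = 'buffer busy waits'
         order by value desc)
 where rownum <= 10;

-- Enqueue (lock) types with the most accumulated wait time
select eq_type, total_req#, total_wait#, cum_wait_time
  from v$enqueue_stat
 order by cum_wait_time desc;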
Event | Waits | %Time -outs | Total Wait Time (s) | Avg wait (ms) | Waits /txn
log file parallel write | 5,497 | 0.00 | 47 | 9 | 0.98
db file parallel write | 4,806 | 0.00 | 34 | 7 | 0.86
events in waitclass Other | 69,002 | 83.25 | 22 | 0 | 12.33
control file sequential read | 9,323 | 0.00 | 7 | 1 | 1.67
control file parallel write | 1,946 | 0.00 | 3 | 1 | 0.35
os thread startup | 18 | 5.56 | 2 | 89 | 0.00
direct path read | 138 | 0.00 | 0 | 1 | 0.02
db file sequential read | 21 | 0.00 | 0 | 5 | 0.00
direct path write | 138 | 0.00 | 0 | 0 | 0.02
log file sequential read | 36 | 0.00 | 0 | 2 | 0.01
gc cr block 2-way | 96 | 0.00 | 0 | 0 | 0.02
gc current block 2-way | 78 | 0.00 | 0 | 0 | 0.01
log buffer space | 11 | 0.00 | 0 | 2 | 0.00
row cache lock | 59 | 0.00 | 0 | 0 | 0.01
log file single write | 36 | 0.00 | 0 | 0 | 0.01
buffer busy waits | 151 | 0.66 | 0 | 0 | 0.03
gc current grant busy | 29 | 0.00 | 0 | 0 | 0.01
library cache lock | 4 | 0.00 | 0 | 1 | 0.00
enq: TM - contention | 10 | 0.00 | 0 | 0 | 0.00
gc current grant 2-way | 8 | 0.00 | 0 | 0 | 0.00
gc cr multi block request | 7 | 0.00 | 0 | 0 | 0.00
gc cr grant 2-way | 5 | 0.00 | 0 | 0 | 0.00
rdbms ipc message | 97,288 | 73.77 | 50,194 | 516 | 17.38
gcs remote message | 634,886 | 98.64 | 9,203 | 14 | 113.41
DIAG idle wait | 23,628 | 0.00 | 4,616 | 195 | 4.22
pmon timer | 1,621 | 100.00 | 4,615 | 2847 | 0.29
ges remote message | 149,591 | 93.45 | 4,612 | 31 | 26.72
Streams AQ: qmn slave idle wait | 167 | 0.00 | 4,611 | 27611 | 0.03
Streams AQ: qmn coordinator idle wait | 351 | 47.86 | 4,611 | 13137 | 0.06
smon timer | 277 | 6.50 | 4,531 | 16356 | 0.05
Streams AQ: waiting for time management or cleanup tasks | 1 | 100.00 | 270 | 269747 | 0.00
PX Deq: Parse Reply | 40 | 40.00 | 0 | 3 | 0.01
PX Deq: Join ACK | 38 | 42.11 | 0 | 1 | 0.01
PX Deq: Execute Reply | 34 | 32.35 | 0 | 0 | 0.01
Streams AQ: RAC qmn coordinator idle wait | 351 | 100.00 | 0 | 0 | 0.06
Statistic | Total
NUM_LCPUS | 0
NUM_VCPUS | 0
AVG_BUSY_TIME | 101,442
AVG_IDLE_TIME | 371,241
AVG_IOWAIT_TIME | 5,460
AVG_SYS_TIME | 25,795
AVG_USER_TIME | 75,510
BUSY_TIME | 812,644
IDLE_TIME | 2,971,077
IOWAIT_TIME | 44,794
SYS_TIME | 207,429
USER_TIME | 605,215
LOAD | 0
OS_CPU_WAIT_TIME | 854,100
RSRC_MGR_CPU_WAIT_TIME | 0
PHYSICAL_MEMORY_BYTES | 8,589,934,592
NUM_CPUS | 8
NUM_CPU_CORES | 4
These statistics come from V$OSSTAT; time values are in hundredths of a second.
NUM_LCPUS: number of logical CPUs (only populated on some platforms; 0 here).
NUM_VCPUS: number of virtual CPUs (only populated on some platforms; 0 here).
AVG_BUSY_TIME: average busy time per CPU.
AVG_IDLE_TIME: average idle time per CPU.
AVG_IOWAIT_TIME: average I/O wait time per CPU.
AVG_SYS_TIME: average kernel-mode (system) CPU time per CPU.
AVG_USER_TIME: average user-mode CPU time per CPU.
BUSY_TIME: total CPU busy time across all CPUs.
IDLE_TIME: total CPU idle time across all CPUs.
IOWAIT_TIME: total time spent waiting for I/O.
SYS_TIME: total kernel-mode CPU time.
USER_TIME: total user-mode CPU time.
LOAD: OS run-queue load at snapshot time.
OS_CPU_WAIT_TIME: time processes spent in the run queue waiting for a CPU.
RSRC_MGR_CPU_WAIT_TIME: time sessions waited for CPU because of Resource Manager throttling.
PHYSICAL_MEMORY_BYTES: physical memory on the host, in bytes.
NUM_CPUS: number of CPUs visible to the operating system.
NUM_CPU_CORES: number of physical CPU cores.
The total elapsed time (summed across all CPUs) can also be derived from these statistics as:
BUSY_TIME + IDLE_TIME + IOWAIT_TIME
or: SYS_TIME + USER_TIME + IDLE_TIME + IOWAIT_TIME
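As a rough check using this report's numbers (whether IOWAIT_TIME is already folded into IDLE_TIME depends on the platform, so treat the figure as approximate):

select round((812644 + 2971077) / 100 / 60 / 8, 1) as elapsed_minutes from dual;
-- 812,644 + 2,971,077 centiseconds spread over 8 CPUs is about 78.8 minutes,
-- which matches the 78.79-minute snapshot interval at the top of the report.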
Service Name | DB Time (s) | DB CPU (s) | Physical Reads | Logical Reads
ICCI | 608.10 | 496.60 | 315,849 | 16,550,972
SYS$USERS | 54.70 | 17.80 | 6,539 | 58,929
ICCIXDB | 0.00 | 0.00 | 0 | 0
SYS$BACKGROUND | 0.00 | 0.00 | 282 | 38,990
Service Name | User I/O Total Wts | User I/O Wt Time | Concurcy Total Wts | Concurcy Wt Time | Admin Total Wts | Admin Wt Time | Network Total Wts | Network Wt Time
ICCI | 59826 | 8640 | 4621 | 338 | 0 | 0 | 1564059 | 6552
SYS$USERS | 6567 | 3238 | 231 | 11 | 0 | 0 | 7323 | 3
SYS$BACKGROUND | 443 | 115 | 330 | 168 | 0 | 0 | 0 | 0
This part of the report lists, for each type of resource, the SQL statements that consumed the most of it, together with their share of the total for the reporting interval. This tells us where to aim our tuning. For example, on a system where CPU is the bottleneck, tuning the statements with the most buffer gets gives the biggest payoff; on a system where I/O waits are the top event, the statements with the most physical I/O should be the target.
A STATSPACK report does not include the full SQL text; it can be retrieved from the database using the Hash Value shown in the report:
select sql_text
from stats$sqltext
where hash_value = &hash_value
order by piece;
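Since this is an AWR report (the SQL sections carry a SQL Id rather than a Hash Value), the equivalent lookup goes against the AWR repository; a sketch using the standard dba_hist_sqltext view:

select sql_text
  from dba_hist_sqltext
 where sql_id = '&sql_id';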
Elapsed Time (s) | CPU Time (s) | Executions | Elap per Exec (s) | % Total DB Time | SQL Id | SQL Module | SQL Text
93 | 57 | 1 | 93.50 | 14.10 | | cuidmain@HPGICCI1 (TNS V1-V3) | insert into CUID select CUID_...
76 | 75 | 172,329 | 0.00 | 11.52 | | load_fnsact@HPGICCI1 (TNS V1-V3) | insert into ICCICCS values (:...
58 | 42 | 1 | 58.04 | 8.75 | | cumimain@HPGICCI1 (TNS V1-V3) | insert into CUMI select CUSV_...
51 | 42 | 1 | 50.93 | 7.68 | | cusmmain@HPGICCI1 (TNS V1-V3) | insert into CUSM select CUSM_...
38 | 36 | 166,069 | 0.00 | 5.67 | | | select c.name, u.name from co...
35 | 3 | 1 | 35.00 | 5.28 | | SQL*Plus | SELECT F.TABLESPACE_NAME, TO_...
23 | 23 | 172,329 | 0.00 | 3.46 | | load_fnsact@HPGICCI1 (TNS V1-V3) | insert into iccifnsact values...
15 | 11 | 5 | 2.98 | 2.25 | | | DECLARE job BINARY_INTEGER := ...
14 | 14 | 172,983 | 0.00 | 2.16 | | load_fnsact@HPGICCI1 (TNS V1-V3) | update ICCIFNSACT set BORM_AD...
13 | 13 | 172,337 | 0.00 | 2.00 | | load_oldnewact@HPGICCI1 (TNS V1-V3) | insert into OLDNEWACT values ...
13 | 13 | 166,051 | 0.00 | 1.89 | | icci_migact@HPGICCI1 (TNS V1-V3) | insert into ICCICCS values (:...
10 | 4 | 1 | 9.70 | 1.46 | | cuidmain@HPGICCI1 (TNS V1-V3) | select CUID_CUST_NO , CUID_ID_...
10 | 8 | 5 | 1.91 | 1.44 | | | INSERT INTO STATS$SGA_TARGET_A...
8 | 8 | 172,329 | 0.00 | 1.25 | | load_fnsact@HPGICCI1 (TNS V1-V3) | update ICCICCS set CCSMAXOVER...
8 | 8 | 172,983 | 0.00 | 1.16 | | load_fnsact@HPGICCI1 (TNS V1-V3) | select * from ICCIPRODCODE wh...
CPU Time (s) | Elapsed Time (s) | Executions | CPU per Exec (s) | % Total DB Time | SQL Id | SQL Module | SQL Text
75 | 76 | 172,329 | 0.00 | 11.52 | | load_fnsact@HPGICCI1 (TNS V1-V3) | insert into ICCICCS values (:...
57 | 93 | 1 | 57.31 | 14.10 | | cuidmain@HPGICCI1 (TNS V1-V3) | insert into CUID select CUID_...
42 | 51 | 1 | 42.43 | 7.68 | | cusmmain@HPGICCI1 (TNS V1-V3) | insert into CUSM select CUSM_...
42 | 58 | 1 | 42.01 | 8.75 | | cumimain@HPGICCI1 (TNS V1-V3) | insert into CUMI select CUSV_...
36 | 38 | 166,069 | 0.00 | 5.67 | | | select c.name, u.name from co...
23 | 23 | 172,329 | 0.00 | 3.46 | | load_fnsact@HPGICCI1 (TNS V1-V3) | insert into iccifnsact values...
14 | 14 | 172,983 | 0.00 |