Oracle® Enterprise Manager Oracle Database and Database-Related Metric Reference Manual 10g Release 2 (10.2) Part Number B25986-01
For each Oracle database metric, the following information is provided:
Description
Metric summary. The metric summary can include some or all of the following: target version, evaluation frequency, collection frequency, upload frequency, operator, default warning threshold, default critical threshold, consecutive number of occurrences preceding notification, and alert text.
Multiple Thresholds (where applicable)
Data source
User action
This metric category contains the metrics used to track alert log errors, for example, data block corruption, terminated sessions, and so on.
This metric reports the name of the trace file (if any) associated with the logged error. For all target versions, the collection frequency for this metric is every 15 minutes.
Data Source
$ORACLE_HOME/sysman/admin/scripts/alertlog.pl where $ORACLE_HOME refers to the home of the Oracle Management Agent.
This metric reports the name of the alert log file. For all target versions, the collection frequency for this metric is every 15 minutes.
Data Source
$ORACLE_HOME/sysman/admin/scripts/alertlog.pl where $ORACLE_HOME refers to the home of the Oracle Management Agent.
This metric signifies that the archiver of the database being monitored has been temporarily suspended since the last sample time.
If the database is running in ARCHIVELOG mode, an alert is displayed when archiver hung (ORA-00257) messages are written to the ALERT file. The ALERT file is a special trace file containing a chronological log of messages and errors.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-1 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | CONTAINS | Not Defined | ORA- | 1* | The archiver hung at time/line number: %timeLine%. |
* Once an alert is triggered for this metric, it must be manually cleared.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Time/Line Number" object.
If warning or critical threshold values are currently set for any "Time/Line Number" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Time/Line Number" object, use the Edit Thresholds page.
Data Source
$ORACLE_HOME/sysman/admin/scripts/alertlog.pl where $ORACLE_HOME refers to the home of the Oracle Management Agent.
User Action
Examine the ALERT log and archiver trace file for additional information. The most likely cause of this message is that the destination device is out of space to store the redo log file. Verify that the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving. Note: This event does not automatically clear because there is no automatic way of determining when the problem has been resolved. Therefore, you must manually clear the event once the problem is fixed.
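As a quick first check, the state of each archive destination can be inspected directly in the database. The following query is a minimal sketch (not part of the metric's own collection) and assumes access to the V$ARCHIVE_DEST view:

```sql
-- Sketch: show the state of each configured archive destination and the
-- last error, if any, reported by the archiver.
SELECT dest_id,
       destination,
       status,        -- for example VALID, ERROR, FULL, DEFERRED
       error
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';
```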
This metric signifies that the database being monitored has generated a corrupted block error to the ALERT file since the last sample time. The ALERT file is a special trace file containing a chronological log of messages and errors. An alert event is triggered when data block corrupted messages are written to the ALERT file.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-2 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | CONTAINS | Not Defined | ORA- | 1* | A data block was corrupted at time/line number: %timeLine%. |
* Once an alert is triggered for this metric, it must be manually cleared.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Time/Line Number" object.
If warning or critical threshold values are currently set for any "Time/Line Number" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Time/Line Number" object, use the Edit Thresholds page.
Data Source
$ORACLE_HOME/sysman/admin/scripts/alertlog.pl where $ORACLE_HOME refers to the home of the Oracle Management Agent.
User Action
Examine the ALERT log for additional information. Note: This event does not automatically clear because there is no automatic way of determining when the problem has been resolved. Therefore, you must manually clear the event once the problem is fixed.
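If block corruption is suspected, one way to enumerate the affected blocks is the V$DATABASE_BLOCK_CORRUPTION view, which RMAN populates during a backup or validate operation. This is an illustrative follow-up step, not the metric's data source:

```sql
-- Sketch: list corrupt blocks recorded by the most recent RMAN
-- backup or validate run.
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;
```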
This metric signifies that the database being monitored has generated errors to the ALERT log file since the last sample time. The ALERT log file is a special trace file containing a chronological log of messages and errors. An alert event is triggered when Oracle Exception (ORA-006xx) messages are written to the ALERT log file. A warning is displayed when other ORA messages are written to the ALERT log file.
Deadlock detected (ORA-00060), archiver hung (ORA-00257), and data block corrupted (ORA-01578) messages are sent out as separate metrics.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-3 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | MATCH | ORA-0*(600?\|7445\|4[0-9][0-9][0-9])[^0-9] | Not Defined | 1* | ORA-error stack (%errCodes%) logged in %alertLogName%. |
* Once an alert is triggered for this metric, it must be manually cleared.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Time/Line Number" object.
If warning or critical threshold values are currently set for any "Time/Line Number" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Time/Line Number" object, use the Edit Thresholds page.
Data Source
$ORACLE_HOME/sysman/admin/scripts/alertlog.pl where $ORACLE_HOME refers to the home of the Oracle Management Agent.
User Action
Examine the ALERT log for additional information. Note: This event does not automatically clear because there is no automatic way of determining when the problem has been resolved. Therefore, you must manually clear the event once the problem is fixed.
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-4 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | CONTAINS | Not Defined | ORA- | 1* | Media failure was detected at time/line number: %timeLine%. |
* Once an alert is triggered for this metric, it must be manually cleared.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Time/Line Number" object.
If warning or critical threshold values are currently set for any "Time/Line Number" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Time/Line Number" object, use the Edit Thresholds page.
This metric signifies that a session terminated unexpectedly since the last sample time. The ALERT file is a special trace file containing a chronological log of messages and errors. An alert is displayed when session unexpectedly terminated (ORA-00603) messages are written to the ALERT file.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-5 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | CONTAINS | ORA- | Not Defined | 1* | A session was terminated at time/line number: %timeLine%. |
* Once an alert is triggered for this metric, it must be manually cleared.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Time/Line Number" object.
If warning or critical threshold values are currently set for any "Time/Line Number" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Time/Line Number" object, use the Edit Thresholds page.
Data Source
$ORACLE_HOME/sysman/admin/scripts/alertlog.pl where $ORACLE_HOME refers to the home of the Oracle Management Agent.
User Action
Examine the ALERT log and the session trace file for additional information. Note: This event does not automatically clear since there is no automatic way of determining when the problem has been resolved. Hence, you need to manually clear the event once the problem is fixed.
This metric category places all the types of alert log errors into four categories: Archiver Hung, Data Block Corruption, Session Terminated, and Generic. The metrics in this category represent whether the last scan of the alert log identified any of the aforementioned categories of error and, if so, how many.
This metric reflects the number of Archiver Hung alert log errors witnessed the last time Enterprise Manager scanned the Alert Log.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-6 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | Archiver hung errors have been found in the alert log. |
Data Source
Alert Log metric
User Action
Examine the Alert Log.
This metric reflects the number of Data Block Corruption alert log errors witnessed the last time Enterprise Manager scanned the Alert Log.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-7 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | Data block corruption errors have been found in the alert log. |
Data Source
Alert Log metric
User Action
Examine the Alert Log.
This metric reflects the number of Generic alert log errors witnessed the last time Enterprise Manager scanned the Alert Log.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-8 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | %value% distinct types of ORA- errors have been found in the alert log. |
Data Source
Alert Log metric
User Action
Examine the Alert Log.
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-9 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | Media failure errors have been found in the alert log. |
This metric reflects the number of Session Terminated alert log errors witnessed the last time Enterprise Manager scanned the Alert Log.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-10 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | Session terminations have been found in the alert log. |
Data Source
Alert Log metric
User Action
Examine the Alert Log.
This metric category contains the metrics representing the utilization of the various archive areas.
If the database is running in ARCHIVELOG mode, this metric checks the redo log destination device for available space and returns the percentage of used space on that destination.
The Archive Full (%) metric returns the percentage of space used on the archive area destination. If the space used is more than the threshold value given in the threshold arguments, then a warning or critical alert is generated.
If the database is not running in ARCHIVELOG mode or all archive destinations are standby databases for Oracle8i, this metric fails to register.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-11 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | > | 80 | Not Defined | 1 | %value%%% of archive area %archDir% is used. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Archive Area Destination" object.
If warning or critical threshold values are currently set for any "Archive Area Destination" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Archive Area Destination" object, use the Edit Thresholds page.
Data Source
If no quota is set for the archive area, the percentage is calculated using the UNIX df -k command.
If a quota is set:
archive area used (%) = (total area used / total archive area) * 100
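For comparison only: when the archive destination is the flash recovery area rather than a plain file system, a similar used-space percentage can be derived from V$RECOVERY_FILE_DEST (10g). This query is an illustration, not the data source the metric itself uses:

```sql
-- Sketch: used-space percentage of the flash recovery area.
SELECT name,
       space_limit,
       space_used,
       ROUND(space_used / space_limit * 100, 2) AS pct_used
  FROM v$recovery_file_dest;
```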
User Action
Verify the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving.
For Oracle8, verify that the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters are set up properly for archiving.
For Oracle8i, there are two methods you can use to specify archive destinations. The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 5) to specify from one to five different destinations for archival. Each numerically-suffixed parameter uniquely identifies an individual destination, for example, LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and so on. The second method, which allows you to specify a maximum of two locations, is to use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST parameter to determine an optional secondary location.
If the LOG_ARCHIVE_DEST initialization parameter is set up correctly and this metric triggers, then free up more space in the destination specified by the archive destination parameters.
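The following is a minimal sketch of the two approaches described above, using hypothetical directory paths; it assumes an Oracle9i or later instance started with a server parameter file (on Oracle8i, set the parameters in the initialization parameter file instead):

```sql
-- Sketch: LOG_ARCHIVE_DEST_n form (paths are placeholders).
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/oradata/arch1' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/oradata/arch2' SCOPE=BOTH;

-- Alternative: primary/duplex pair (do not combine with LOG_ARCHIVE_DEST_n).
-- ALTER SYSTEM SET log_archive_dest        = '/u01/oradata/arch1' SCOPE=SPFILE;
-- ALTER SYSTEM SET log_archive_duplex_dest = '/u02/oradata/arch2' SCOPE=SPFILE;
```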
This metric represents the total space used (in KB) on the device containing the archive destination directory. For all target versions, the collection frequency for this metric is every 15 minutes.
Data Source
If no quota is set for archive area, this is calculated through the UNIX df -k
command.
total area used = quota_used * db_block_size (in KB)
User Action
Verify the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving.
For Oracle8, verify that the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters are set up properly for archiving.
For Oracle8i, there are two methods you can use to specify archive destinations. The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 5) to specify from one to five different destinations for archival. Each numerically-suffixed parameter uniquely identifies an individual destination, for example, LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and so on. The second method, which allows you to specify a maximum of two locations, is to use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST parameter to determine an optional secondary location.
If the LOG_ARCHIVE_DEST initialization parameter is set up correctly and this metric triggers, then free up more space in the destination specified by the archive destination parameters.
When running a database in ARCHIVELOG mode, the archiving of the online redo log is enabled. Filled groups of the online redo log are archived, by default, to the destination specified by the LOG_ARCHIVE_DEST initialization parameter. If this destination device becomes full, the database operation is temporarily suspended until disk space is available.
If the database is running in ARCHIVELOG mode, this metric checks for available redo log destination devices.
If the database is not running in ARCHIVELOG mode, or all archive destinations are standby databases for Oracle8i, this metric fails to register.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-12 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 1 | Archive area %archDir% has %value% free KB remaining. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Archive Area Destination" object.
If warning or critical threshold values are currently set for any "Archive Area Destination" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Archive Area Destination" object, use the Edit Thresholds page.
Data Source
If the database is in NOARCHIVELOG mode, then nothing is collected.
If the database is in ARCHIVELOG mode, log_archive_destination from v$parameter is queried to obtain the current list of archivelog destinations. The results are obtained by directly checking the disk usage (df -kl).
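A rough interactive equivalent of that parameter lookup is shown below; the agent's actual collection script may differ, and the parameter names are the standard LOG_ARCHIVE_DEST family:

```sql
-- Sketch: list the archive destination parameters that are currently set.
SELECT name, value
  FROM v$parameter
 WHERE name LIKE 'log\_archive\_dest%' ESCAPE '\'
   AND value IS NOT NULL;
```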
User Action
Verify the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving.
For Oracle8, verify that the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters are set up properly for archiving.
For Oracle8i, there are two methods you can use to specify archive destinations. The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 5) to specify from one to five different destinations for archival. Each numerically-suffixed parameter uniquely identifies an individual destination, for example, LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and so on. The second method, which allows you to specify a maximum of two locations, is to use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST parameter to determine an optional secondary location.
If the LOG_ARCHIVE_DEST initialization parameter is set up correctly and this metric triggers, then free up more space in the destination specified by the archive destination parameters.
This metric represents the total space (in KB) on the device containing the archive destination directory. For all target versions, the collection frequency for this metric is every 15 minutes.
Data Source
If no quota is set for the archive area, this is calculated using the UNIX df -k command.
If a quota is set:
total archive area = quota_size * db_block_size (in KB)
User Action
Oracle recommends configuring multiple archivelog destinations across different disks. When at least one archivelog destination becomes full, Oracle recommends the following:
If tape is being used, back up archivelogs to tape and delete the archivelogs.
If tape is not being used, back up the database and remove obsolete files. This also removes archivelogs that are no longer needed based on the database retention policy.
If archivelog destination quota_size is being used, raise the quota_size.
Data Guard metrics
The Data Guard Status metric checks the status of each database in the broker configuration and triggers a warning or critical alert if necessary.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-13 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 5 Minutes | After Every Sample | CONTAINS | Warning | Error | 1 | The Data Guard status of %dg_name% is %value%. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
The broker computes the highest applied SCN and uses its value to find the last continuous log that was successfully archived to the standby database. Redo data in all subsequent log files are counted as logs not applied. If the primary database goes down at this point, the redo data from these log files can be applied on the standby database. If there is a gap in the log files received on the standby database, any log files received after the gap cannot be applied.
For example, if log files 1, 2, 3, 6, 7, and 9 are received on the standby database and log apply services is currently applying log 1, log apply services can continue to apply up to log 3. Log apply services cannot apply any more log files because log 4 is missing. Even though log files 6, 7, and 9 are received, they cannot be applied and they will not be counted as data not applied.
If all the archived log files on the standby database are continuous, and standby redo logs are used, the standby redo logs are also counted as data not applied, unless real-time apply is turned on and log apply services is already working on the standby redo log files.
If the standby redo logs are multithreaded, the broker computes the highest applied SCN for every thread and totals the numbers. If there are multiple incarnations and the standby database is in a different incarnation from the primary database, each incarnation is computed separately and the results are then totaled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-14 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 5 Minutes | After Every Sample | > | 1 | 3 | 1 | Standby database %dg_name% has not applied the last %value% received logs. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
The broker computes the highest applied SCN and uses its value to find the last continuous log that was archived to the standby database. The size of redo data in all subsequent log files is counted as data not applied. If the primary database goes down at this point, redo from these log files can be applied on the standby database. If there is a gap in the log files received on the standby database, any log files received after the gap cannot be applied.
For example, if log files 1, 2, 3, 6, 7, and 9 are received on the standby database and log apply services is currently applying log 1, log apply services can continue to apply up to log 3. Log apply services cannot apply any more log files because log 4 is missing. Even though log files 6, 7, and 9 are received, they cannot be applied and they will not be counted as data not applied. In this case, the total size of log files 1, 2, and 3 is the size of Data Not Applied.
If all the archived log files on the standby database are continuous, and standby redo log files are used, the standby redo log files are also counted as data not applied, unless real-time apply is turned on and log apply services is already working on the standby redo log files. The size of an archived log file is its file size. However, the size of a standby redo log is the size of the actual redo in the log and not the file size.
If the standby redo log files are multithreaded, the broker computes the highest applied SCN for every thread and totals the numbers. If there are multiple incarnations and the standby database is in a different incarnation from the primary database, each incarnation is computed separately and the results are then totaled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-15 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Standby database %dg_name% has not applied the last %value% megabytes of data received. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
The broker computes the highest applied SCN and uses its value to find the last continuous log file that was successfully archived to the standby database. Redo data in all subsequent log files, including the current online redo log file, are counted as log files for potential data loss and will be unrecoverable if the primary database goes down at this point.
For example, if log files 1, 2, 3, 6, 7, and 9 are received on the standby database, and if log 10 is the current online log file, and if log apply services are currently applying log 1, the last continuous log after the highest applied SCN is log 3. All log files after log 3, that is log files 4 through 10, are counted as data not received. If the primary database goes down at this point, all redo data in log files 4 through 10 are lost on the standby database.
If the primary database is multithreaded (in a RAC database), the broker computes the highest applied SCN for every thread and totals the numbers. If the primary database has multiple incarnations (for example, due to a flashback operation) and the standby database is in a different incarnation from the primary database, the computation is done on each incarnation and the results are then totaled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-16 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 5 Minutes | After Every Sample | > | 1 | 3 | 1 | Standby database %dg_name% has not received the last %value% logs from the primary database. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
The broker computes the highest applied SCN and uses its value to find the last continuous log file that was successfully archived to the standby database. The size of redo data in all subsequent log files, including the current online redo log file, is counted as data for potential data loss and will be unrecoverable if the primary database goes down at this point. The size of an archived log file is its file size, and the size of the online redo log file is the size of the actual redo in the online log file, not the file size of the online redo log file.
For example, if log files 1, 2, 3, 6, 7, and 9 are received on the standby database, and if log 10 is the current online log file, and if log apply services is currently applying log 1, the last continuous log after the highest applied SCN is log 3. All log files after log 3, that is log files 4 through 10, are counted as data not received and the total size of redo data in these log files is the size of Data Not Received.
If the primary database is multithreaded (in a RAC database), the broker computes the highest applied SCN for every thread and totals the numbers. If the primary database has multiple incarnations (for example, due to a flashback operation) and the standby database is in a different incarnation from the primary database, the computation is done on each incarnation and the results are then totaled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-17 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Standby database %dg_name% has not received the last %value% megabytes of data from the primary database. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
Data Guard metrics
Checks the status of each database in the broker configuration.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-18 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 9.2.0.x | Every 5 Minutes | After Every Sample | CONTAINS | Warning | Error | 1 | The Data Guard status of %dg_name% is %value%. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
The broker computes the highest applied SCN and uses its value to find the last continuous log that was successfully archived to the standby database. Redo data in all subsequent log files are counted as logs not applied. If the primary database goes down at this point, the redo data from these log files can be applied on the standby database. If there is a gap in the log files received on the standby database, any log files received after the gap cannot be applied.
For example, if log files 1, 2, 3, 6, 7, and 9 are received on the standby database and log apply services is currently applying log 1, log apply services can continue to apply up to log 3. Log apply services cannot apply any more log files because log 4 is missing. Even though log files 6, 7, and 9 are received, they cannot be applied and they will not be counted as data not applied.
If all the archived log files on the standby database are continuous, and standby redo logs are used, the standby redo logs are also counted as data not applied, unless real-time apply is turned on and log apply services is already working on the standby redo log files.
If the standby redo logs are multithreaded, the broker computes the highest applied SCN for every thread and totals the numbers. If there are multiple incarnations and the standby database is in a different incarnation from the primary database, each incarnation is computed separately and the results are then totaled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-19 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 9.2.0.x | Every 5 Minutes | After Every Sample | > | 1 | 3 | 1 | Standby database %dg_name% has not applied the last %value% received logs. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
The broker computes the highest applied SCN and uses its value to find the last continuous log file that was successfully archived to the standby database. Redo data in all subsequent log files, including the current online redo log file, are counted as log files for potential data loss and will be unrecoverable if the primary database goes down at this point.
For example, if log files 1, 2, 3, 6, 7, and 9 are received on the standby database, and if log 10 is the current online log file, and if log apply services are currently applying log 1, the last continuous log after the highest applied SCN is log 3. All log files after log 3, that is log files 4 through 10, are counted as data not received. If the primary database goes down at this point, all redo data in log files 4 through 10 are lost on the standby database.
If the primary database is multithreaded (in a RAC database), the broker computes the highest applied SCN for every thread and totals the numbers. If the primary database has multiple incarnations (for example, due to a flashback operation) and the standby database is in a different incarnation from the primary database, the computation is done on each incarnation and the results are then totaled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-20 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 9.2.0.x | Every 5 Minutes | After Every Sample | > | 1 | 3 | 1 | Standby database %dg_name% has not received the last %value% logs from the primary database. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
When Fast-Start Failover (FSFO) is enabled, this metric will generate a critical alert on the new primary database (old standby) if an FSFO occurs. The FSFO SCN must be initialized to a value before the metric will alert. This usually takes one collection interval. Once an FSFO occurs and the new primary is ready, the FSFO alert fires. It then clears after one collection interval. A critical alert is configured by default.
Both primary and standby must be configured with sysdba monitoring access.
When Fast-Start Failover (FSFO) is enabled, this metric will generate a critical alert on the new primary database (old standby) if an FSFO occurs. The FSFO SCN must be initialized to a value before the metric will alert. This usually takes one collection interval. Once an FSFO occurs and the new primary is ready, the FSFO alert fires. It then clears after one collection interval. A critical alert is configured by default.
Both primary and standby must be configured with sysdba monitoring access.
Shows the time when a fast-start failover occurred.
The value is 0 if FSFO has not occurred, 1 if FSFO has occurred.
When Fast-Start Failover (FSFO) is enabled, this metric will generate a critical alert on the new primary database (old standby) if an FSFO occurs. The FSFO SCN must be initialized to a value before the metric will alert. This usually takes one collection interval. Once an FSFO occurs and the new primary is ready, the FSFO alert fires. It then clears after one collection interval. A critical alert is configured by default.
Both primary and standby must be configured with sysdba monitoring access.
Any value indicates the metric is ready to trigger.
When Fast-Start Failover (FSFO) is enabled, this metric will generate a critical alert on the new primary database (old standby) if an FSFO occurs. The FSFO SCN must be initialized to a value before the metric will alert. This usually takes one collection interval. Once an FSFO occurs and the new primary is ready, the FSFO alert fires. It then clears after one collection interval. A critical alert is configured by default.
Both primary and standby must be configured with sysdba monitoring access.
A time stamp appears if FSFO occurred.
Data Guard performance metrics
Displays (in seconds) how far the standby is behind the primary.
Data Source
v$dataguard_stats('apply lag')
This metric shows the approximate number of seconds required to fail over to this standby database. It accounts for the startup time (if necessary) plus the remaining time required to apply all the available redo on the standby. If a bounce is not required, it is only the remaining apply time.
Data Source
v$dataguard_stats ('estimated startup time','apply finish time','standby has been open')
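The statistics named above can also be inspected interactively on the standby database. A minimal sketch, assuming a 10g standby and a monitoring user with the necessary privileges:

```sql
-- Sketch: read the apply lag and failover-time inputs directly.
SELECT name, value, unit, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('apply lag',
                'estimated startup time',
                'apply finish time',
                'standby has been open');
```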
The Data Guard metrics check the status, data not received, and data not applied for the databases in the Data Guard configuration.
For information about Data Guard metrics, see the "Managing Data Guard Metrics" section of the "Data Guard Manager Scenarios" chapter in Oracle Data Guard Broker.
Use the Data Guard Status metric to check the status of each database in the Data Guard configuration.
By default, critical and warning threshold values are set for this metric column. Alerts are generated when threshold values are reached. You can edit the value for a threshold as required.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Name" object.
If warning or critical threshold values are currently set for any "Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Name" object, use the Edit Thresholds page.
User Action
Perform the following steps:
Check the Edit Properties General page for the primary and standby databases for detailed information.
Examine the database alert logs and the Data Guard broker logs for additional information.
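In addition to the broker and alert logs, the database records recent Data Guard messages in V$DATAGUARD_STATUS. The following query is an illustrative starting point:

```sql
-- Sketch: recent Data Guard errors recorded by the database.
SELECT timestamp, severity, error_code, message
  FROM v$dataguard_status
 WHERE severity IN ('Error', 'Fatal')
 ORDER BY timestamp;
```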
This metric category contains the database file metrics.
This metric represents the average file read time, measured in hundredths of a second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-21 Metric Summary Table
| Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 10 Minutes | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "File Name" object.
If warning or critical threshold values are currently set for any "File Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "File Name" object, use the Edit Thresholds page.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the average file write time, measured in hundredths of a second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-22 Metric Summary Table
| Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every 10 Minutes | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "File Name" object.
If warning or critical threshold values are currently set for any "File Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "File Name" object, use the Edit Thresholds page.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the metrics that represent the health of database jobs registered through the DBMS_JOB interface.
The Oracle Server job queue is a database table that stores information about local jobs, such as the PL/SQL call to execute for a job and when to run the job. Database replication is also managed through the Oracle job queue mechanism, which uses jobs to push deferred transactions to remote master sites, to purge applied transactions from the deferred transaction queue, or to refresh snapshot refresh groups.
A job can be broken in two ways: Oracle has failed to successfully execute the job after sixteen attempts, or the job has been explicitly marked as broken by using the procedure DBMS_JOB.BROKEN.
This metric checks for broken DBMS jobs. A critical alert is generated if the number of broken jobs exceeds the value specified by the threshold argument.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-23 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 5 Minutes | Not Uploaded | > | 0 | Not Defined | 1 | %value% job(s) are broken. |
Data Source
SELECT COUNT(*) FROM dba_jobs WHERE broken <> 'N'
User Action
Check the ALERT log and trace files for error information. Correct the problem that is preventing the job from running. Force immediate re-execution of the job by calling DBMS_JOB.RUN.
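A minimal PL/SQL sketch of that action follows; the job number 123 is a placeholder for the broken job's number from DBA_JOBS:

```sql
-- Sketch: clear the broken flag and re-execute the job immediately.
BEGIN
  DBMS_JOB.BROKEN(job => 123, broken => FALSE);  -- un-mark the job
  DBMS_JOB.RUN(job => 123);                      -- run it now in this session
  COMMIT;
END;
/
```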
The Oracle Server job queue is a database table that stores information about local jobs, such as the PL/SQL call to execute for a job and when to run the job. Database replication is also managed through the Oracle job queue mechanism, which uses jobs to push deferred transactions to remote master sites, to purge applied transactions from the deferred transaction queue, or to refresh snapshot refresh groups.
If a job returns an error while Oracle is attempting to execute it, the job fails. Oracle repeatedly tries to execute the job, doubling the interval between attempts. If the job fails sixteen times, Oracle automatically marks the job as broken and no longer tries to execute it.
This metric checks for failed DBMS jobs. An alert is generated if the number of failed jobs exceeds the value specified by the threshold argument.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-24 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| All Versions | Every 5 Minutes | Not Uploaded | > | 0 | Not Defined | 1 | %value% job(s) have failed. |
Data Source
SELECT COUNT(*) FROM dba_jobs WHERE NVL(failures, 0) <> 0
User Action
Check the ALERT log and trace files for error information. Correct the problem that is preventing the job from running.
This metric category contains the metrics that represent the percentage of resource limitations at which the Oracle Server is operating.
This metric represents the current number of logons.
Note: Unlike most metrics, which accept thresholds as real numbers, this metric can only accept an integer as a threshold.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-25 Metric Summary Table
| Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 3 | Generated By Database Server |
Data Source
logons current
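The statistic can be read directly from V$SYSSTAT; this is a sketch of the underlying data, not the agent's collection query:

```sql
-- Sketch: current number of logons as reported by the instance.
SELECT name, value
  FROM v$sysstat
 WHERE name = 'logons current';
```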
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the current number of opened cursors.
Note: Unlike most metrics, which accept thresholds as real numbers, this metric can only accept an integer as a threshold.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-26 Metric Summary Table
| Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|---|
| 10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | 1200 | Not Defined | 3 | Generated By Database Server |
Data Source
opened cursors current
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
The DML_LOCKS initialization parameter specifies the maximum number of DML locks. The purpose of DML locks is to guarantee the integrity of data being accessed concurrently by multiple users. DML locks prevent destructive interference of simultaneous conflicting DML and/or DDL operations.
This metric checks for the utilization of the lock resource against the values (percentage) specified by the threshold arguments. If the percentage of all active DML locks to the limit set in the DML_LOCKS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.
If DML_LOCKS is 0, this test fails to register. A value of 0 indicates that enqueues are disabled.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-27 Metric Summary Table
| Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
|---|---|---|---|---|---|---|---|
| 8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | > | 80 | Not Defined | 3 | %target% has reached %value%%% of the lock limit. |
Data Source
SELECT resource_name name, 100*DECODE(initial_allocation, ' UNLIMITED', 0, current_utilization / initial_allocation) usage FROM v$resource_limit WHERE LTRIM(limit_value) != '0' AND LTRIM(initial_allocation) != '0' AND resource_name = 'dml_locks'
User Action
Increase the DML_LOCKS instance parameter by 10%.
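A hedged sketch of that change follows. DML_LOCKS is a static parameter, so the new value (220 here is only a placeholder) is written to the server parameter file and takes effect after an instance restart:

```sql
-- Sketch: check current utilization, then raise DML_LOCKS.
SELECT resource_name, current_utilization, max_utilization, initial_allocation
  FROM v$resource_limit
 WHERE resource_name = 'dml_locks';

ALTER SYSTEM SET dml_locks = 220 SCOPE=SPFILE;  -- restart required
```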
The PROCESSES initialization parameter specifies the maximum number of operating system user processes that can connect to the database at the same time. This number also includes the background processes used by the instance.
This metric checks for the utilization of the process resource against the values (percentage) specified by the threshold arguments. If the percentage of all current processes to the limit set in the PROCESSES initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-28 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 3 | %target% has reached %value%%% of the process limit. |
Table 4-29 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 3 | Generated By Database Server |
Data Source
SELECT resource_name name, 100*DECODE(initial_allocation, ' UNLIMITED', 0, current_utilization / initial_allocation) usage FROM v$resource_limit WHERE LTRIM(limit_value) != '0' AND LTRIM(initial_allocation) != '0' AND resource_name = 'processes'
User Action
Verify that the current PROCESSES instance parameter setting has not exceeded the operating system-dependent maximum. Increase the number of processes to be at least 6 + the maximum number of concurrent users expected to log in to the instance.
The SESSIONS initialization parameter specifies the maximum number of concurrent connections that the database will allow.
This metric checks for the utilization of the session resource against the values (percentage) specified by the threshold arguments. If the percentage of the number of sessions, including background processes, to the limit set in the SESSIONS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-30 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | > | 90 | 97 | 3 | %target% has reached %value%%% of the session limit. |
Table 4-31 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | 90 | 97 | 3 | Generated By Database Server |
Data Source
SELECT resource_name name, 100*DECODE(initial_allocation, ' UNLIMITED', 0, current_utilization / initial_allocation) usage FROM v$resource_limit WHERE LTRIM(limit_value) != '0' AND LTRIM(initial_allocation) != '0' AND resource_name = 'sessions'
User Action
Increase the SESSIONS instance parameter. For XA environments, confirm that SESSIONS is at least 2.73 * PROCESSES. For shared server environments, confirm that SESSIONS is at least 1.1 * maximum number of connections.
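For example, the sizing rules above can be checked as follows; the PROCESSES value of 150 and the resulting SESSIONS value of 410 are illustrative only (2.73 * 150 = 409.5, rounded up):

-- Current settings
SELECT name, value FROM v$parameter WHERE name IN ('sessions', 'processes');

-- XA example: with PROCESSES = 150, SESSIONS should be at least 2.73 * 150 = 410.
-- SESSIONS is a static parameter, so the change applies after the next restart.
ALTER SYSTEM SET sessions = 410 SCOPE = SPFILE;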
The LICENSE_MAX_SESSIONS initialization parameter specifies the maximum number of concurrent user sessions allowed.
This metric checks whether the number of users logged on is reaching the license limit. If the percentage of the number of concurrent user sessions to the limit set in the LICENSE_MAX_SESSIONS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated. If LICENSE_MAX_SESSIONS is not explicitly set to a value, the test does not trigger.
Note: This metric is most useful when session licensing is enabled. Refer to the Oracle Server Reference Manual for more information on LICENSE_MAX_SESSIONS and LICENSE_MAX_USERS.
Note: Unlike most metrics, which accept thresholds as real numbers, this metric can only accept an integer as a threshold.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-32 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 3 | %target% has reached %value%%% of the user limit. |
Table 4-33 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 3 | Generated By Database Server |
Data Source
SELECT 'user' name, 100*DECODE(session_max, 0, 0, sessions_current/session_max) usage FROM v$license
User Action
This typically indicates that the license limit for the database has been reached. The user will need to acquire additional licenses, then increase LICENSE_MAX_SESSIONS to reflect the new value.
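A minimal sketch of that check and change follows; the new limit of 300 is illustrative and assumes the additional licenses have already been acquired:

-- Current and high-water session counts used for license tracking
SELECT sessions_current, sessions_highwater FROM v$license;

-- LICENSE_MAX_SESSIONS can be raised dynamically
ALTER SYSTEM SET license_max_sessions = 300;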
This metric category contains the database services metrics.
This metric represents the average CPU time, in microseconds, for calls to a particular database service.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-34 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Service Name" object.
If warning or critical threshold values are currently set for any "Service Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Service Name" object, use the Edit Thresholds page.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the average elapsed time, in microseconds, for calls to a particular database service.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-35 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Service Name" object.
If warning or critical threshold values are currently set for any "Service Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Service Name" object, use the Edit Thresholds page.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the metrics associated with this distributed database's deferred transactions.
Oracle uses deferred transactions to propagate data-level changes asynchronously among master sites in an advanced replication system as well as from an updatable snapshot to its master table.
This metric checks for the number of deferred transactions. An alert is generated if the number of deferred transactions exceeds the value specified by the threshold argument.
Data Source
SELECT count(*) FROM sys.deftran
User Action
When the advanced replication facility pushes a deferred transaction to a remote site, it uses a distributed transaction to ensure that the transaction has been properly committed at the remote site before the transaction is removed from the queue at the local site. If transactions are not being pushed to a given remote site, verify that the destination for the transaction was correctly specified. If you specify a destination database when calling DBMS_DEFER_SYS.SCHEDULE_EXECUTION using the DBLINK parameter or DBMS_DEFER_SYS.EXECUTE using the DESTINATION parameter, make sure the full database link is provided.
Incorrect view definitions can lead to erroneous deferred transaction behavior. Verify that the DEFCALLDEST and DEFTRANDEST views use the definitions from CATREPC.SQL, not the ones from CATDEFER.SQL.
Oracle uses deferred transactions to propagate data-level changes asynchronously among master sites in an advanced replication system, as well as from an updatable snapshot to its master table. If a transaction is not successfully propagated to the remote site, Oracle rolls back the transaction and logs it in the SYS.DEFERROR view in the remote destination database.
This metric checks for the number of transactions in SYS.DEFERROR view and raises an alert if it exceeds the value specified by the threshold argument.
Data Source
SELECT count(*) FROM sys.deferror
User Action
An error in applying a deferred transaction may be the result of a database problem, such as a lack of available space in the table to be updated, or may be the result of an unresolved insert, update, or delete conflict. The SYS.DEFERROR view provides the ID of the transaction that could not be applied. Use this ID to locate the queued calls associated with the transaction. These calls are stored in the SYS.DEFCALL view. You can use the procedures in the DBMS_DEFER_QUERY package to determine the arguments to the procedures listed in the SYS.DEFCALL view.
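Assuming the standard replication catalog views are installed, the failed transactions and their queued calls can be inspected with queries along these lines (the substitution variable simply stands in for an ID returned by the first query):

-- Deferred transactions that could not be applied at this destination
SELECT * FROM sys.deferror;

-- Queued calls belonging to one failed transaction
SELECT * FROM sys.defcall WHERE deferred_tran_id = '&deferred_tran_id';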
The metrics in this metric category check for the percentage of used space of the dump destination devices.
This metric reports the directory used as the dump destination for this metric index.
Each server and background process can write to an associated trace file to log messages and errors.
Background processes and the ALERT file are written to the destination specified by BACKGROUND_DUMP_DEST. Trace files for server processes are written to the destination specified by USER_DUMP_DEST.
For all target versions, the collection frequency for this metric is every 15 minutes.
Data Source
data from v$parameter
User Action
Verify that the destinations specified by the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up properly.
If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this metric triggers, then free up more space in the destination specified by the dump destination parameters.
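The destinations themselves can be confirmed directly from V$PARAMETER, for example:

-- Current dump destination settings
SELECT name, value
  FROM v$parameter
 WHERE name IN ('background_dump_dest', 'user_dump_dest', 'core_dump_dest');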
This metric returns the percentage of used space of the dump area destinations.
If the space used is more than the threshold value given in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-36 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 15 Minutes | After Every Sample | > | 95 | Not Defined | 1 | %value%%% of %dumpType% dump area is used. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Type of Dump Area" object.
If warning or critical threshold values are currently set for any "Type of Dump Area" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Type of Dump Area" object, use the Edit Thresholds page.
Data Source
Calculated using the UNIX df -k command.
Critical threshold: Percentage of free space threshold for critical alert.
Warning threshold: Percentage of free space threshold for warning alert.
User Action
Verify that the destinations specified by the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up properly.
If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this metric triggers, then free up more space in the destination specified by the dump destination parameters.
This metric represents the total space used (in KB) on the device containing the dump destination directory.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Data Source
Calculated using the UNIX df -k command.
User Action
Verify that the destinations specified by the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up properly.
If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this metric triggers, then free up more space in the destination specified by the dump destination parameters.
Each server and background process can write to an associated trace file in order to log messages and errors. Background processes and the ALERT file are written to the destination specified by BACKGROUND_DUMP_DEST.
Trace files for server processes are written to the destination specified by USER_DUMP_DEST.
This metric checks for available free space on these dump destination devices. If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-37 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 15 Minutes | After Every Sample | < | 2000 | Not Defined | 1 | %value% free KB remains in %dumpType% dump area. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Type of Dump Area" object.
If warning or critical threshold values are currently set for any "Type of Dump Area" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Type of Dump Area" object, use the Edit Thresholds page.
Data Source
Calculated using the UNIX df -k command.
User Action
Verify that the destinations specified by the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up properly.
If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this metric triggers, then free up more space in the destination specified by the dump destination parameters.
This metric represents the total space (in KB) available on the device containing the dump destination directory.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Data Source
Calculated using the UNIX df -k command.
User Action
Verify that the destinations specified by the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up properly.
If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this metric triggers, then free up more space in the destination specified by the dump destination parameters.
This metric category contains the metrics that have traditionally been considered to represent the efficiency of some resource. Interpreting the wait interface is generally accepted as a much more accurate approach to measuring efficiency, and is recommended as an alternative to these hit ratios.
This metric represents the data block buffer cache efficiency, as measured by the percentage of times the data block requested by the query is in memory.
Effective use of the buffer cache can greatly reduce the I/O load on the database. If the buffer cache is too small, frequently accessed data will be flushed from the buffer cache too quickly which forces the information to be re-fetched from disk. Since disk access is much slower than memory access, application performance will suffer. In addition, the extra burden imposed on the I/O subsystem could introduce a bottleneck at one or more devices that would further degrade performance.
This test checks the percentage of buffer requests that were already in the buffer cache. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-38 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Buffer cache hit ratio is %value%%%. |
Table 4-39 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
((DeltaLogicalGets - (DeltaPhysicalReads - DeltaPhysicalReadsDirect)) / DeltaLogicalGets) * 100 where:
DeltaLogicalGets: difference in 'select value from v$sysstat where name='session logical reads'' between sample end and start
DeltaPhysicalReads: difference in 'select value from v$sysstat where name='physical reads'' between sample end and start
DeltaPhysicalReadsDirect: difference in 'select value from v$sysstat where name='physical reads direct'' between sample end and start (Oracle8i)
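For reference, the same ratio can be computed directly from V$SYSSTAT with a query such as the one below. Note that this returns the cumulative ratio since instance startup, whereas the metric is based on the difference between two samples:

SELECT 100 * (1 - (pr.value - prd.value) / lr.value) AS buffer_cache_hit_pct
  FROM v$sysstat lr, v$sysstat pr, v$sysstat prd
 WHERE lr.name  = 'session logical reads'
   AND pr.name  = 'physical reads'
   AND prd.name = 'physical reads direct';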
User Action
A low buffer cache hit ratio means that the server must often go to disk to retrieve the buffers required to satisfy a query. The queries that perform the most physical reads lower the numerical value of this statistic. Typically queries that perform full table scans force large amounts of buffers into the cache, aging out other buffers that may be required by other queries later. The Top Sessions page sorted by Physical Reads will show the sessions performing the most reads and through further drilldown their associated queries can be identified. Similarly, the Top SQL page sorted by Physical Reads shows which SQL statements are performing the most physical reads. The statements performing the most I/O should be looked at for tuning.
The difference between the two is that the Top Sessions chart shows the sessions that are responsible for the physical reads at any given moment. The Top SQL view shows all SQL that is still in the cache. The top statement may not be executing currently, and thus not responsible for the current poor buffer cache hit ratio.
If the queries seem to be well tuned, the size of the buffer cache also determines how often buffers need to be fetched from disk. The DB_BLOCK_BUFFERS initialization parameter determines the number of database buffers available in the buffer cache. It is one of the primary parameters that contribute to the total memory requirements of the SGA on the instance. The DB_BLOCK_BUFFERS parameter, together with the DB_BLOCK_SIZE parameter, controls the total size of the buffer cache. Since DB_BLOCK_SIZE can only be specified when the database is first created, normally the size of the buffer cache size is controlled using the DB_BLOCK_BUFFERS parameter.
Consider increasing the DB_BLOCK_BUFFERS initialization parameter to increase the size of the buffer cache. This increase allows the Oracle Server to keep more information in memory, thus reducing the number of I/O operations required to do an equivalent amount of work using the current cache size.
This metric represents the CPU usage per second by the database processes, measured in hundredths of a second. A change in the metric value may occur because of a change in either workload mix or workload throughput being performed by the database. Although there is no 'correct' value for this metric, it can be used to detect a change in the operation of a system. For example, an increase in Database CPU usage from 500 to 750 indicates that the database is using 50% more CPU. ('No correct value' means that there is no single value that can be applied to any database. The value is a characteristic of the system and the applications running on the system.)
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-40 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. ADDM can help to identify database operations that are consuming CPU. ADDM reports are available from a number of locations including the Database Home page and Advisor Central.
This metric represents the average CPU usage per transaction expressed as a number of seconds of CPU time. A change in this metric can occur either because of changing workload on the system, such as the addition of a new module, or because of a change in the way that the workload is performed in the database, such as changes in the plan for a SQL statement. The threshold for this metric should be set based on the actual values observed on your system.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-41 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. ADDM will provide information about which operations are using the CPU resources.
This metric represents the percentage of soft parses satisfied within the session cursor cache.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-42 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
session cursor cache hits / (parse count (total) - parse count (hard))
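The same ratio can be approximated from V$SYSSTAT; the query below uses cumulative statistics since instance startup rather than the deltas the metric works from:

SELECT 100 * h.value / (t.value - p.value) AS session_cursor_cache_hit_pct
  FROM v$sysstat h, v$sysstat t, v$sysstat p
 WHERE h.name = 'session cursor cache hits'
   AND t.name = 'parse count (total)'
   AND p.name = 'parse count (hard)';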
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents dictionary cache efficiency as measured by the percentage of requests against the dictionary data that were already in memory. It is important to determine whether the misses on the data dictionary are actually affecting the performance of the Oracle Server. The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache, and the other cache structures that are specific to a particular instance configuration.
Misses on the data dictionary cache are to be expected in some cases. Upon instance startup, the data dictionary cache contains no data, so any SQL statement issued is likely to result in cache misses. As more data is read into the cache, the likelihood of cache misses should decrease. Eventually the database should reach a steady state in which the most frequently used dictionary data is in the cache. At this point, very few cache misses should occur. To tune the cache, examine its activity only after your application has been running.
This test checks the percentage of requests against the data dictionary that were found in the Shared Pool. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-43 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Data dictionary hit ratio is %value%%%. |
Table 4-44 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
100 * (Gets - Misses) / Gets where:
Misses: select sum(getmisses) from v$rowcache
Gets: select sum(gets) from v$rowcache
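Expressed as SQL, the conventional hit-ratio calculation over those columns looks like this (cumulative since instance startup):

SELECT 100 * (SUM(gets) - SUM(getmisses)) / SUM(gets) AS dictionary_cache_hit_pct
  FROM v$rowcache;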
User Action
If the percentage of gets is below 85% to 90%, consider increasing SHARED_POOL_SIZE to decrease the frequency with which dictionary data is flushed from the shared pool to make room for new data. To increase the memory available to the cache, increase the value of the initialization parameter SHARED_POOL_SIZE.
This metric represents the percentage of database call time that is spent on the CPU. Although there is no 'correct' value for this metric, it can be used to detect a change in the operation of a system, for example, a drop in Database CPU time from 50% to 25%. ('No correct value' means that there is no single value that can be applied to any database. The value is a characteristic of the system and the applications running on the system.)
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-45 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
User Action
Investigate the change in CPU usage by using the Automatic Database Diagnostic Monitor (ADDM). ADDM reports are available from a number of locations, including the Database Home page and Advisor Central. Examine the report for increased time spent in wait events.
This metric represents the library cache efficiency, as measured by the percentage of times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are already in memory.
The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.
The shared pool mechanism can greatly reduce system resource consumption in at least three ways: Parse time is avoided if the SQL statement is already in the shared pool.
Application memory overhead is reduced, since all applications use the same pool of shared SQL statements and dictionary resources.
I/O resources are saved, since dictionary elements that are in the shared pool do not require access.
If the shared pool is too small, users will consume additional resources to complete a database operation. For library cache access, the overhead is primarily the additional CPU resources required to re-parse the SQL statement.
This test checks the percentage of parse requests for which the cursor was already in the cache. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-46 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Library cache hit ratio is %value%%%. |
Table 4-47 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(DeltaPinHits / DeltaPins) * 100 where:
DeltaPinHits: difference in 'select sum(pinhits) from v$librarycache' between sample end and start
DeltaPins: difference in 'select sum(pins) from v$librarycache' between sample end and start
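The cumulative equivalent of this calculation can be obtained directly from V$LIBRARYCACHE, for example:

SELECT 100 * SUM(pinhits) / SUM(pins) AS library_cache_hit_pct
  FROM v$librarycache;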
User Action
The Top Sessions page sorted by Hard Parses lists the sessions incurring the most hard parses. Hard parses occur when the server parses a query and cannot find an exact match for the query in the library cache. You can avoid hard parses by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.
By showing you which sessions are incurring the most hard parses, this page can identify the application or programs that are the best candidates for SQL rewrites.
Also, examine SQL statements that can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.
To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
This metric represents the percentage of parse requests where the cursor is not in the cache.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-48 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
1 - pinhits / pins
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-49 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Number of times per second parallel execution was requested and the degree of parallelism was reduced by 25% or more because of insufficient parallel execution servers.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-50 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(parallel operations downgraded 25 to 50 percent + parallel operations downgraded 50 to 75 percent + parallel operations downgraded 75 to 99 percent) / time
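The underlying counters can be listed from V$SYSSTAT; the LIKE pattern below is deliberately broad so that it matches all of the downgrade statistics regardless of their exact wording:

SELECT name, value
  FROM v$sysstat
 WHERE LOWER(name) LIKE 'parallel operations%downgraded%';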
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Number of times per second parallel execution was requested and the degree of parallelism was reduced by 50% or more because of insufficient parallel execution servers.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-51 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(parallel operations downgraded 50 to 75 percent + parallel operations downgraded 75 to 99 percent) / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Number of times per second parallel execution was requested and the degree of parallelism was reduced by 75% or more because of insufficient parallel execution servers.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-52 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(parallel operations downgraded 75 to 99 percent) / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Number of times per second parallel execution was requested but execution was serial because of insufficient parallel execution servers.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-53 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
parallel operations downgraded to serial / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Number of times per transaction parallel execution was requested but execution was serial because of insufficient parallel execution servers.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-54 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Not Defined |
Data Source
parallel operations downgraded to serial / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of bytes processed in the PGA versus the total number of bytes processed plus extra bytes read/written in extra passes.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-55 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Redo log entries contain a record of changes that have been made to the database block buffers. The log writer (LGWR) process writes redo log entries from the log buffer to a redo log file. The log buffer should be sized so that space is available in the log buffer for new entries, even when access to the redo log is heavy. When the log buffer is undersized, user processes will be delayed as they wait for LGWR to free space in the redo log buffer.
The redo log buffer efficiency, as measured by the hit ratio, records the percentage of times users did not have to wait for the log writer to free space in the redo log buffer.
This metric monitors the redo log buffer hit ratio (percentage of success) against the values specified by the threshold arguments. If the hit ratio falls below the specified values for the required number of occurrences, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-56 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Redo log allocation hit ratio is %value%%%. |
Table 4-57 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
100 * (redo_entries_delta - redo_space_requests_delta) /redo_entries_delta where:
redo_entries_delta = difference between "SELECT value FROM v$sysstat WHERE name = 'redo entries'" at the beginning and ending of the interval
redo_space_requests_delta = difference between "SELECT value FROM v$sysstat WHERE name = 'redo log space requests'" at the beginning and ending of the interval
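As a reference, the cumulative form of this ratio can be computed as follows (the metric itself uses the change between two samples):

SELECT 100 * (re.value - rs.value) / re.value AS redo_nowait_pct
  FROM v$sysstat re, v$sysstat rs
 WHERE re.name = 'redo entries'
   AND rs.name = 'redo log space requests';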
User Action
The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.
Consider increasing the LOG_BUFFER initialization parameter in order to increase the size of the redo log buffer. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
Note: For Oracle Management Agent release 9i, this metric has been obsoleted. It is recommended that you use the Redo NoWait Ratio metric. This metric is kept for backward compatibility with older versions of the Management Agent.
This metric represents the time spent in database operations per transaction. It is derived from the total time that user calls spend in the database (DB time) and the number of commits and rollbacks performed. A change in this value indicates that either the workload has changed or that the database's ability to process the workload has changed because of either resource constraints or contention.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-58 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page. Changes in the response time per transaction will appear as increased time spent in the database, either on CPU or in wait events and ADDM will report the sources of contention for both hardware and software resources.
This metric represents the row cache miss ratio, expressed as a percentage.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-59 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the sort efficiency as measured by the percentage of times sorts were performed in memory as opposed to going to disk.
For best performance, most sorts should occur in memory because sorts to disks are less efficient. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.
This test checks the percentage of sorts performed in memory rather than to disk. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-60 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | %value%%% of sorts are performed in memory. |
Table 4-61 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(DeltaMemorySorts / (DeltaDiskSorts + DeltaMemorySorts)) * 100 where:
DeltaMemorySorts: difference in 'select value from v$sysstat where name='sorts (memory)'' between sample end and start
DeltaDiskSorts: difference in 'select value from v$sysstat where name='sorts (disk)'' between sample end and start
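The cumulative equivalent of this calculation is, for example:

SELECT 100 * mem.value / (mem.value + dsk.value) AS in_memory_sort_pct
  FROM v$sysstat mem, v$sysstat dsk
 WHERE mem.name = 'sorts (memory)'
   AND dsk.name = 'sorts (disk)';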
User Action
The sessions that are performing the most sorts should be identified so that the SQL they are executing can be examined. The sort areas for the database may be sized correctly and the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions page sorted by Disk Sorts.
Further drilldown into the session performing the most disk sorts with the Current SQL page shows you the SQL statement responsible for the disk sorts.
The Top SQL page sorted by Sorts provides a mechanism to quickly display the SQL statements in the cache, presented in sorted order by their number of sort operations. This is an alternative to viewing a sort of current sessions. It allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.
If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to maintain sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.
The metric in this metric category checks for the number of failed logins on the target database. This check is performed every ten minutes and returns the number of failed logins for that ten-minute interval. This metric will only work for databases where the audit_trail initialization parameter is set to DB or XML and the session is being audited.
This metric checks for the number of failed logins on the target database. This check is performed every ten minutes and returns the number of failed logins for that ten-minute interval. This metric will only work for databases where the audit_trail initialization parameter is set to DB or XML and the session is being audited.
If the failed login count crosses the values specified in the threshold arguments, then a warning or critical alert is generated. Because it is important to know every time a significant number of failed logins occurs on a system, this metric fires a new alert for any ten-minute interval in which the thresholds are crossed. The user can manually clear these alerts; they will not automatically clear after the next collection.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-62 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 10 Minutes | After Every Sample | >= | 10 | 15 | 1* | There have been %value% Failed Login Attempts |
* Once an alert is triggered for this metric, it must be manually cleared.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Time" object.
If warning or critical threshold values are currently set for any "Time" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Time" object, use the Edit Thresholds page.
Data Source
The database stores login information in different views, based on the audit_trail setting. The database views used are:
DB or DB_EXTENDED: DBA_AUDIT_SESSION
XML (10g Release 2 only): DBA_COMMON_AUDIT_TRAIL
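Assuming audit_trail is set to DB and session auditing is enabled, a query along the following lines returns the failed logins recorded during the last ten minutes (a nonzero RETURNCODE indicates a failed login attempt):

SELECT username, timestamp, returncode
  FROM dba_audit_session
 WHERE returncode <> 0
   AND timestamp > SYSDATE - 10/1440;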
This metric category contains the metrics representing flash recovery.
This metric returns the Flash Recovery Area Location.
This metric is available for 10gR1 or higher, is collected every 5 minutes, and is not evaluated against thresholds (not alertable).
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 5 Minutes |
Data Source
SELECT value FROM v$parameter WHERE name='db_recovery_file_dest';
User Action
Not available since not alertable.
This metric returns whether or not flashback logging is enabled - YES or NO.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 5 Minutes |
Data Source
SELECT flashback_on FROM v$database;
User Action
Not available since not alertable.
This metric returns the log mode of the database - ARCHIVELOG or NOARCHIVELOG.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 5 Minutes |
Data Source
SELECT log_mode FROM v$database;
User Action
Not available since not alertable.
This metric represents the oldest point-in-time to which you can flashback your database.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 5 Minutes |
Data Source
SELECT to_char(oldest_flashback_time, 'YYYY-MM-DD HH24:MI:SS') FROM v$flashback_database_log;
User Action
Not applicable since not alertable.
This metric represents the percentage of space usable in the flash recovery area. The space usable is composed of the space that is free in addition to the space that is reclaimable.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 5 Minutes |
Data Source
SELECT (100 - sum(percent_space_used)) + sum(percent_space_reclaimable) FROM v$flash_recovery_area_usage;
User Action
Not applicable since not alertable.
This metric category contains the metrics associated with global cache statistics.
This metric represents the average convert time, measured in hundredths of a second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-63 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every 3 Samples | > | 0.3 | 0.6 | 1 | Global cache converts time is %value% cs. |
Data Source
global cache convert time * 10 / global cache converts
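Because the formula above is built from V$SYSSTAT statistics, a cumulative value since instance startup can be computed directly (an illustrative query only; the metric itself is based on deltas between collection samples):
SELECT (t.value * 10) / NULLIF(c.value, 0) avg_convert_time_cs FROM v$sysstat t, v$sysstat c WHERE t.name = 'global cache convert time' AND c.name = 'global cache converts';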
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the average time, measured in hundredths of a second, that a CR block was received.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-64 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every 3 Samples | > | 0.5 | 1 | 1 | Global cache CR Block request time is %value% cs. |
Table 4-65 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every 3 Samples | > | 0.5 | 1 | 1 | Generated By Database Server |
Data Source
global cache CR block receive time * 10 / global cache CR blocks received
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the average time, measured in hundredths of a second, to get a current block.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-66 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every 3 Samples | > | 0.5 | 1 | 1 | Global cache Current Block request time is %value% cs. |
Table 4-67 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every 3 Samples | > | 0.5 | 1 | 1 | Generated By Database Server |
Data Source
global cache current block send time * 10 / global cache current blocks served
This metric represents the average get time, measured in hundredths of a second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-68 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every 3 Samples | > | 0.3 | 0.6 | 1 | Global cache gets time is %value% cs. |
Data Source
global cache get time * 10 / global cache gets
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of blocks that encountered a corruption or checksum failure during interconnect transfer over the user-defined observation period.
Note: Unlike most metrics, which accept thresholds as real numbers, this metric can only accept an integer as a threshold.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-69 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every 3 Samples | > | 0 | 0 | 1* | Total global cache blocks corrupt is %value%. |
Table 4-70 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every 3 Samples | > | 0 | 0 | 1* | Generated By Database Server |
* Once an alert is triggered for this metric, it must be manually cleared.
Data Source
global cache blocks corrupted
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of global cache blocks lost over the user-defined observation period.
Note: Unlike most metrics, which accept thresholds as real numbers, this metric can only accept an integer as a threshold.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-71 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every 3 Samples | > | 1 | 3 | 1* | Total global cache block lost is %value%. |
Table 4-72 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every 3 Samples | > | 1 | 3 | 1* | Generated By Database Server |
* Once an alert is triggered for this metric, it must be manually cleared.
Data Source
global cache blocks lost
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the health check metrics.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-73 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x | Every 15 Hours | Not Uploaded | = | 0 | Not Defined | 1 | The database is in the following maintenance states: %text%. |
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-74 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x | Every 15 Hours | Not Uploaded | = | 0 | Not Defined | 1 | The database has been started and is in mounted state. |
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Hours |
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Hours |
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-75 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x | Every 15 Hours | Not Uploaded | = | Not Defined | 0 | 1 | The instance is shutdown due to: %text%. |
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-76 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x | Every 15 Hours | Not Uploaded | = | 0 | Not Defined | 1 | The database is not available due to the following conditions: %text%. |
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-77 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x | Every 15 Hours | Not Uploaded | = | 0 | Not Defined | 1 | The instance has been started in no-mount state. |
The following is a list of the Idle Events.
ARCH random i/o
ARCH sequential i/o
KXFX: execution message dequeue - Slaves
LGWR random i/o
LGWR sequential i/o
LGWR wait for redo copy
Null event
PL/SQL lock timer
PX Deq Credit: need buffer
PX Deq: Execute Reply
PX Deq: Execution Msg
PX Deq: Index Merge Close
PX Deq: Index Merge Execute
PX Deq: Index Merge Reply
PX Deq: Join ACK
PX Deq: Msg Fragment
PX Deq: Par Recov Change Vector
PX Deq: Par Recov Execute
PX Deq: Par Recov Reply
PX Deq: Parse Reply
PX Deq: Table Q Normal
PX Deq: Table Q Sample
PX Deq: Txn Recovery Reply
PX Deq: Txn Recovery Start
PX Deque wait
PX Idle Wait
Queue Monitor Shutdown Wait
Queue Monitor Slave Wait
Queue Monitor Wait
RFS random i/o
RFS sequential i/o
RFS write
SQL*Net message from client
SQL*Net message from dblink
STREAMS apply coord waiting for slave message
STREAMS apply coord waiting for some work to finish
STREAMS apply slave idle wait
STREAMS capture process filter callback wait for ruleset
STREAMS fetch slave waiting for txns
WMON goes to sleep
async disk IO
client message
control file parallel write
control file sequential read
control file single write
db file single write
db file parallel write
dispatcher timer
gcs log flush sync
gcs remote message
ges reconfiguration to start
ges remote message
io done
jobq slave wait
lock manager wait for remote message
log file parallel write
log file sequential read
log file single write
parallel dequeue wait
parallel recovery coordinator waits for cleanup of slaves
parallel query dequeue
parallel query idle wait - Slaves
pipe get
pmon timer
queue messages
rdbms ipc message
recovery read
single-task message
slave wait
smon timer
statement suspended, wait error to be cleared
unread message
virtual circuit
virtual circuit status
wait for activate message
wait for transaction
wait for unread message on broadcast channel
wait for unread message on multiple broadcast channels
wakeup event for builder
wakeup event for preparer
wakeup event for reader
wakeup time manager
Metrics in this category collect information about the network interfaces used by cluster database instances for internode communication.
Cluster database instances should use private interconnects for internode communication. This metric monitors whether the network interface used by the cluster instance is a private one. If the network interface is known to be public, a critical alert is generated. If the network interface type is unknown, a warning alert is generated.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Interface Name" object.
If warning or critical threshold values are currently set for any "Interface Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Interface Name" object, use the Edit Thresholds page.
Data Source
V$CLUSTER_INTERCONNECTS
V$CONFIGURED_INTERCONNECTS
User Action
Use oifcfg in the CRS home to correctly configure the private interfaces in OCR.
Metrics in this category monitor the internode data transfer rate of cluster database instances.
This metric collects the internode communication traffic of a cluster database instance. This is an estimation using the following formula:
(gc cr blocks received/sec + gc current blocks received/sec + gc cr blocks served/sec + gc current blocks served/sec) * db_block_size + (messages sent directly/sec + messages sent indirectly/sec + messages received/sec) * 200 bytes
The critical and warning thresholds of this metric are not set by default. Users can set them according to the speed of their cluster interconnects.
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Instance Name" object.
If warning or critical threshold values are currently set for any "Instance Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Instance Name" object, use the Edit Thresholds page.
Data Source
V$SYSSTAT
V$DLM_MISC
V$PARAMETER
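For reference, the cumulative building blocks of this estimate can be read from the views above (an illustrative query only; the statistic names are assumed from the formula, and the metric itself is computed from per-second deltas between samples, not cumulative totals):
SELECT (SELECT SUM(value) FROM v$sysstat WHERE name IN ('gc cr blocks received', 'gc current blocks received', 'gc cr blocks served', 'gc current blocks served')) * (SELECT value FROM v$parameter WHERE name = 'db_block_size') + (SELECT SUM(value) FROM v$dlm_misc WHERE name IN ('messages sent directly', 'messages sent indirectly', 'messages received')) * 200 total_interconnect_bytes FROM dual;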
This metric category contains the metrics associated with invalid objects.
This metric represents the total invalid object count.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-78 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 24 Hours | Not Uploaded | > | Not Defined | Not Defined | 1 | %value% object(s) are invalid in the database. |
This metric category contains the metrics that represent the number of invalid objects in each schema.
This metric represents the invalid object count by owner.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-79 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 24 Hours | Not Uploaded | > | 2 | Not Defined | 1 | %value% object(s) are invalid in the %owner% schema. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Invalid Object Owner" object.
If warning or critical threshold values are currently set for any "Invalid Object Owner" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Invalid Object Owner" object, use the Edit Thresholds page.
Data Source
For each metric index:
SELECT count(1)
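The query above is truncated in this reference. As an illustrative sketch only (not necessarily the exact query used by the Management Agent), the per-owner invalid object count can be derived from DBA_OBJECTS:
SELECT owner, count(*) invalid_count FROM dba_objects WHERE status = 'INVALID' GROUP BY owner;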
User Action
View the status of the database objects in the schema identified by the Invalid Object Owner metric. Recompile objects as necessary.
This metric category contains the recovery metrics.
This metric represents the count of corrupt data blocks.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-80 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | Number of corrupt data blocks is %value%. |
Data Source
SELECT count(unique(file#)) FROM v$database_block_corruption;
User Action
Perform a database recovery.
This metric represents the count of missing media files.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-81 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 15 Minutes | After Every Sample | > | 0 | Not Defined | 1 | Number of missing media files is %value%. |
Data Source
SELECT count(file#) FROM v$datafile_header WHERE recover ='YES' OR error is not null;
User Action
You should perform a database recovery.
This metric category contains the recovery area metrics.
This metric is evaluated by the server periodically every 15 minutes or during a file creation, whichever occurs first. It is also printed in the alert log. The Critical Threshold is set for less than 3% and the Warning Threshold is set for less than 15%. It is not user customizable. The user is alerted the first time the alert occurs and the alert is not cleared until the available space rises above 15%.
This metric represents the recovery area free space as a percentage.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Minutes |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the metrics that represent the responsiveness of the Oracle Server, with respect to a client.
This metric represents the state of the database.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-82 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 5 Minutes | After Every Sample | CONTAINS | MOUNTED | Not Defined | 1 | The database status is %value%. |
This metric checks whether a new connection can be established to a database. If the maximum number of users is exceeded or the listener is down, this test is triggered.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-83 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 5 Minutes | After Every Sample | = | Not Defined | 0 | 1 | Failed to connect to database instance: %oraerr%. |
Data Source
Perl returns 1 when a connection can be made to the database (using Management Agent monitoring connection details), 0 otherwise.
User Action
Check the status of the listener to make sure it is running on the node where the event was triggered. If the listener is running, check to see if the number of users is at the session limit. Note: The choice of user credentials for the Probe metric should be considered. If the preferred user has the RESTRICTED SESSION privilege, the user will be able to connect to a database even if the LICENSE_MAX_SESSIONS limit is reached.
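As an illustrative check (not part of the metric collection), current session usage relative to the configured limit can be reviewed with:
SELECT sessions_current, sessions_highwater, sessions_max FROM v$license;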
This metric represents the amount of time the agent takes to make a connection to the database, measured in milliseconds.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-84 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 5 Minutes | After Every Sample | > | 1000 | Not Defined | 6 | User logon time is %value% msecs. |
Data Source
Number of milliseconds (as measured in the Perl script) to connect to the database.
User Action
No user action is necessary.
Oracle uses the Automatic Segment Advisor job to detect segment issues regularly within maintenance windows. It determines whether the segments have unused space that can be released. The Number of recommendations is the number of segments that have Reclaimable Space. The recommendations come from all runs of the automatic segment advisor job and any user scheduled segment advisor jobs.
Oracle uses the Automatic Segment Advisor job to detect segment issues regularly within maintenance windows. It determines whether the segments have unused space that can be released. The Number of recommendations is the number of segments that have Reclaimable Space. The recommendations come from all runs of the automatic segment advisor job and any user scheduled segment advisor jobs.
User Action
Oracle recommends shrinking or reorganizing these segments to release unused space.
This metric category contains the metrics that represent the number of resumable sessions that are suspended due to some correctable error.
This metric represents the session suspended by data object limitation.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the session suspended by quota limitation.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the session suspended by rollback segment limitation.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the metrics that represent the percentage of the various pools in the SGA that are being wasted.
This metric represents the percentage of the Java Pool that is currently marked as free.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-85 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | %value%%% of the Java pool is free. |
10.1.0.x | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | %value%%% of the Java pool is free. |
Data Source
((Free/Total)*100) where:
Free: select sum(decode(name,'free memory',bytes)) from v$sgastat where pool = 'java pool'
Total: select sum(bytes) from v$sgastat where pool = 'java pool'
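The Free/Total computation above can be combined into a single illustrative query (the same form applies to the large pool and shared pool metrics that follow by changing the pool name):
SELECT ROUND(100 * SUM(DECODE(name, 'free memory', bytes, 0)) / SUM(bytes), 2) pct_free FROM v$sgastat WHERE pool = 'java pool';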
User Action
If this pool size is too small, the database JVM (Java Virtual Machine) may not have sufficient memory to satisfy future calls, leading potentially to unexpected database request failures.
This metric represents the percentage of the Large Pool that is currently marked as free.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-86 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | %value%%% of the large pool is free. |
10.1.0.x | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | %value%%% of the large pool is free. |
Data Source
((Free/Total)*100) where:
Free: select sum(decode(name,'free memory',bytes)) from v$sgastat where pool = 'large pool'
Total: select sum(bytes) from v$sgastat where pool = 'large pool'
User Action
Consider enlarging the large pool or utilizing it more sparingly. This reduces the possibility of large memory areas competing with the library cache and dictionary cache for available memory in the shared pool.
This metric represents the percentage of the Shared Pool that is currently marked as free.
This test checks the percentage of Shared Pool that is currently free. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-87 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | %value%%% of the shared pool is free. |
Table 4-88 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 15 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
((Free/Total)*100) where:
free: select sum(decode(name,'free memory',bytes)) from v$sgastat where pool = 'shared pool'
total: select sum(bytes) from v$sgastat where pool = 'shared pool'
User Action
If the percentage of Free Memory in the Shared Pool rises above 50%, too much memory has been allocated to the shared pool. This extra memory could be better utilized by other applications on the machine. In this case the size of the Shared Pool should be decreased. This can be accomplished by modifying the shared_pool_size initialization parameter.
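For example, assuming the instance uses an spfile and the shared pool is manually sized (that is, automatic shared memory management is not managing it), the pool could be reduced with a command of the following form, where 256M is only a placeholder value:
ALTER SYSTEM SET shared_pool_size = 256M SCOPE = BOTH;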
This metric category contains the snapshot too old metrics.
This metric represents the snapshot too old because of the rollback segment limit.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the snapshot too old because of the tablespace limit.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the metrics used to approximate the responsiveness of SQL.
Baseline SQL Response Time
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
Current SQL response time
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
SQL Response Time is the average elapsed time per execution of a representative set of SQL statements, relative to a baseline. It is expressed as a percentage.
This metric is unavailable in versions 8.1.7 and earlier.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-89 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 5 Minutes | After Every Sample | > | 500 | Not Defined | 4 | SQL response time is %value%%% of baseline. |
Data Source
PL/SQL packaged procedure mgmt_response.get_metric_curs
User Action
If the SQL Response Time is less than 100%, then SQL statements are taking less time to execute when compared to the baseline. Response Time greater than 100% indicates that the database is not performing well when compared to the baseline.
SQL Response Time is a percentage of the baseline, not a simple percentage. So, for example, 100% of baseline means the SQL Response Time is the same as the baseline. 200% of baseline means the SQL Response Time is two times slower than the baseline. 50% of baseline means SQL Response Time is two times faster than baseline. A warning threshold of 200% indicates that the database is two times slower than the baseline, while a critical threshold of 500% indicates the database is 5 times slower than the baseline.
Representative statements are selected when two V$SQL snapshots are taken. All calculations are based on the deltas between these two snapshots. First, the median elapsed_time/execution for all statements that were executed in the time interval between the two snapshots is calculated. Then all statements that have an elapsed_time/execution greater than the median are taken, and the top 25 most frequently executed statements are displayed.
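The selection logic is implemented inside the mgmt_response package, but the median step can be illustrated with a standalone query against V$SQL on Oracle Database 10g or later, where the MEDIAN aggregate is available (cumulative values rather than snapshot deltas, so this is only an approximation):
SELECT MEDIAN(elapsed_time / executions) / 1000000 median_seconds_per_exec FROM v$sql WHERE executions > 0;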
Pre-requisites for Monitoring SQL Response Time
Some tables and a PL/SQL package must be installed on the monitored database. This can be done by going to the database targets page and pressing the Configure button for your database. If a database has not been configured, the message "Not configured" will be displayed for SQL Response Time.
Configuring the Baseline
The baseline is configured on demand, automatically. The first time the agent calls the stored procedure to get the value of the metric, a snapshot of V$SQL is taken. The second time, another snapshot is taken. Then the representative statements are picked and stored in a table. The next time the agent requests the value of the metric, we are able to calculate and return the relative SQL response time.
Because of baseline configuration, there will be a delay between the time the database is configured and the value of the metric is displayed. During this period, the message "Not available" will be displayed for SQL Response Time.
Enterprise Manager will automatically configure the baseline against which SQL Response Time will be compared. However, in order for the SQL Response Time metric to be truly representative, the DBA must reconfigure the baseline at a time when the load on the database is typical.
To reconfigure the baseline, click on the link titled "Compared to Baseline" located next to the SQL Response Time value on the Database Home Page. The SQL statements used for tracking the SQL Response Time and baseline values are displayed. Click Reset Baseline. This clears the list of statements and the baseline values. Enterprise Manager will then automatically reconfigure the baseline within minutes.
If the database was lightly loaded at the time the baseline was taken, then the metric can indicate that the database is performing poorly under typical load when such is not the case. In this case, the DBA must reset the baseline. If the DBA has never manually reset the baseline, then the metric value will not be representative.
This metric shows statistics about the transactions processed by the coordinator process of each apply process. The Total Number of Transactions Received field shows the total number of transactions received by a coordinator process. The Number of Transactions Assigned field shows the total number of transactions assigned by a coordinator process to apply servers. The Total Number of Transactions Applied field shows the total number of transactions successfully applied by the apply process.
The values for an apply process are reset to zero if the apply process is restarted.
This metric shows statistics about the total number of transactions assigned by the coordinator process to apply servers since the apply process last started. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The TOTAL_ASSIGNED column in the following query shows this metric for an apply process:
SELECT APPLY_NAME, TOTAL_RECEIVED, TOTAL_ASSIGNED, TOTAL_APPLIED FROM V$STREAMS_APPLY_COORDINATOR;
User Action
When an apply process is enabled, monitor this metric to ensure that the apply process is assigning transactions to apply servers.
This metric shows statistics about the total number of transactions applied by the apply process since the apply process last started. For the target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The TOTAL_APPLIED column in the following query shows this metric for an apply process:
SELECT APPLY_NAME, TOTAL_RECEIVED, TOTAL_ASSIGNED, TOTAL_APPLIED FROM V$STREAMS_APPLY_COORDINATOR;
User Action
When an apply process is enabled, monitor this metric to ensure that the apply process is applying transactions.
This metric shows statistics about the total number of transactions received by the coordinator process since the apply process last started. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The TOTAL_RECEIVED column in the following query shows this metric for an apply process:
SELECT APPLY_NAME, TOTAL_RECEIVED, TOTAL_ASSIGNED, TOTAL_APPLIED FROM V$STREAMS_APPLY_COORDINATOR;
User Action
When an apply process is enabled, monitor this metric to ensure that the apply process is receiving transactions.
This metric shows the current total number of messages in a buffered queue to be dequeued by each apply process and the total number of messages to be dequeued by each apply process that have spilled from memory into the persistent queue table.
This metric shows information about the number of messages in a buffered queue to be dequeued by the apply process. This number includes both messages in memory and messages spilled from memory. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The NUM_MSGS column in the following query shows this metric for an apply process:
SELECT APPLY_NAME, S.NUM_MSGS NUM_MSGS, Q.SPILL_MSGS SPILL_MSGS FROM DBA_APPLY A, V$BUFFERED_QUEUES Q,V$BUFFERED_SUBSCRIBERS S WHERE A.QUEUE_NAME = S.QUEUE_NAME AND A.QUEUE_OWNER = S.QUEUE_SCHEMA AND A.QUEUE_NAME = Q.QUEUE_NAME AND A.QUEUE_OWNER = Q.QUEUE_SCHEMA AND S.SUBSCRIBER_ADDRESS IS NULL;
User Action
When an apply process is enabled, monitor this metric to ensure that the apply process is dequeuing messages.
This metric shows information about the number of messages to be dequeued by the apply process that have spilled from memory to the queue table. Messages in a buffered queue can spill from memory into the queue table if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The SPILL_MSGS column in the following query shows this metric for an apply process:
SELECT APPLY_NAME, S.NUM_MSGS NUM_MSGS, Q.SPILL_MSGS SPILL_MSGS FROM DBA_APPLY A, V$BUFFERED_QUEUES Q,V$BUFFERED_SUBSCRIBERS S WHERE A.QUEUE_NAME = S.QUEUE_NAME AND A.QUEUE_OWNER = S.QUEUE_SCHEMA AND A.QUEUE_NAME = Q.QUEUE_NAME AND A.QUEUE_OWNER = Q.QUEUE_SCHEMA AND S.SUBSCRIBER_ADDRESS IS NULL;
User Action
The number of spilled messages should be kept as low as possible for the best performance. A high number of spilled messages might result from the following causes:
There might be a problem with an apply process that applies messages captured by the capture process. When this happens, the number of messages can build in a queue because they are not being consumed. In this case, make sure the relevant apply processes are enabled, and correct any problems with these apply processes.
The Streams pool may be too small to hold the captured messages. In this case, increase the size of the Streams pool. If the database is Oracle Database 10g release 2 (10.2) or higher, then you can configure Automatic Shared Memory Management to manage the size of the Streams pool automatically. Set the SGA_TARGET initialization parameter to use Automatic Shared Memory Management.
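As an example (the values are placeholders and an spfile is assumed), on Oracle Database 10.2 or later the Streams pool can be managed automatically by setting SGA_TARGET, or it can be sized explicitly with STREAMS_POOL_SIZE:
ALTER SYSTEM SET sga_target = 800M SCOPE = BOTH;
ALTER SYSTEM SET streams_pool_size = 200M SCOPE = BOTH;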
This metric shows the number of messages in a persistent queue in READY state and WAITING state for each apply process.
This metric shows the number of messages in a persistent queue that are ready to be dequeued by the apply process. The apply process has not yet attempted to dequeue these messages.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 10 Minutes |
Data Source
The data source includes the following data dictionary views: DBA_QUEUES, DBA_APPLY, and AQ$queue_table_name.
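As an illustration, the message counts can be broken out by state from the AQ$queue_table_name view of the apply queue; AQ$STREAMS_QUEUE_TABLE below is only a placeholder for the actual queue table name:
SELECT msg_state, count(*) FROM AQ$STREAMS_QUEUE_TABLE GROUP BY msg_state;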
User Action
Monitor this metric to ensure that the apply process is dequeuing messages that are ready.
This metric shows the number of messages in a persistent queue that are waiting to be dequeued by the apply process. The apply process has attempted to dequeue these messages at least once, and the apply process failed. The apply process might attempt to dequeue a waiting message again.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 10 Minutes |
Data Source
The data source includes the following data dictionary views: DBA_QUEUES, DBA_APPLY, and AQ$queue_table_name.
User Action
The messages in WAITING might have been enqueued with a delay attribute set. In this case, after the specified delay period is finished, the messages will be ready to dequeue.
The reader server for an apply process dequeues messages from the queue. The reader server is a parallel execution server that computes dependencies between LCRs and assembles messages into transactions. The reader server then returns the assembled transactions to the coordinator, which assigns them to idle apply servers.
This metric shows the total number of messages dequeued by the reader server for the apply process since the last time the apply process was started.
The reader server for an apply process dequeues messages from the queue. The reader server is a parallel execution server that computes dependencies between LCRs and assembles messages into transactions. The reader server then returns the assembled transactions to the coordinator, which assigns them to idle apply servers.
This metric shows the total number of messages dequeued by the reader server for the apply process since the last time the apply process was started. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The TOTAL_MESSAGES_DEQUEUED column in the following query shows this metric for an apply process:
SELECT APPLY_NAME, TOTAL_MESSAGES_DEQUEUED FROM V$STREAMS_APPLY_READER;
User Action
When an apply process is enabled, monitor this metric to ensure that the apply process is dequeuing messages.
This metric shows the number of messages captured and the number of messages enqueued by each capture process since the capture process last started.
The Total Messages Captured field shows the total number of redo entries passed by LogMiner to the capture process for detailed rule evaluation. A capture process converts a redo entry into a message and performs detailed rule evaluation on the message when capture process prefiltering cannot discard the redo entry. After detailed rule evaluation, the message is enqueued if it satisfies the capture process rule sets, or the message is discarded if it does not satisfy the capture process rule sets. The Total Messages Enqueued field shows the total number of messages enqueued. The number of messages captured can be higher than the number of messages enqueued.
The total messages enqueued includes enqueued logical change records (LCRs) that encapsulate data manipulation language (DML) and data definition language (DDL) changes. The total messages enqueued also includes messages that contain transaction control statements. These messages contain directives such as COMMIT and ROLLBACK. Therefore, the total messages enqueued is higher than the number of row changes and DDL changes enqueued by a capture process.
This metric shows information about the number of redo entries passed by LogMiner to the capture process for detailed rule evaluation. A capture process converts a redo entry into a message and performs detailed rule evaluation on the message when capture process prefiltering cannot discard the change.
After detailed rule evaluation, the message is enqueued if it satisfies the capture process rule sets, or the message is discarded if it does not satisfy the capture process rule sets.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The TOTAL_MESSAGES_CAPTURED column in the following query shows this metric for a capture process:
SELECT CAPTURE_NAME, TOTAL_MESSAGES_CAPTURED, TOTAL_MESSAGES_ENQUEUED FROM V$STREAMS_CAPTURE;
User Action
When a capture process is enabled, monitor this metric to ensure that the capture process is scanning redo entries.
This metric shows information about the number of messages enqueued by a capture process. The number of messages enqueued includes logical change records (LCRs) that encapsulate data manipulation language (DML) and data definition language (DDL) changes. The number of messages enqueued also includes messages that contain transaction control statements. These messages contain directives such as COMMIT and ROLLBACK. Therefore, the number of messages enqueued is higher than the number of row changes and DDL changes enqueued by a capture process.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The TOTAL_MESSAGES_ENQUEUED column in the following query shows this metric for a capture process:
SELECT CAPTURE_NAME, TOTAL_MESSAGES_CAPTURED, TOTAL_MESSAGES_ENQUEUED FROM V$STREAMS_CAPTURE;
User Action
When a capture process is enabled, monitor this metric to ensure that the capture process is enqueuing messages. If you know that there were source database changes that should be captured by the capture process, and the capture process is not capturing these changes, then there might be a problem with the rules used by the capture process.
This metric shows the current total number of messages in a buffered queue that were enqueued by each capture process and the total number of messages enqueued by each capture process that have spilled from memory into the persistent queue table.
If queue publishers other than the capture process enqueue messages into a buffered queue, then the values shown can include messages from these other queue publishers.
This metric shows information about the number of messages enqueued by a capture process in a buffered queue. This number includes both messages in memory and messages spilled from memory.
If queue publishers other than the capture process enqueue messages into a buffered queue, then the values shown can include messages from these other queue publishers.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The NUM_MSGS column in the following query shows this metric for a capture process:
SELECT CAPTURE_NAME, P.NUM_MSGS NUM_MSGS, Q.SPILL_MSGS SPILL_MSGS FROM V$BUFFERED_PUBLISHERS P, V$BUFFERED_QUEUES Q, DBA_CAPTURE C WHERE C.QUEUE_NAME = P.QUEUE_NAME AND C.QUEUE_OWNER = P.QUEUE_SCHEMA AND C.QUEUE_NAME = Q.QUEUE_NAME AND C.QUEUE_OWNER = Q.QUEUE_SCHEMA AND C.CAPTURE_NAME = P.SENDER_NAME AND P.SENDER_ADDRESS IS NULL AND P.SENDER_PROTOCOL = 1;
User Action
When a capture process is enabled, monitor this metric to ensure that the capture process is enqueuing messages.
This metric shows information about the number of messages enqueued by a capture process that have spilled from memory to the queue table. Messages in a buffered queue can spill from memory into the queue table if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages.
If queue publishers other than the capture process enqueue messages into a buffered queue, then the values shown can include messages from these other queue publishers.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The SPILL_MSGS column in the following query shows this metric for a capture process:
SELECT CAPTURE_NAME, P.NUM_MSGS NUM_MSGS, Q.SPILL_MSGS SPILL_MSGS FROM V$BUFFERED_PUBLISHERS P, V$BUFFERED_QUEUES Q, DBA_CAPTURE C WHERE C.QUEUE_NAME = P.QUEUE_NAME AND C.QUEUE_OWNER = P.QUEUE_SCHEMA AND C.QUEUE_NAME = Q.QUEUE_NAME AND C.QUEUE_OWNER = Q.QUEUE_SCHEMA AND C.CAPTURE_NAME = P.SENDER_NAME AND P.SENDER_ADDRESS IS NULL AND P.SENDER_PROTOCOL = 1;
User Action
The number of spilled messages should be kept as low as possible for the best performance. A high number of spilled messages can result from the following causes:
There might be a problem with a propagation that propagates the messages captured by the capture process, or there might be a problem with an apply process that applies messages captured by the capture process. When this happens, the number of messages can build in a queue because they are not being consumed. In this case, make sure the relevant propagations and apply processes are enabled, and correct any problems with these propagations and apply processes.
The Streams pool might be too small to hold the captured messages. In this case, increase the size of the Streams pool. If the database is Oracle Database 10g release 2 (10.2) or higher, then you can configure Automatic Shared Memory Management to manage the size of the Streams pool automatically. Set the SGA_TARGET initialization parameter to use Automatic Shared Memory Management.
This metric shows the total number of Streams capture processes, propagations, and apply processes at the local database. This metric also shows the number of capture processes, propagations, and apply processes that have encountered errors.
This metric shows the number of apply processes that have encountered errors at the local database. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The information in this metric is in the DBA_APPLY data dictionary view.
User Action
If an apply process has encountered errors, then correct the conditions that caused the errors.
This metric shows the number of capture processes that have encountered errors at the local database. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The information in this metric is in the DBA_CAPTURE data dictionary view.
User Action
If a capture process has encountered errors, then correct the conditions that caused the errors.
This metric shows the number of apply processes at the local database. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The information in this metric is in the DBA_APPLY data dictionary view.
User Action
Use this metric to determine the total number of apply processes at the local database.
This metric shows the number of capture processes at the local database. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The information in this metric is in the DBA_CAPTURE data dictionary view.
User Action
Use this metric to determine the total number of capture processes at the local database.
This metric shows the number of propagations at the local database. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The information in this metric is in the DBA_PROPAGATION data dictionary view.
User Action
Use this metric to determine the total number of propagations at the local database.
This metric shows the number of propagations that have encountered errors at the local database. For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The information in this metric is in the DBA_PROPAGATION data dictionary view.
User Action
If a propagation has encountered errors, then correct the conditions that caused the errors.
This metric shows the total number of messages and kilobytes propagated by each propagation from a buffered queue at the local database.
This metric shows the total number of kilobytes propagated by the propagation from a buffered queue at the local database.
Data Source
The TOTAL_BYTES column in the following query shows this metric for a propagation:
SELECT PROPAGATION_NAME, TOTAL_NUMBER, TOTAL_BYTES/1024 KBYTES
  FROM DBA_PROPAGATION P, DBA_QUEUE_SCHEDULES Q
 WHERE P.SOURCE_QUEUE_NAME = Q.QNAME
   AND P.SOURCE_QUEUE_OWNER = Q.SCHEMA
   AND MESSAGE_DELIVERY_MODE = 'BUFFERED';
User Action
When a propagation is enabled, monitor this metric to ensure that the propagation is propagating messages.
This metric shows the total number of messages propagated by the propagation from a buffered queue at the local database.
Data Source
The TOTAL_NUMBER column in the following query shows this metric for a propagation:
SELECT PROPAGATION_NAME, TOTAL_NUMBER, TOTAL_BYTES/1024 KBYTES
  FROM DBA_PROPAGATION P, DBA_QUEUE_SCHEDULES Q
 WHERE P.SOURCE_QUEUE_NAME = Q.QNAME
   AND P.SOURCE_QUEUE_OWNER = Q.SCHEMA
   AND MESSAGE_DELIVERY_MODE = 'BUFFERED';
User Action
When a propagation is enabled, monitor this metric to ensure that the propagation is propagating messages.
This metric shows the number of messages in a buffered queue in READY state for each propagation.
This metric shows the number of messages in a buffered source queue that are ready to be propagated by the propagation to the destination queue. The propagation has not yet attempted to propagate these messages.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The NUM_MSGS column in the following query shows this metric for a propagation:
SELECT PROPAGATION_NAME, NUM_MSGS READY
  FROM V$BUFFERED_SUBSCRIBERS, DBA_PROPAGATION
 WHERE SUBSCRIBER_NAME IS NULL
   AND SUBSCRIBER_ADDRESS = DESTINATION_DBLINK
   AND QUEUE_SCHEMA = SOURCE_QUEUE_OWNER
   AND QUEUE_NAME = SOURCE_QUEUE_NAME;
User Action
Monitor this metric to ensure that the propagation is propagating messages that are ready.
This metric shows the total number of messages and kilobytes propagated by each propagation from a persistent queue at the local database.
This metric shows the total number of kilobytes propagated by the propagation from a persistent queue at the local database.
Data Source
The TOTAL_BYTES column in the following query shows this metric for a propagation:
SELECT PROPAGATION_NAME, TOTAL_NUMBER, TOTAL_BYTES/1024 KBYTES
  FROM DBA_PROPAGATION P, DBA_QUEUE_SCHEDULES Q
 WHERE P.SOURCE_QUEUE_NAME = Q.QNAME
   AND P.SOURCE_QUEUE_OWNER = Q.SCHEMA
   AND MESSAGE_DELIVERY_MODE = 'PERSISTENT';
User Action
When a propagation is enabled, monitor this metric to ensure that the propagation is propagating messages.
This metric shows the total number of messages propagated by the propagation from a persistent queue at the local database.
Data Source
The TOTAL_NUMBER column in the following query shows this metric for a propagation:
SELECT PROPAGATION_NAME, TOTAL_NUMBER, TOTAL_BYTES/1024 KBYTES
  FROM DBA_PROPAGATION P, DBA_QUEUE_SCHEDULES Q
 WHERE P.SOURCE_QUEUE_NAME = Q.QNAME
   AND P.SOURCE_QUEUE_OWNER = Q.SCHEMA
   AND MESSAGE_DELIVERY_MODE = 'PERSISTENT';
User Action
When a propagation is enabled, monitor this metric to ensure that the propagation is propagating messages.
This metric shows the number of messages in a persistent queue in READY state and WAITING state for each propagation.
This metric shows the number of messages in a persistent source queue that are ready to be propagated by the propagation to the destination queue. The propagation has not yet attempted to propagate these messages.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The data source includes the following data dictionary views: DBA_QUEUES, DBA_PROPAGATION, and AQ$queue_table_name.
User Action
Monitor this metric to ensure that the propagation is propagating messages that are ready.
This metric shows the number of messages in a persistent source queue that are waiting to be propagated by the propagation to the destination queue. The propagation has attempted to propagate these messages at least once, and the propagation failed. The propagation might attempt to propagate a waiting message again after a specified retry delay interval.
For target version 10.1.0.x, the collection frequency for this metric is every 10 minutes.
Data Source
The data source includes the following data dictionary views: DBA_QUEUES, DBA_PROPAGATION, and AQ$queue_table_name.
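For example, assuming the persistent source queue is stored in a queue table named STRMADMIN.STREAMS_QUEUE_TABLE (a hypothetical name), the messages can be broken down by queue and state with a query such as:

SELECT QUEUE, MSG_STATE, COUNT(*)
  FROM STRMADMIN.AQ$STREAMS_QUEUE_TABLE   -- hypothetical queue table name
 GROUP BY QUEUE, MSG_STATE;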
User Action
Common failures that prevent message propagation include the following:
Database link failed
Remote database is not available
Remote queue does not exist
Remote queue was not started
Security violation while trying to enqueue messages into remote queue
Determine the problem that is causing propagation to fail, and correct the problem.
This metric category contains the metrics that represent the number of resumable sessions that are suspended due to some correctable error.
This metric represents the number of resumable sessions currently suspended in the database.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-90 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
9.0.1.x; 9.2.0.x |
Every 5 Minutes |
Not Uploaded |
> |
0 |
Not Defined |
1 |
%value% session(s) are suspended. |
Data Source
SELECT count(*) FROM v$resumable WHERE status = 'SUSPENDED' and enabled = 'YES'
User Action
Query the v$resumable view to see what the correctable errors are that are causing the suspension. The way to correct each error depends on the nature of the error.
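For example, the suspended resumable sessions and the errors that suspended them can be listed with a query such as:

SELECT SID, NAME, ERROR_NUMBER, ERROR_MSG, SUSPEND_TIME
  FROM V$RESUMABLE
 WHERE STATUS = 'SUSPENDED'
   AND ENABLED = 'YES';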
This metric category contains the system response time metrics.
This metric represents the average time taken for each call (both user calls and recursive calls) within the database. A change in this value indicates that either the workload has changed or that the database's ability to process the workload has changed because of either resource constraints or contention.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-91 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x |
Every 15 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
1 |
Not Defined |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric category contains the metrics that represent the number of sessions waiting.
This metric represents the number of sessions waiting at the sample time.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-92 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every Minute |
After Every Sample |
> |
Not Defined |
Not Defined |
3 |
%value% sessions are waiting. |
Data Source
SELECT count(*) FROM v$session_wait WHERE wait_time = 0 and event not in IdleEvents
See the Idle Events section in this chapter.
User Action
When this count is high, the system is doing more waiting than anything else. Evaluate the various types of wait activity using the real-time and historical performance monitoring capabilities of Enterprise Manager.
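As a quick drill-down, the waiting sessions can be grouped by wait event. This is only a sketch; unlike the metric itself, it does not exclude the idle events:

SELECT EVENT, COUNT(*)
  FROM V$SESSION_WAIT
 WHERE WAIT_TIME = 0
 GROUP BY EVENT
 ORDER BY COUNT(*) DESC;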
The metrics in this metric category check the amount of space used and the amount of space allocated to each tablespace. The used space can then be compared to the allocated space to determine how much space is unused in the tablespace. This metric is not intended for alerts. Rather it is intended for reporting. Historical views of unused allocated free space can help DBAs to correctly size their tablespaces, eliminating wasted space.
The allocated space of a tablespace is the sum of the current size of its datafiles. A portion of this allocated space is used to store data while some may be free space. If segments are added to a tablespace, or if existing segments grow, they will use the allocated free space. The allocated free space is only available to segments within the tablespace. If, over time, the segments within a tablespace are not using this free space, then the allocated free space is going unused.
This metric calculates the space allocated for each tablespace. It is not intended to generate alerts. Rather it should be used in conjunction with the Allocated Space Used (MB) metric to produce an historical view of the amount of space being used and unused by each tablespace.
For all target versions, the collection frequency for this metric is every 7 hours.
Data Source
Tablespace Allocated Space (MB) is calculated by looping through the tablespace's data files and totaling the size of the data files.
The allocated space of a tablespace is the sum of the current size of its datafiles. Some of this allocated space is used to store data and some of it may be free space. If segments are added to a tablespace, or if existing segments grow, they will use the allocated free space. The allocated free space is only available to segments within the tablespace. If, over time, the segments within a tablespace are not using this free space, then the allocated free space is being wasted.
This metric calculates the space used for each tablespace. It is not intended to generate alerts. Rather it should be used in conjunction with the Tablespace Allocated Space (MB) metric to produce an historical view of the amount of space being used and unused by each tablespace.
For all target versions, the collection frequency for this metric is every 7 hours.
Data Source
Tablespace Used Space (MB) is Tablespace Allocated Space (MB) minus Tablespace Allocated Free Space (MB), where:
Tablespace Allocated Space (MB) is calculated by looping through the tablespace's data files and totaling the size of the data files.
Tablespace Allocated Free Space (MB) is calculated by looping through the tablespace's data files and totaling the size of the free space in each data file.
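The same figures can be approximated directly from the data dictionary. The following sketch, which ignores temporary tablespaces and any room left for datafiles to autoextend, totals the datafile sizes and the free space for each tablespace:

SELECT D.TABLESPACE_NAME,
       SUM(D.BYTES)/1024/1024 ALLOCATED_MB,
       SUM(D.BYTES)/1024/1024 - NVL(F.FREE_MB, 0) USED_MB
  FROM DBA_DATA_FILES D,
       (SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 FREE_MB
          FROM DBA_FREE_SPACE
         GROUP BY TABLESPACE_NAME) F
 WHERE D.TABLESPACE_NAME = F.TABLESPACE_NAME (+)
 GROUP BY D.TABLESPACE_NAME, F.FREE_MB;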
The metrics in this metric category check for the amount of space used by each tablespace. The used space is then compared to the available free space to determine tablespace fullness. The available free space takes into account the maximum data file size as well as available disk space. This means that a tablespace will not be flagged as full if datafiles can extend and there is enough disk space available for them to extend.
As segments within a tablespace grow, the available free space decreases. If there is no longer any available free space, meaning datafiles have hit their maximum size or there is no more disk space, then the creation of new segments or the extension of existing segments will fail.
This metric checks for the total available free space in each tablespace. This metric is intended for larger tablespaces, where the Available Space Used (%) metric is less meaningful. If the available free space falls below the size specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-93 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 30 Minutes |
After Every Sample |
less than or equal to |
Not Defined |
Not Defined |
1 |
Tablespace [%name%] has [%value% mbytes] free |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Tablespace Name" object.
If warning or critical threshold values are currently set for any "Tablespace Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Tablespace Name" object, use the Edit Thresholds page.
Data Source
MaximumSize - TotalUsedSpace where:
TotalUsedSpace: total used space in MB of the tablespace
MaximumSize: Maximum size (in MB) of the tablespace. The maximum size is determined by looping through the tablespace's data files, as well as additional free space on the disk that would be available for the tablespace should a data file autoextend.
User Action
Perform one of the following:
Increase the size of the tablespace by enabling automatic extension for one of its existing data files, manually resizing one of its existing data files, or adding a new data file (see the example statements after this list).
If the tablespace is suffering from tablespace free space fragmentation problems, consider reorganizing the entire tablespace.
Relocate segments to another tablespace, thus increasing the free space in this tablespace.
Run the Segment Advisor on the tablespace.
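The following statements sketch the first option; the tablespace name, file names, and sizes are hypothetical and must be adapted to the environment:

ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE 2G;  -- enable autoextension
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' RESIZE 500M;                        -- or resize an existing file
ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/orcl/users02.dbf' SIZE 100M;              -- or add a new data file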
As segments within a tablespace grow, the available free space decreases. If there is no longer any available free space, meaning datafiles have hit their maximum size or there is no more disk space, then the creation of new segments or the extension of existing segments will fail.
This metric checks the Available Space Used (%) for each tablespace. If the percentage of used space is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-94 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 30 Minutes |
After Every Sample |
>= |
85 |
97 |
1 |
Tablespace [%name%] is [%value% percent] full |
10.2.0.x |
Every 30 Minutes |
After Every Sample |
>= |
85 |
97 |
1 |
Not Defined |
Table 4-95 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every 10 Minutes |
Every 30 Minutes |
After Every Sample |
>= |
85 |
97 |
1 |
Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Tablespace Name" object.
If warning or critical threshold values are currently set for any "Tablespace Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Tablespace Name" object, use the Edit Thresholds page.
Data Source
(TotalUsedSpace / MaximumSize) * 100 where:
TotalUsedSpace: total used space in MB of tablespace
MaximumSize: Maximum size (in MB) of the tablespace. The maximum size is determined by looping through the tablespace's data files, as well as additional free space on the disk that would be available for the tablespace should a data file autoextend.
For additional information about the data source, refer to the fullTbsp.pl Perl script located in the sysman/admin/scripts directory.
User Action
Perform one of the following:
Increase the size of the tablespace by enabling automatic extension for one of its existing data files, manually resizing one of its existing data files, or adding a new data file.
If the tablespace is suffering from tablespace free space fragmentation problems, consider reorganizing the entire tablespace.
Relocate segments to another tablespace, thus increasing the free space in this tablespace.
Run the Segment Advisor on the tablespace.
The metrics in this metric category check for the amount of space used by each tablespace. The used space is then compared to the available free space to determine tablespace fullness. The available free space takes into account the maximum data file size as well as available disk space. This means that a tablespace will not be flagged as full if datafiles can extend and there is enough disk space available for them to extend.
As segments within a tablespace grow, the available free space decreases. If there is no longer any available free space, meaning datafiles have hit their maximum size or there is no more disk space, then the creation of new segments or the extension of existing segments will fail.
This metric checks for the total available free space in each tablespace. This metric is intended for larger tablespaces, where the Available Space Used (%) metric is less meaningful. If the available free space falls below the size specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-96 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x |
Every 30 Minutes |
After Every Sample |
less than or equal to |
Not Defined |
Not Defined |
1 |
Tablespace [%name%] has [%value% mbytes] free |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Tablespace Name" object.
If warning or critical threshold values are currently set for any "Tablespace Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Tablespace Name" object, use the Edit Thresholds page.
Data Source
MaximumSize - TotalUsedSpace where:
TotalUsedSpace: total used space in MB of the tablespace
MaximumSize: Maximum size (in MB) of the tablespace. The maximum size is determined by looping through the tablespace's data files, as well as additional free space on the disk that would be available for the tablespace should a data file autoextend.
User Action
Perform one of the following:
Increase the size of the tablespace by enabling automatic extension for one of its existing data files, manually resizing one of its existing data files, or adding a new data file.
If the tablespace is suffering from tablespace free space fragmentation problems, consider reorganizing the entire tablespace.
Relocate segments to another tablespace, thus increasing the free space in this tablespace.
Run the Segment Advisor on the tablespace.
As segments within a tablespace grow, the available free space decreases. If there is no longer any available free space, meaning datafiles have hit their maximum size or there is no more disk space, then the creation of new segments or the extension of existing segments will fail.
This metric checks the Available Space Used (%) for each tablespace. If the percentage of used space is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-97 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.1.0.x |
Every 30 Minutes |
After Every Sample |
>= |
85 |
97 |
1 |
Tablespace [%name%] is [%value% percent] full |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Tablespace Name" object.
If warning or critical threshold values are currently set for any "Tablespace Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Tablespace Name" object, use the Edit Thresholds page.
Data Source
(TotalUsedSpace / MaximumSize) * 100 where:
TotalUsedSpace: total used space in MB of tablespace
MaximumSize: Maximum size (in MB) of the tablespace. The maximum size is determined by looping through the tablespace's data files, as well as additional free space on the disk that would be available for the tablespace should a data file autoextend.
User Action
Perform one of the following:
Increase the size of the tablespace by enabling automatic extension for one of its existing data files, manually resizing one of its existing data files, or adding a new data file.
If the tablespace is suffering from tablespace free space fragmentation problems, consider reorganizing the entire tablespace.
Relocate segments to another tablespace, thus increasing the free space in this tablespace.
Run the Segment Advisor on the tablespace.
The metrics in this metric category check for the following:
The largest chunk of free space in the tablespace. If any table, index, cluster, or rollback segment within the tablespace cannot allocate one additional extent, then an alert is generated.
Whether any of the segments in the tablespace are approaching their maximum extents. If, for any segment, the maximum number of extents minus the number of existing extents is less than 2, then an alert is generated.
Only the tablespaces with problem segments are returned as results.
Segments which are nearing the upper limit of maximum extents. For all target versions, the collection frequency for this metric is every 24 hours.
Data Source
The first 10 segment names that are approaching their MAXEXTENTS limit in the tablespace.
User Action
If possible, increase the value of the segment's MAXEXTENTS storage parameter.
Otherwise, rebuild the segment with a larger extent size, ensuring the extents within a segment are the same size by specifying STORAGE parameters where NEXT=INITIAL and PCTINCREASE = 0.
For segments that are linearly scanned, choose an extent size that is a multiple of the number of blocks read during each multiblock read. This will ensure that the Oracle multiblock read capability is used efficiently.
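For a segment in a dictionary-managed tablespace, the limit can often be raised in place; the object name below is hypothetical:

ALTER TABLE scott.orders STORAGE (MAXEXTENTS UNLIMITED);   -- hypothetical table; MAXEXTENTS applies to dictionary-managed tablespaces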
This metric checks for segments that are nearing their maximum number of extents. If the number of such segments is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-98 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions |
Every 24 Hours |
After Every Sample |
> |
0 |
Not Defined |
1 |
%value% segments in %name% tablespace approaching max extents. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Tablespace Name" object.
If warning or critical threshold values are currently set for any "Tablespace Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Tablespace Name" object, use the Edit Thresholds page.
Data Source
Number of segments for which the maximum number of extents minus the number of existing extents is less than 2.
For additional information about the data source, refer to the problemTbsp.pl Perl script located in the sysman/admin/scripts directory.
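A comparable check can be run manually against the data dictionary; this is an illustrative approximation of what the script computes, not the script itself:

SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, EXTENTS, MAX_EXTENTS
  FROM DBA_SEGMENTS
 WHERE MAX_EXTENTS IS NOT NULL
   AND MAX_EXTENTS - EXTENTS < 2;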
User Action
If possible, increase the value of the segment's MAXEXTENTS storage parameter.
Otherwise, rebuild the segment with a larger extent size, ensuring the extents within a segment are the same size by using a locally managed tablespace. In the case of a dictionary-managed tablespace, specify STORAGE parameters where NEXT=INITIAL and PCTINCREASE = 0.
Segments which cannot allocate an additional extent.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 24 Hours |
Data Source
The first 10 segment names that cannot allocate an additional extent in the tablespace.
User Action
Perform one of the following:
Increase the size of the tablespace by enabling automatic extension for one of its existing data files, manually resizing one of its existing data files, or adding a new data file.
If the tablespace is suffering from tablespace free space fragmentation problems, consider reorganizing the entire tablespace.
This metric checks for segments which cannot allocate an additional extent. If the number of segments is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-99 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions |
Every 24 Hours |
After Every Sample |
> |
0 |
Not Defined |
1 |
%value% segments in %name% tablespace unable to extend. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Tablespace Name" object.
If warning or critical threshold values are currently set for any "Tablespace Name" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Tablespace Name" object, use the Edit Thresholds page.
Data Source
After checking for the largest chunk of free space in the tablespace, this is the number of segments that cannot allocate an additional extent.
For additional information about the data source, refer to the problemTbsp.pl Perl script located in the sysman/admin/scripts directory.
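A comparable manual check, a simplification because it ignores any room left for datafiles to autoextend, compares each segment's next extent with the largest free chunk in its tablespace:

SELECT S.OWNER, S.SEGMENT_NAME, S.SEGMENT_TYPE, S.TABLESPACE_NAME, S.NEXT_EXTENT
  FROM DBA_SEGMENTS S
 WHERE S.NEXT_EXTENT > (SELECT NVL(MAX(F.BYTES), 0)
                          FROM DBA_FREE_SPACE F
                         WHERE F.TABLESPACE_NAME = S.TABLESPACE_NAME);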
User Action
Perform one of the following:
Increase the size of the tablespace by enabling automatic extension for one of its existing data files, manually resizing one of its existing data files, or adding a new data file.
If the tablespace is suffering from tablespace free space fragmentation problems, consider reorganizing the entire tablespace.
Relocate segments to another tablespace, thus increasing the free space in this tablespace.
This metric category contains the metrics that represent rates of resource consumption, or throughput.
This metric represents the number of users logged on at the sampling time.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes |
Data Source
SELECT value FROM v$sysstat WHERE name = 'logons current';
User Action
No user action is necessary.
This metric represents the number of background checkpoints per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-100 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-101 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Number of times per second an index branch block was split because of the insertion of an additional value.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-102 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-103 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
branch node splits / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Number of times per transaction an index branch block was split because of the insertion of an additional value.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-104 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-105 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
branch node splits / transaction
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of current blocks per second cloned to create consistent read (CR) blocks.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-106 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-107 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
CR blocks created / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of current blocks per transaction cloned to create consistent read (CR) blocks.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-108 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-109 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
CR blocks created / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per second a user process has applied rollback entries to perform a consistent read on the block.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-110 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-111 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
consistent changes / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per transaction a user process has applied rollback entries to perform a consistent read on the block.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-112 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-113 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
consistent changes / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per second a consistent read was requested for a block.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-114 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-115 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
consistent gets / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per transaction a consistent read was requested for a block.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-116 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-117 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
consistent gets / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of undo records applied for consistent read per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-118 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-119 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
current blocks converted for CR / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of undo records applied for consistent read per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-120 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-121 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of logons per second during the sample period.
This test checks the number of logons that occurred per second during the sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-122 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
100 |
Not Defined |
2 |
Cumulative logon rate is %value%/sec. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
>= |
100 |
Not Defined |
2 |
Not Defined |
Table 4-123 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
>= |
100 |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaLogons / Seconds where:
DeltaLogons: difference in 'select value from v$sysstat where name='logons cumulative'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
A high logon rate may indicate that an application is inefficiently accessing the database. Database logons are a costly operation. If an application is performing a logon for every SQL access, that application will experience poor performance and will also affect the performance of other applications on the database. If there is a high logon rate, try to identify the application that is performing the logons to determine whether it could be redesigned so that session connections could be pooled, reused, or shared.
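To see where the connections are coming from, the current sessions can be grouped by program and machine. This is only a point-in-time sketch; repeated samples give a better picture of the logon rate by application:

SELECT USERNAME, PROGRAM, MACHINE, COUNT(*)
  FROM V$SESSION
 WHERE TYPE = 'USER'
 GROUP BY USERNAME, PROGRAM, MACHINE
 ORDER BY COUNT(*) DESC;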
This metric represents the number of logons per transaction during the sample period.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of logons that occurred per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-124 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Cumulative logon rate is %value%/transaction. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-125 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaLogons / Transactions where:
DeltaLogons: difference in 'select value from v$sysstat where name='logons cumulative'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
A high logon rate may indicate that an application is inefficiently accessing the database. Database logons are a costly operation. If an application is performing a logon for every SQL access, that application will experience poor performance and will also affect the performance of other applications on the database. If there is a high logon rate, try to identify the application that is performing the logons to determine whether it could be redesigned so that session connections could be pooled, reused, or shared.
This metric represents the total number of changes per second made to all blocks in the SGA as part of an update or delete operation.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-126 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-127 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
db block changes / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of changes per transaction made to all blocks in the SGA as part of an update or delete operation.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-128 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-129 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
db block changes / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per second a current block was requested.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-130 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-131 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
db block gets / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per transaction a current block was requested.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-132 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-133 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
db block gets / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per second during the sample period that DBWn was asked to scan the cache and write all blocks marked for a checkpoint.
The database writer process (DBWn) writes the contents of buffers to datafiles. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk.
When a buffer in the database buffer cache is modified, it is marked dirty. The primary job of the DBWn process is to keep the buffer cache clean by writing dirty buffers to disk. As user processes dirty buffers, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers.
When the Oracle Server process cannot find a clean reusable buffer after scanning a threshold of buffers, it signals DBWn to write. When this request to make free buffers is received, DBWn writes the least recently used (LRU) buffers to disk. By writing the least recently used dirty buffers to disk, DBWn improves the performance of finding free buffers while keeping recently used buffers resident in memory. For example, blocks that are part of frequently accessed small tables or indexes are kept in the cache so that they do not need to be read in again from disk. The LRU algorithm keeps more frequently accessed blocks in the buffer cache so that when a buffer is written to disk, it is unlikely to contain data that may be useful soon.
Additionally, DBWn periodically writes buffers to advance the checkpoint that is the position in the redo log from which crash or instance recovery would need to begin.
This test checks the number of times DBWR was asked to advance the checkpoint. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-134 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | DBWR checkpoint rate is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-135 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaCheckpoints / Seconds where:
DeltaCheckpoints: difference in 'select value from v$sysstat where name='DBWR checkpoints'' between sample end and start
Seconds: number of seconds in sample period
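As an illustrative sketch only (not the agent's actual collection code), the DeltaCheckpoints numerator comes from the cumulative 'DBWR checkpoints' statistic; two samples and the elapsed seconds between them yield the rate.
-- Cumulative count of checkpoint requests to DBWR; the metric is the
-- difference between two samples divided by the seconds between them.
SELECT value
  FROM v$sysstat
 WHERE name = 'DBWR checkpoints';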
User Action
A checkpoint tells the DBWR to write out modified buffers to disk. This write operation is different from the make free request in that the modified buffers are not marked as free by the DBWR process. Dirty buffers may also be written to disk at this time and freed.
The write size is dictated by the _db_block_checkpoint_batch parameter. If writing, and subsequently waiting for checkpoints to complete, is a problem, the 'checkpoint completed' wait event appears on the Top Waits page sorted by Time Waited or on the Sessions Waiting for this Event page.
If the database is often waiting for checkpoints to complete, you may want to increase the time between checkpoints. Check the init.ora parameter db_block_checkpoint_batch: select name, value, isdefault from v$parameter where name = 'db_block_checkpoint_batch'. The value should be large enough to take advantage of parallel writes. The DBWR uses a write batch that is calculated as follows: (db_files * db_file_simultaneous_writes) / 2. The write batch is also limited by two other factors:
A port-specific limit on the number of I/Os (a compile-time constant).
One quarter of the number of buffers in the SGA.
The db_block_checkpoint_batch value is always smaller than or equal to _db_block_write_batch. You can also consider enabling the checkpoint process.
This metric represents the number of times per second that a process detected a potential deadlock when exchanging two buffers and raised an internal, restartable error.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-136 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-137 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue deadlocks / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per transaction that a process detected a potential deadlock when exchanging two buffers and raised an internal, restartable error.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-138 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-139 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue deadlocks / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of table or row locks acquired per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-140 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-141 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue requests / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of table or row locks acquired per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-142 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-143 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue requests / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of table and row locks (acquired and converted) per second that time out before they could complete.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-144 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-145 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue timeouts / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of table and row locks (acquired and converted) per transaction that timed out before they could complete.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-146 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-147 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue timeouts / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of waits per second that occurred during an enqueue convert or get because the enqueue get was deferred.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-148 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-149 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue waits / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of waits per transaction that occurred during an enqueue convert or get because the enqueue get was deferred.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-150 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-151 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
enqueue waits / transaction
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the rate of SQL command executions over the sampling interval.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-152 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Data Source
DeltaExecutions / Seconds where:
DeltaExecutions: difference in 'select value from v$sysstat where name='execute count'' between end and start of sample period.
Seconds: number of seconds in sample period
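A minimal illustrative query for the DeltaExecutions counter described above, assuming SELECT access to V$SYSSTAT; the agent derives the rate from two such samples.
-- Cumulative 'execute count'; DeltaExecutions is the difference between
-- two samples of this value.
SELECT value
  FROM v$sysstat
 WHERE name = 'execute count';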
User Action
No user action is necessary.
This metric represents the percentage of statement executions that do not require a corresponding parse. A perfect system would parse all statements once and then execute the parsed statement over and over without reparsing. This ratio provides an indication as to how often the application is parsing statements as compared to their overall execution rate. A higher number is better.
This test checks the percentage of executes that do not require parses. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-153 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Only %value%%% of executes are performed without parses. |
10.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Not Defined |
Table 4-154 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
((DeltaExecuteCount - (DeltaParseCountTotal)) / DeltaExecuteCount) * 100 where:
DeltaParseCountTotal: difference in 'select value from v$sysstat where name='parse count (total)'' between sample end and start
DeltaExecuteCount: difference in 'select value from v$sysstat where name='execute count'' between sample end and start
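The same formula can be approximated interactively from the cumulative counters since instance startup. This is an illustrative sketch only; the metric itself is computed from the deltas over the sample period.
-- Execute-to-parse ratio since instance startup, from cumulative counters.
SELECT ((e.value - p.value) / e.value) * 100 AS execute_to_parse_pct
  FROM (SELECT value FROM v$sysstat WHERE name = 'execute count') e,
       (SELECT value FROM v$sysstat WHERE name = 'parse count (total)') p;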
User Action
An execute to parse ratio of less than 70% indicates that the application may be parsing statements more often than it should. Reparsing the statement, even if it is a soft parse, requires a network round trip from the application to the database, as well as requiring the processing time to locate the previously compiled statement in the cache. Reducing network round trips and unnecessary processing improves application performance.
Use the Top Sessions page sorted by Parses to identify the sessions responsible for the bulk of the parse activity within the database. Start with these sessions to determine whether the application could be modified to make more efficient use of its cursors.
This metric represents the number of fast full index scans per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-155 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-156 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
index fast full scans (full) / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of fast full index scans per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-157 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-158 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
index fast full scans (full) / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of hard parses per second during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement.
Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations that will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps, which the optimizer will evaluate when generating an execution plan for the cursor.
This test checks the number of parses of statements that were not already in the cache. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-159 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Hard parse rate is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-160 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaParses / Seconds where:
DeltaParses: difference in 'select value from v$sysstat where name='parse count (hard)'' between end and start of sample period
Seconds: number of seconds in sample period
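For reference, the cumulative counter used for DeltaParses can be inspected directly; this is an illustrative query only, not the agent's collection code.
-- Cumulative hard parse count; the metric is the delta between two samples
-- divided by the elapsed seconds.
SELECT value
  FROM v$sysstat
 WHERE name = 'parse count (hard)';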
User Action
If there appears to be excessive time spent parsing, evaluate SQL statements to determine those that can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The Top Sessions page sorted by Hard Parses will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.
By showing you which sessions are incurring the most hard parses, this page may lead you to the application or programs that are the best candidates for SQL rewrites.
Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.
To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
This metric represents the number of hard parses per transaction during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement.
Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor. The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of hard parses per transaction during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-161 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Hard parse rate is %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-162 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaParses / Transactions where:
DeltaParses: difference in 'select value from v$sysstat where name='parse count (hard)'' between end and start of sample period
Transactions: number of transactions in sample period
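An illustrative sketch of the two inputs, read from the cumulative counters; the agent uses the deltas of these values over the sample period, not the totals since startup.
-- Numerator and denominator of the per-transaction rate, since startup.
SELECT h.value AS hard_parses,
       (c.value + r.value) AS transactions
  FROM (SELECT value FROM v$sysstat WHERE name = 'parse count (hard)') h,
       (SELECT value FROM v$sysstat WHERE name = 'user commits') c,
       (SELECT value FROM v$sysstat WHERE name = 'user rollbacks') r;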
User Action
If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The Top Sessions page sorted by Hard Parses will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.
By showing you which sessions are incurring the most hard parses, this page may lead you to the application or programs that are the best candidates for SQL rewrites.
Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.
To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
Number of times per second an index leaf node was split because of the insertion of an additional value.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-163 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-164 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
leaf node splits / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
Number of times per transaction an index leaf node was split because of the insertion of an additional value.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-165 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-166 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
leaf node splits / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of bytes sent and received through the SQL*Net layer to and from the database.
This test checks the network read/write per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the Number of Occurrences parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-167 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Bytes transmitted via SQL*Net is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-168 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(DeltaBytesFromClient+DeltaBytesFromDblink+DeltaBytesToClient+DeltaBytesToDblink) / Seconds where:
DeltaBytesFromClient: difference in 'select s.value from v$sysstat s, v$statname n where n.name='bytes received via SQL*Net from client' and n.statistic#=s.statistic#' between end and start of sample period
DeltaBytesFromDblink: difference in 'select s.value from v$sysstat s, v$statname n where n.name='bytes received via SQL*Net from dblink' and n.statistic#=s.statistic#' between end and start of sample period
DeltaBytesToClient: difference in 'select s.value from v$sysstat s, v$statname n where n.name='bytes sent via SQL*Net to client' and n.statistic#=s.statistic#' between end and start of sample period
DeltaBytesToDblink: difference in 'select s.value from v$sysstat s, v$statname n where n.name='bytes sent via SQL*Net to dblink' and n.statistic#=s.statistic#' between end and start of sample period
Seconds: number of seconds in sample period
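The four cumulative counters can be summed in a single illustrative query; the metric itself uses the deltas of these values over the sample period.
-- Total SQL*Net bytes since instance startup, across all four counters.
SELECT SUM(s.value) AS total_sqlnet_bytes
  FROM v$sysstat s, v$statname n
 WHERE n.statistic# = s.statistic#
   AND n.name IN ('bytes received via SQL*Net from client',
                  'bytes received via SQL*Net from dblink',
                  'bytes sent via SQL*Net to client',
                  'bytes sent via SQL*Net to dblink');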
User Action
This metric represents the amount of network traffic in and out of the database. This number may only be useful when compared to historical levels to understand network traffic usage related to a specific database.
This metric represents the total number of commits and rollbacks performed during this sample period.
This test checks the number of commits and rollbacks performed during sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-169 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Transaction rate is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | >= | Not Defined | Not Defined | 2 | Not Defined |
Table 4-170 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | >= | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaCommits + DeltaRollbacks where:
DeltaCommits: difference of 'select value from v$sysstat where name='user commits'' between sample end and start
DeltaRollbacks: difference of 'select value from v$sysstat where name='user rollbacks'' between sample end and start
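An illustrative query for the two cumulative counters behind DeltaCommits and DeltaRollbacks.
-- Cumulative commits and rollbacks; the transaction rate is the sum of
-- their deltas over the sample period divided by the elapsed seconds.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'user rollbacks');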
User Action
This statistic is an indication of how much work is being accomplished within the database. A spike in the transaction rate may not necessarily be bad. If response times stay close to normal, it means your system can handle the added load. Actually, a drop in transaction rates and an increase in response time may be indicators of problems. Depending upon the application, transaction loads may vary widely across different times of the day.
This metric represents the total number of cursors opened per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-171 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-172 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
opened cursors cumulative / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of cursors opened per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-173 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-174 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
opened cursors cumulative / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of parse failures per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-175 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-176 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
parse count (failures) / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of parse failures per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-177 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-178 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
parse count (failures) / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of data blocks read from disk per second during this sample period. When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, then searches the disk if it is not already in memory. Reading data blocks from disk is much more expensive than reading them from memory. The goal with Oracle should always be to maximize memory utilization.
This test checks the data blocks read from disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-179 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Physical reads are %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-180 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaPhysicalReads / Seconds where:
DeltaPhysicalReads: difference in 'select s.value from v$sysstat s, v$statname n where n.name='physical reads' and n.statistic#=s.statistic#' between sample end and start
Seconds: number of seconds in sample period
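The DeltaPhysicalReads counter can be examined with the same join the data source describes; this is for illustration only.
-- Cumulative physical reads; the metric is the delta between two samples
-- divided by the elapsed seconds.
SELECT s.value
  FROM v$sysstat s, v$statname n
 WHERE n.name = 'physical reads'
   AND n.statistic# = s.statistic#;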
User Action
Block reads are inevitable so the aim should be to minimize unnecessary IO. This is best achieved by good application design and efficient execution plans. Changes to execution plans can yield profound changes in performance. Tweaking at system level usually only achieves percentage gains.
To view I/O on a per session basis to determine which sessions are responsible for your physical reads, you should visit the Top Sessions page sorted by Physical Reads. This approach allows you to identify problematic sessions and then drill down to their current SQL statement and perform tuning from there.
To identify the SQL that is responsible for the largest portion of physical reads, visit the Top SQL page sorted by Physical Reads. This page allows you to quickly determine which SQL statements are causing your I/O activity. From this display you can view the full text of the SQL statement.
The difference between the two methods for identifying problematic SQL is that the Top Sessions view displays sessions that are performing the most physical reads at the moment. The Top SQL view displays the SQL statements that are still in the SQL cache that have performed the most I/O over their lifetime. A SQL statement could show up in the Top SQL view that is not currently being executed.
If the SQL statements are properly tuned and optimized, consider the following suggestions. A larger buffer cache may help - test this by actually increasing DB_BLOCK_BUFFERS. Do not use DB_BLOCK_LRU_EXTENDED_STATISTICS, as this may introduce other performance issues. Never increase the SGA size if it may induce additional paging or swapping on the system.
A less obvious issue that can affect the I/O rates is how well the data is clustered physically. For example, assume that you frequently fetch rows from a table where a column is between two values via an index scan. If there are 100 rows in each index block, then the two extremes are: 1. Each of the table rows is in a different physical block (100 blocks need to be read for each index block). 2. The table rows are all located in a few adjacent blocks (a handful of blocks need to be read for each index block).
Pre-sorting or reorganizing data can improve this situation in severe cases.
This metric represents the number of disk reads per transaction during the sample period. When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, then goes to disk if it is not in memory already. Reading data blocks from disk is much more expensive than reading the data blocks from memory. The goal with Oracle should always be to maximize memory utilization.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the data blocks read from disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-181 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Physical reads are %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-182 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaReads / Transactions where:
DeltaReads: difference in 'select value from v$sysstat where name='physical reads'' between end and start of sample period
Transactions: number of transactions in sample period
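As a rough interactive approximation of this ratio from the cumulative counters since startup (illustrative only; the metric uses deltas over the sample period):
-- Physical reads per transaction since instance startup.
SELECT p.value / NULLIF(c.value + r.value, 0) AS reads_per_transaction
  FROM (SELECT value FROM v$sysstat WHERE name = 'physical reads') p,
       (SELECT value FROM v$sysstat WHERE name = 'user commits') c,
       (SELECT value FROM v$sysstat WHERE name = 'user rollbacks') r;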
User Action
Block reads are inevitable so the aim should be to minimize unnecessary IO. This is best achieved by good application design and efficient execution plans. Changes to execution plans can yield orders of magnitude changes in performance. Tweaking at system level usually only achieves percentage gains.
To identify the SQL that is responsible for the largest portion of physical reads, visit the Top SQL page sorted by Physical Reads. This view will allow you to quickly determine which SQL statements are causing the I/O activity. From this display you can view the full text of the SQL statement.
To view I/O on a per session basis to determine which sessions are responsible for your physical reads, you can visit the Top Sessions page sorted by Physical Reads. This approach allows you to identify problematic sessions and then drill down to their current SQL statement to perform tuning.
If the SQL statements are properly tuned and optimized the following suggestions may help. A larger buffer cache may help - test this by actually increasing DB_BLOCK_BUFFERS and not by using DB_BLOCK_LRU_EXTENDED_STATISTICS. Never increase the SGA size if it will induce additional paging or swapping on the system.
A less obvious issue that can affect the I/O rates is how well the data is clustered physically. For example, assume that you frequently fetch rows from a table where a column is between two values via an index scan. If there are 100 rows in each index block, then the two extremes are: 1. Each of the table rows is in a different physical block (100 blocks need to be read for each index block). 2. The table rows are all located in a few adjacent blocks (a handful of blocks need to be read for each index block).
Pre-sorting or reorganizing data can help to address this in severe cases as well.
This metric represents the number of direct physical reads per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-183 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-184 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical reads direct / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of direct physical reads per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-185 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-186 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical reads direct / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of direct large object (LOB) physical reads per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-187 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-188 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical reads direct (lob) / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of direct large object (LOB) physical reads per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-189 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-190 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical reads direct (lob) / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of disk writes per second during the sample period. This statistic represents the rate of database blocks written from the SGA buffer cache to disk by the DBWR background process, and from the PGA by processes performing direct writes.
This test checks the data blocks written to disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-191 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Physical writes are %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-192 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaWrites / Seconds where:
DeltaWrites: difference in 'select value from v$sysstat where name='physical writes'' between end and start of sample period
Seconds: number of seconds in sample period
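The same calculation can be approximated by sampling V$SYSSTAT twice and dividing the difference by the elapsed seconds. The following is a minimal SQL*Plus sketch, assuming EXECUTE privilege on DBMS_LOCK; the 60-second interval is an arbitrary example value.

```sql
-- Sample 'physical writes' twice and compute an approximate per-second rate.
VARIABLE start_writes NUMBER

BEGIN
  SELECT value INTO :start_writes
  FROM   v$sysstat
  WHERE  name = 'physical writes';
END;
/

EXECUTE DBMS_LOCK.SLEEP(60)   -- wait one sample interval (example: 60 seconds)

SELECT (value - :start_writes) / 60 AS physical_writes_per_sec
FROM   v$sysstat
WHERE  name = 'physical writes';
```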
User Action
Because this statistic shows both DBWR writes and direct writes by sessions, you should also check the physical writes direct statistic to determine where the write activity is actually occurring. If the physical writes direct value comprises a large portion of the writes, then there are probably many sorts or writes to temporary tablespaces occurring.
If the majority of the writes are not direct, they are being performed by the DBWR process. This is only a problem if log writer or redo waits are showing up in the Sessions Waiting for this Event page or the Top Waits page sorted by Time Waited.
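A quick way to see what fraction of the write activity is direct is to compare the two statistics side by side. This is a sketch only, and it uses the cumulative values since instance startup; a sampled delta, as the metric itself uses, reflects current load more accurately.

```sql
-- Compare cumulative direct writes to total physical writes since startup.
SELECT MAX(DECODE(name, 'physical writes', value))        AS total_writes,
       MAX(DECODE(name, 'physical writes direct', value)) AS direct_writes,
       ROUND(100 * MAX(DECODE(name, 'physical writes direct', value)) /
             NULLIF(MAX(DECODE(name, 'physical writes', value)), 0), 2) AS pct_direct
FROM   v$sysstat
WHERE  name IN ('physical writes', 'physical writes direct');
```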
This metric represents the number of disk writes per transaction during the sample period.
The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name is a better indicator of current performance.
This test checks the data blocks written to disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-193 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Physical writes are %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-194 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaWrites / Transactions where:
DeltaWrites: difference in 'select value from v$sysstat where name='physical writes'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
Because this statistic shows both DBWR writes and direct writes by sessions, you should also check the physical writes direct statistic to determine where the write activity is really occurring. If the physical writes direct value comprises a large portion of the writes, then there are likely many sorts or writes to temporary tablespaces occurring.
If the majority of the writes are not direct, they are being performed by the DBWR process. This will typically only be a problem if log writer or redo waits are showing up in the Sessions Waiting for this Event page or the Top Waits page sorted by Time Waited.
This metric represents the number of direct physical writes per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-195 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-196 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical writes direct / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of direct physical writes per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-197 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-198 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical writes direct / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of direct large object (LOB) physical writes per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-199 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-200 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical writes direct (lob) / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of direct large object (LOB) physical writes per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-201 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-202 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
physical writes direct (lob) / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of recursive calls per second during the sample period.
Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:
When data dictionary information is not available in the data dictionary cache and must be retrieved from disk
In the firing of database triggers
In the execution of DDL statements
In the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks
In the enforcement of referential integrity constraints
This test checks the number of recursive SQL calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-203 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Recursive call rate is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-204 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaRecursiveCalls / Seconds where:
DeltaRecursiveCalls: difference in 'select value from v$sysstat where name='recursive calls'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
If the Oracle Server appears to be making excessive recursive calls while your application is running, determine what activity is causing these recursive calls. If you determine that the recursive calls are caused by dynamic extension, either reduce the frequency of extension by allocating larger extents or, if you are using Oracle8i, consider taking advantage of locally managed tablespaces.
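To check which tablespaces are still dictionary managed, and are therefore candidates for the dynamic-extension recursive calls described above, a query such as the following can be used. This is a sketch against the standard DBA_TABLESPACES view and assumes access to the DBA views.

```sql
-- List tablespaces whose extents are still dictionary managed.
SELECT tablespace_name, extent_management, allocation_type
FROM   dba_tablespaces
WHERE  extent_management = 'DICTIONARY';
```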
This metric represents the number of recursive calls per transaction during the sample period.
Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:
When data dictionary information is not available in the data dictionary cache and must be retrieved from disk
In the firing of database triggers
In the execution of DDL statements
In the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks
In the enforcement of referential integrity constraints
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of calls that result in changes to internal tables. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-205 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Recursive call rate is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-206 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaRecursiveCalls / Transactions where:
DeltaRecursiveCalls: difference in 'select value from v$sysstat where name='recursive calls'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
If the Oracle Server appears to be making excessive recursive calls while your application is running, determine what activity is causing these recursive calls. If you determine that the recursive calls are caused by dynamic extension, either reduce the frequency of extension by allocating larger extents or, if you are using Oracle8i, consider taking advantage of locally managed tablespaces.
This metric represents the amount of redo, in bytes, generated per second during this sample period.
The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.
This test checks the amount of redo in bytes generated per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-207 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Redo generated is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-208 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaRedoSize / Seconds where:
DeltaRedoSize: difference in 'select value from v$sysstat where name='redo size'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
The LOG_BUFFER initialization parameter determines the amount of memory that is used when redo entries are buffered to the redo log file.
Consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer should waiting be a problem. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
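The current redo log buffer size can be checked, and a larger value staged for the next restart, along the following lines. This is a sketch only: LOG_BUFFER is a static parameter, so it can only be changed in the server parameter file and takes effect after an instance restart, and the 10 MB figure shown is purely illustrative.

```sql
-- Check the current redo log buffer size (in bytes).
SELECT name, value FROM v$parameter WHERE name = 'log_buffer';

-- Stage a larger value (example size); assumes the instance uses an spfile,
-- and the change takes effect only after the instance is restarted.
ALTER SYSTEM SET log_buffer = 10485760 SCOPE = SPFILE;
```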
This metric represents the amount of redo, in bytes, generated per transaction during this sample period.
The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary.
The value of this statistic is zero if there have been no write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the amount of redo in bytes generated per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-209 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Redo generated is %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-210 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaRedoSize / DeltaTransactions where:
DeltaRedoSize: difference in 'select value from v$sysstat where name='redo size'' between end and start of sample period
DeltaTransactions: difference in 'select value from v$sysstat where name = 'user commits'' between end and start of sample period
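Expressed as a single statement, the calculation divides the change in 'redo size' by the change in 'user commits' over the same interval. The sketch below shows the cumulative ratio since instance startup, which only approximates the sampled metric.

```sql
-- Approximate redo generated per transaction from cumulative statistics.
-- The metric itself uses deltas over a 5-minute sample, which is more precise.
SELECT ROUND(MAX(DECODE(name, 'redo size', value)) /
             NULLIF(MAX(DECODE(name, 'user commits', value)), 0)) AS redo_bytes_per_txn
FROM   v$sysstat
WHERE  name IN ('redo size', 'user commits');
```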
User Action
The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.
Consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer should waiting be a problem. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
This metric represents the number of redo write operations per second during this sample period.
The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.
The log writer process (LGWR) is responsible for redo log buffer management; that is, writing the redo log buffer to a redo log file on disk.
This test checks the number of writes by LGWR to the redo log files per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-211 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Redo write rate is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-212 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaRedoWrites / Seconds where:
DeltaRedoWrites: difference in 'select value from v$sysstat where name='redo writes'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
The LOG_BUFFER initialization parameter determines the amount of memory that is used when redo entries are buffered to the redo log file.
Should waiting be a problem, consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
This metric represents the number of redo write operations per transaction during this sample period.
The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary.
The log writer process (LGWR) is responsible for redo log buffer management; that is, writing the redo log buffer to a redo log file on disk.
This test checks the number of writes by LGWR to the redo log files per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-213 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Redo write rate is %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-214 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaRedoWrites /(DeltaCommits+DeltaRollbacks) where:
DeltaRedoWrites: difference in 'select s.value from v$sysstat s, v$statname n where n.name='redo writes' and n.statistic#=s.statistic#' between sample end and start
DeltaCommits: difference in 'select s.value from v$sysstat s, v$statname n where n.name='user commits' and n.statistic#=s.statistic#' between sample end and sample start
DeltaRollbacks: difference in 'select s.value from v$sysstat s, v$statname n where n.name='user rollbacks' and n.statistic#=s.statistic#' between sample end and sample start
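The three deltas above all come from the same join of V$SYSSTAT to V$STATNAME, so a single query can return all three counters at once. A minimal sketch follows; sample it at the start and end of the interval and subtract to obtain the deltas.

```sql
-- Fetch the three counters used by this metric in one pass.
SELECT n.name, s.value
FROM   v$sysstat s, v$statname n
WHERE  n.statistic# = s.statistic#
AND    n.name IN ('redo writes', 'user commits', 'user rollbacks');
```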
User Action
The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.
Consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer should waiting be a problem. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
This metric represents the average number of rows per sort during this sample period.
This test checks the average number of rows per sort during sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-215 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Average sort size is %value% rows. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-216 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
(DeltaSortRows / (DeltaDiskSorts + DeltaMemorySorts)) * 100 where:
DeltaSortRows: difference in 'select value from v$sysstat where name='sorts (rows)'' between sample end and start
DeltaMemorySorts: difference in 'select value from v$sysstat where name='sorts (memory)'' between sample end and start
DeltaDiskSorts: difference in 'select value from v$sysstat where name='sorts (disk)'' between sample end and start
User Action
This statistic displays the average number of rows that are being processed per sort. The size provides information about the sort size of the database. This can help you to determine the SORT_AREA_SIZE appropriately. If the rows per sort are high, you should investigate the sessions and SQL performing the most sorts to see if those SQL statements can be tuned to reduce the size of the sort sample set.
The sessions that are performing the most sorts should be identified, such that the SQL they are executing can be further identified. The sort area sizes for the database may be sized correctly and the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions page sorted by Disk Sorts.
Further drilldown into the session performing the most disk sorts with the Current SQL page displays the SQL statement responsible for the disk sorts.
The Top SQL page sorted by Sorts provides a mechanism to quickly display the SQL statements in the cache presented in sorted order by their number of sort operations. This is an alternative to viewing the sort activity of current sessions. It allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.
If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to keep sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.
This metric represents the number of long table scans per second during sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.
This test checks the long table scans per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-217 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Rate of scans on long tables is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-218 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaScans / Seconds where:
DeltaScans: difference in 'select value from v$sysstat where name='table scans (long tables)'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
A table scan means that the entire table is being scanned record by record in order to satisfy the query. For small tables that can easily be read into and kept in the buffer cache this may be advantageous. But for larger tables this will force a lot of physical reads and potentially push other needed buffers out of the cache. SQL statements with large physical read and logical read counts are candidates for table scans. They can be identified either through the Top SQL page sorted by Physical Reads, or through the Top Sessions page sorted by Physical Reads, with a drilldown to the current SQL for a session.
This metric represents the number of long table scans per transaction during sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of long table scans per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-219 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Rate of scans on long tables is %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-220 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaScans / Transactions where:
DeltaScans: difference in 'select value from v$sysstat where name='table scans (long tables)'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
A table scan means that the entire table is being scanned record by record in order to satisfy the query. For small tables that can easily be read into and kept in the buffer cache this may be advantageous. But for larger tables this will force a lot of physical reads and potentially push other needed buffers out of the cache. SQL statements with large physical read and logical read counts are candidates for table scans. They can be identified either through the Top SQL page sorted by Physical Reads, or through the Top Sessions page sorted by Physical Reads, with a drilldown to the current SQL for a session.
This metric represents the number of logical reads per second during the sample period. A logical read is a read request for a data block from the SGA. Logical reads may result in a physical read if the requested block does not reside in the buffer cache.
This test checks the logical (db block gets + consistent gets) reads per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-221 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Session logical reads are %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-222 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
LogicalReads / Seconds where:
LogicalReads: difference in 'select value from v$sysstat where name='session logical reads'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
Excessive logical reads, even if they do not result in physical reads, can still represent an area that should be considered for performance tuning. Typically large values for this statistic indicate that full table scans are being performed. To identify the SQL that is performing the most logical reads (buffer gets), use the Top SQL page sorted by Buffer Gets. This quickly identifies the SQL responsible for the bulk of the logical reads. You can further investigate these SQL statements via drilldowns. Tuning these SQL statements will reduce your buffer cache access.
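Outside of the Top SQL page, a similar picture can be obtained directly from the shared SQL area. The following is a sketch only; the row limit of 10 is an arbitrary example, and ordering by DISK_READS instead gives the physical-read view used elsewhere in this section.

```sql
-- Statements currently in the shared SQL area with the most logical reads.
SELECT sql_text, buffer_gets, disk_reads, executions
FROM  (SELECT sql_text, buffer_gets, disk_reads, executions
       FROM   v$sql
       ORDER BY buffer_gets DESC)
WHERE ROWNUM <= 10;
```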
This metric represents the number of logical reads per transaction during the sample period.
The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding per second metric of the same name will be a better indicator of current performance.
This test checks the logical (db block gets + consistent gets) reads per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-223 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Session logical reads are %value%/transaction. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-224 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaReads / Transactions where:
DeltaReads: difference in 'select value from v$sysstat where name='session logical reads'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
Excessive logical reads, even if they do not result in physical reads, can still represent an area that should be considered for performance tuning. Typically large values for this statistic indicate that full table scans are being performed. To identify the SQL that is performing the most logical reads (buffer gets) use the Top SQL page sorted by Buffer Gets. This quickly identifies the SQL responsible for the bulk of the logical reads.
A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.
This metric represents the percentage of parse requests where the cursor was already in the cursor cache compared to the number of total parses. This ratio provides an indication as to how often the application is parsing statements that already reside in the cache as compared to hard parses of statements that are not in the cache.
This test checks the percentage of soft parse requests to total parse requests. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-225 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Only %value%%% of parses are soft parses. |
10.2.0.x | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Not Defined |
Table 4-226 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | < | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
((DeltaParseCountTotal - DeltaParseCountHard) / DeltaParseCountTotal) * 100 where:
DeltaParseCountTotal: difference in 'select value from v$sysstat where name='parse count (total)'' between sample end and start
DeltaParseCountHard: difference in 'select value from v$sysstat where name='parse count (hard)'' between sample end and start
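The same percentage can be computed directly from the parse counters. The sketch below uses the cumulative values since instance startup; the metric itself works on the deltas over the sample period.

```sql
-- Soft parse percentage from cumulative counters since instance startup.
SELECT ROUND(100 * (MAX(DECODE(name, 'parse count (total)', value)) -
                    MAX(DECODE(name, 'parse count (hard)',  value))) /
             NULLIF(MAX(DECODE(name, 'parse count (total)', value)), 0), 2)
       AS soft_parse_pct
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');
```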
User Action
Soft parses consume less resources than hard parses, so the larger the value for this item, the better. But many soft parses indicate the application is using SQL inefficiently. Reparsing the statement, even if it is a soft parse, requires a network round trip from the application to the database, as well as requiring the processing time to locate the previously compiled statement in the cache. Reducing network round trips and unnecessary processing will improve application performance.
If this metric value is below 80% you should look at the Top Sessions page sorted by Hard Parses. This page lists the sessions that are currently performing the most hard parses. Starting with these sessions and the SQL statements they are executing will indicate which applications and corresponding SQL statements are being used inefficiently.
If the metric is currently showing a high value, the expensive hard parses are not occurring but the application can still be tuned by reducing the amount of soft parses. Visit the Top SQL page sorted by Parses to identify the SQL statements that have been most parsed. This will allow you to quickly identify SQL that is being re-parsed unnecessarily. You should investigate these statements first for possible application logic changes such that cursors are opened once, and executed or fetched from many times.
This metric represents the number of sorts going to disk per second for this sample period. For best performance, most sorts should occur in memory, because sorts to disk are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.
This test checks the number of sorts performed to disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-227 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | The rate of sorts to disk is %value%/sec. |
10.2.0.x | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Not Defined |
Table 4-228 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 5 Minutes | After Every Sample | > | Not Defined | Not Defined | 2 | Generated By Database Server |
Data Source
DeltaDiskSorts / Seconds where:
DeltaDiskSorts: difference in 'select value from v$sysstat where name='sorts (disk)'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
The sessions that are performing the most sorts should be identified, such that the SQL they are executing can be further identified. The sort area sizes for the database may be sized correctly, but the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions page sorted by Disk Sorts.
Further drilldown into the session performing the most disk sorts with the Current SQL page will show you the SQL statement responsible for the disk sorts.
The Top SQL page sorted by Sorts provides a mechanism to quickly display the SQL statements in the cache, presented in sorted order by their number of sort operations. This is an alternative to viewing the sort activity of current sessions; it allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.
If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area will allow the Oracle Server to keep sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.
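Where a larger sort area is appropriate, it can first be raised for an individual session before any instance-wide change. The sketch below uses an illustrative 10 MB value; note that on systems using automatic PGA memory management, sort memory is governed by PGA_AGGREGATE_TARGET rather than SORT_AREA_SIZE.

```sql
-- Raise the sort area for this session only; the size shown is an example value.
ALTER SESSION SET sort_area_size = 10485760;

-- Check the disk and memory sort counters to verify the effect over time.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');
```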
This metric represents the number of sorts going to disk per transaction for this sample period. For best performance, most sorts should occur in memory, because sorts to disk are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of sorts performed to disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-229 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
The rate of sorts to disk is %value%/transaction. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-230 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaDiskSorts / Transactions where:
DeltaDiskSorts: difference in 'select value from v$sysstat where name='sorts (disk)'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
Identify the sessions performing the most sorts so that the SQL they are executing can be examined. Even if the sort areas for the database are sized correctly, the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions page sorted by Disk Sorts.
Further drilldown into the session performing the most disk sorts with the Current SQL page will show you the SQL statement responsible for the disk sorts.
The Top SQL page sorted by Sorts provides a mechanism to quickly display the SQL statements in the cache, presented in sorted order by their number of sort operations. This is an alternative to viewing the sort activity of current sessions; it allows you to view sort activity by SQL statement and includes cumulative statistics for all executions of each statement.
If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to keep sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work with the current sort area size.
This metric represents the total number of index scans per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-231 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-232 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
index scans kdiixs1 / time
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the total number of index scans per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-233 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-234 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
index scans kdiixs1 / transactions
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This number reflects the total number of parses per second, both hard and soft. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.
Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.
This test checks the number of parse calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-235 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Total parse rate is %value%/sec. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-236 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaParses / Seconds where:
DeltaParses: difference in 'select value from v$sysstat where name='parse count (total)'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The Top Sessions page sorted by Hard Parses will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.
By showing you which sessions are incurring the most hard parses, this page may lead you to the application or programs that are the best candidates for SQL rewrites.
Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing SHARED_POOL_SIZE to decrease the frequency with which SQL requests are flushed from the shared pool to make room for new requests.
To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
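As a quick system-wide check (an illustration only, using cumulative values since instance startup rather than the sample-period deltas this metric uses), the proportion of hard parses to total parses can be read from V$SYSSTAT:
-- Hard parses versus total parses since instance startup
SELECT SUM(DECODE(name, 'parse count (hard)', value, 0)) AS hard_parses,
       SUM(DECODE(name, 'parse count (total)', value, 0)) AS total_parses
  FROM v$sysstat
 WHERE name IN ('parse count (hard)', 'parse count (total)');
A persistently high ratio of hard parses to total parses suggests that literals rather than bind variables are preventing cursor sharing.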
This number reflects the total number of parses per transaction, both hard and soft. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.
Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.
This test checks the number of parse calls per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-237 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Total parse rate is %value%/transaction. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-238 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaParses / Transactions where:
DeltaParses: difference in 'select value from v$sysstat where name='parse count (total)'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The Top Sessions page sorted by Hard Parses will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.
By showing you which sessions are incurring the most hard parses, this page may lead you to the application or programs that are the best candidates for SQL rewrites.
Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.
The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing SHARED_POOL_SIZE to decrease the frequency with which SQL requests are flushed from the shared pool to make room for new requests.
To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
This metric represents user calls as a percentage of all calls, that is, user calls relative to the sum of user calls and recursive calls.
Occasionally, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:
When data dictionary information is not available in the data dictionary cache and must be retrieved from disk.
In the firing of database triggers
In the execution of DDL statements
In the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks
In the enforcement of referential integrity constraints
This test checks the percentage of user calls to recursive calls. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-239 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
< |
Not Defined |
Not Defined |
2 |
%value%%% of calls are user calls. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
< |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-240 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
< |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
(DeltaUserCalls/(DeltaRecursiveCalls + DeltaUserCalls)) * 100 where:
DeltaRecursiveCalls: difference in 'select value from v$sysstat where name='recursive calls'' between sample end and start
DeltaUserCalls: difference in 'select value from v$sysstat where name='user calls'' between sample end and start
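The same ratio can be checked manually (an illustration only; this uses cumulative values since instance startup rather than the sample-period deltas):
-- Percentage of all calls that are user calls
SELECT ROUND(100 * uc.value / (uc.value + rc.value), 2) AS pct_user_calls
  FROM v$sysstat uc, v$sysstat rc
 WHERE uc.name = 'user calls'
   AND rc.name = 'recursive calls';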
User Action
A low value for this metric means that the Oracle Server is making a large number of recursive calls. If the Oracle Server appears to be making excessive recursive calls while your application is running, determine what activity is causing these recursive calls. If you determine that the recursive calls are caused by dynamic extension, either reduce the frequency of extension by allocating larger extents or, if you are using Oracle8i, consider taking advantage of locally managed tablespaces.
This metric represents the number of logins, parses, or execute calls per second during the sample period.
This test checks the number of logins, parses, or execute calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-241 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
User call rate is %value%/sec. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-242 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaUserCalls / Seconds where:
DeltaUserCalls: difference in 'select value from v$sysstat where name='user calls'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
This statistic is a reflection of how much activity is going on within the database. Spikes in the total user call rate should be investigated to determine which of the underlying calls is actually increasing. Parse, execute and logon calls each signify different types of user or application actions and should be addressed individually. User Calls is an overall activity level monitor.
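To see which of the underlying call types is rising, the individual counters can be sampled directly (an illustration only); compare two snapshots taken a known interval apart:
-- Cumulative counters behind the user call rate
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user calls', 'parse count (total)', 'execute count', 'logons cumulative');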
This metric represents the number of logins, parses, or execute calls per transaction during the sample period.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of logins, parses, or execute calls per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-243 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
User call rate is %value%/transaction. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-244 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaUserCalls / Transactions where:
DeltaUserCalls: difference in 'select value from v$sysstat where name='user calls'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
This statistic is a reflection of how much activity is going on within the database. Spikes in the total user call rate should be investigated to determine which of the underlying calls is actually increasing. Parse, execute and logon calls each signify different types of user or application actions and should be addressed individually. User Calls is an overall activity level monitor.
This metric represents the number of user commits performed per second during the sample period. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.
This test checks the number of user commits per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-245 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
User commit rate is %value%/sec. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-246 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaCommits / Seconds where:
DeltaCommits: difference in 'select value from v$sysstat where name='user commits'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
This statistic is an indication of how much work is being accomplished within the database. A spike in the transaction rate may not necessarily be bad. If response times stay close to normal, it means your system can handle the added load. Actually, a drop in transaction rates and an increase in response time may be indicators of problems. Depending upon the application, transaction loads may vary widely across different times of the day.
This metric represents the number of user commits performed per transaction during the sample period. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of user commits per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-247 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
User commit rate is %value%/transaction. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-248 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaCommits / Transactions where:
DeltaCommits: difference in 'select value from v$sysstat where name='user commits'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
This statistic is an indication of how much work is being accomplished within the database. A spike in the transaction rate may not necessarily be bad. If response times stay close to normal, it means your system can handle the added load. Actually, a drop in transaction rates and an increase in response time may be indicators of problems. Depending upon the application, transaction loads may vary widely across different times of the day.
This metric represents the number of undo records applied to user-requested rollback changes per second.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-249 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-250 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
'rollback changes - undo records applied' / time (the numerator is the single V$SYSSTAT statistic of that name, not a subtraction)
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of undo records applied to user-requested rollback changes per transaction.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-251 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-252 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
'rollback changes - undo records applied' / transactions (the numerator is the single V$SYSSTAT statistic of that name, not a subtraction)
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the number of times per second during the sample period that users manually issued the ROLLBACK statement or an error occurred during their transactions.
This test checks the number of rollbacks per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-253 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
User rollback rate is %value%/sec. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-254 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaRollbacks / Seconds where:
DeltaRollbacks: difference in 'select value from v$sysstat where name='user rollbacks'' between end and start of sample period
Seconds: number of seconds in sample period
User Action
This value shows how often users are issuing the ROLLBACK statement or encountering errors in their transactions. Further investigation should be made to determine if the rollbacks are part of some faulty application logic or due to errors occurring through database access.
This metric represents the number of times per transaction during the sample period that users manually issued the ROLLBACK statement or an error occurred during their transactions.
The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" metric of the same name will be a better indicator of current performance.
This test checks the number of rollbacks per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-255 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
User rollback rate is %value%/transaction. |
10.2.0.x |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Not Defined |
Table 4-256 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 5 Minutes |
After Every Sample |
> |
Not Defined |
Not Defined |
2 |
Generated By Database Server |
Data Source
DeltaRollbacks / Transactions where:
DeltaRollbacks: difference in 'select value from v$sysstat where name='user rollbacks'' between end and start of sample period
Transactions: number of transactions in sample period
User Action
This value shows how often users are issuing the ROLLBACK statement or encountering errors in their transactions. Further investigation should be made to determine if the rollbacks are part of some faulty application logic or due to errors occurring through database access.
This metric category contains the metrics used to represent logons to the database by audited users (such as SYS).
This metric monitors specified database user connections. For example, an alert is displayed when a particular database user connection, specified by the User name filter argument, has been detected.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-257 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions |
Every 15 Minutes |
Not Uploaded |
= |
SYS |
Not Defined |
1 |
User %value% logged on from %machine%. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Username_Machine" object.
If warning or critical threshold values are currently set for any "Username_Machine" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Username_Machine" object, use the Edit Thresholds page.
Data Source
For each metric index:
SELECT username
User Action
User actions may vary depending on the user connection that is detected.
This metric represents the host machine from which the audited user's logon originated.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Data Source
For each metric index:
SELECT machine
User Action
Review the access to the database from this client machine.
This metric represents the number of logons the audited user has from a given machine.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Data Source
For each metric index:
SELECT count(username)
User Action
No user action is necessary.
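The exact collection query is internal to the Management Agent, but the kind of information these metrics report can be approximated with a query such as the following (an assumed illustration against V$SESSION, not the agent's actual data source; the username filter is an example):
-- Current sessions for an audited user, grouped by originating machine
SELECT username, machine, COUNT(*) AS logons
  FROM v$session
 WHERE username = 'SYS'
 GROUP BY username, machine;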
This metric category contains the metrics that tell to what extent, and how consistently, a given session is blocking multiple other sessions.
This metric signifies that a database user is blocking at least one other user from performing an action, such as updating a table. An alert is generated if the number of consecutive blocking occurrences reaches the specified value.
Note: The catblock.sql script needs to be run on the managed database prior to using the User Blocks test. This script creates some additional tables, views, and public synonyms that are required by the User Blocks test.
Note: Unlike most metrics, which accept thresholds as real numbers, this metric can only accept an integer as a threshold.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-258 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every 5 Minutes |
Not Uploaded |
> |
0 |
Not Defined |
3 |
Session %sid% blocking %value% other sessions. |
Table 4-259 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x |
Every Minute |
Every 15 Minutes |
After Every Sample |
> |
0 |
Not Defined |
15 |
Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Blocking Session ID" object.
If warning or critical threshold values are currently set for any "Blocking Session ID" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Blocking Session ID" object, use the Edit Thresholds page.
Data Source
SELECT SUM(num_blocked) FROM (SELECT id1, id2, MAX(DECODE(block, 1, sid, 0)) blocking_sid, SUM(DECODE(request, 0, 0, 1)) num_blocked FROM v$lock WHERE block = 1 OR request>0 GROUP BY id1, id2) GROUP BY blocking_sid
User Action
Either have the user who is blocking other users roll back the transaction, or wait until the blocking transaction has been committed.
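To see which sessions are blocked and which session is blocking them, a lock-pairing query such as the following can be used (an illustration only):
-- Pair each waiter with the session holding the blocking lock
SELECT blocker.sid AS blocking_sid, waiter.sid AS waiting_sid,
       blocker.id1, blocker.id2
  FROM v$lock blocker, v$lock waiter
 WHERE blocker.block = 1
   AND waiter.request > 0
   AND blocker.id1 = waiter.id1
   AND blocker.id2 = waiter.id2;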
The UDM metric allows you to execute your own SQL statements. The data returned by these SQL statements can be compared against thresholds and generate severity alerts similar to alerts in predefined metrics.
This metric category contains the metrics that approximate the percentage of time spent waiting by user sessions. This approximation takes system-wide totals and discounts the effects of sessions belonging to background processes.
This metric represents the active sessions using CPU.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Minutes |
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute |
This metric represents the active sessions waiting for I/O.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Minutes |
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute |
This metric represents all the waits that are neither idle nor user I/O.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Minutes |
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute |
This metric represents the average instance CPU as a percentage.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Minutes |
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute |
This wait happens when a session wants to access a database block in the buffer cache but it cannot because the buffer is busy. Another session is modifying the block and the contents of the block are in flux during the modification. To guarantee that the reader has a coherent image of the block with either all of the changes or none of the changes, the session modifying the block marks the block header with a flag letting other users know a change is taking place and to wait until the complete change is applied.
The two main cases where this wait can occur are:
Another session is reading the block into the buffer
Another session holds the buffer in an incompatible mode to our request
While the block is being changed, the block is marked as unreadable by others. The changes that are being made should last under a few hundredths of a second. A disk read should be under 20 milliseconds and a block modification should be under one millisecond. Therefore it will take a lot of buffer busy waits to cause a problem.
However, in a problem situation, there is usually a hot block, such as the first block on the free list of a table with high concurrent inserts. All users insert into that block at the same time until it fills up, and then they start inserting into the next free block on the list, and so on.
Another example of a problem is of multiple users running full table scans on the same large table at the same time. One user will actually read the block physically off disk, and the other users will wait on Buffer Busy Wait for the physical I/O to complete.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-260 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every Minute |
After Every Sample |
> |
20 |
Not Defined |
3 |
%value%%% of service time is spent waiting on the 'buffer busy waits' event. |
Data Source
(DeltaBufferBusyWaitsTime/DeltaServiceTime)*100 where:
DeltaBufferBusyWaitsTime: difference of 'sum of time waited for sessions of foreground processes on the 'buffer busy waits' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Look at v$waitstat (or the buffer busy drill down page) and determine the block type with the highest waits.
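For example, the block classes with the most waits can be listed directly from V$WAITSTAT (an illustration only):
-- Buffer busy waits broken down by block class
SELECT class, count, time
  FROM v$waitstat
 ORDER BY time DESC;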
Block Type and Action:
Undo Header - Use Automatic Undo Management (AUM) or add more rollback segments
Undo Block - Use AUM (or increase rollback segment sizes)
Data Block - First determine if it is an I/O problem. The Buffer Busy Waits drill-down page should provide this information. Otherwise, sample from v$session_wait
SELECT p3, count(*) FROM v$session_wait WHERE event='buffer busy waits' GROUP BY p3;
If p3 is less than 200, then it is an I/O problem. Either improve I/O performance or change the application. Applications running concurrent batch jobs that do full table scans on the same large tables run into this problem.
Free List - Use ASSM (or freelist groups)
This metric represents the time spent using CPU during the interval, measured in hundredths of a second.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute |
Data Source
The difference of the sum of 'CPU used when call started' for sessions of foreground processes between sample end and start.
User Action
No user action is necessary.
This is the same type of event as "db file sequential read", except that Oracle will read multiple data blocks. Multi-block reads are typically used on full table scans. The name "scattered read" refers to the fact that multiple blocks are read into database block buffers that are 'scattered' throughout memory.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-261 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every Minute |
After Every Sample |
> |
50 |
Not Defined |
5 |
%value%%% of service time is spent waiting on the 'db file scattered read' event. |
Data Source
(DeltaDbFileScatteredReadTime/DeltaServiceTime)*100 where:
DeltaDbFileScatteredReadTime: difference of 'sum of time waited for sessions of foreground processes on the 'db file scattered read' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
If the TIME spent waiting for multiblock reads is significant, then it is helpful to determine against which segments Oracle is performing the reads. The files where the reads are occurring can be found by looking at the V$FILESTAT view where BLKS_READ / READS > 1. (A ratio greater than one indicates there are some multiblock reads occurring).
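For example, the following query (an illustration only, using the PHYRDS and PHYBLKRD columns of V$FILESTAT) lists the files on which multiblock reads are occurring:
-- Files whose average read size exceeds one block
SELECT df.name, fs.phyrds, fs.phyblkrd,
       ROUND(fs.phyblkrd / fs.phyrds, 2) AS blocks_per_read
  FROM v$filestat fs, v$datafile df
 WHERE fs.file# = df.file#
   AND fs.phyrds > 0
   AND fs.phyblkrd > fs.phyrds
 ORDER BY blocks_per_read DESC;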
It is also useful to see which sessions are performing scans and trace them to see if the scans are expected. This statement can be used to see which sessions may be worth tracing:
SELECT sid, total_waits, time_waited FROM v$session_event WHERE event='db file scattered read' and total_waits>0 ORDER BY 3,2 ;
You can also look at:
Statements with high DISK_READS in the V$SQL view
Sessions with high 'table scan blocks gotten' values in the V$SESSTAT view
This event shows a wait for a foreground process while doing a sequential read from the database. The I/O is generally issued as a single I/O request to the OS; the wait blocks until the I/O request completes.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-262 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every Minute |
After Every Sample |
> |
50 |
Not Defined |
5 |
%value%%% of service time is spent waiting on the 'db file sequential read' event. |
Data Source
(DeltaDbFileSequentialReadTime/DeltaServiceTime)*100 where:
DeltaDbFileSequentialReadTime: difference of 'sum of time waited for sessions of foreground processes on the 'db file sequential read' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Because I/O is a normal activity, take notice of unnecessary or slow I/O activity. If the TIME spent waiting for I/Os is significant, then it can be determined for which segments Oracle has to go to disk. See the "Tablespace I/O" and "File I/O" sections of the ESTAT or STATSPACK reports to get information on which tablespaces and files are servicing the most I/O requests, and to get an indication of the speed of the I/O subsystem.
If the TIME spent waiting for reads is significant, then determine against which segments Oracle is performing the reads. The files where the reads are occurring can be found by looking at the V$FILESTAT view.
Also, see which sessions are performing reads and trace them to see if the I/Os are expected. You can use this statement to see which sessions are worth tracing:
SELECT sid, total_waits, time_waited FROM v$session_event WHERE event='db file sequential read' and total_waits>0 ORDER BY 3,2 ;
You can also look at:
Statements with high DISK_READS in the V$SQL view
Sessions with high "physical reads" in the V$SESSTAT view
This event is used to wait for the writing of the file headers.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-263 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every Minute |
After Every Sample |
> |
50 |
Not Defined |
3 |
%value%%% of service time is spent waiting on the 'db file single write' event. |
Data Source
(DeltaDbFileSingleWriteTime/DeltaServiceTime)*100 where:
DeltaDbFileSingleWriteTime: difference of 'sum of time waited for sessions of foreground processes on the 'db file single write' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
No user action is necessary.
The session is waiting for a direct read to complete. A direct read is a physical I/O from a data file that bypasses the buffer cache and reads the data block directly into process-private memory.
If asynchronous I/O is supported (and in use), then Oracle can submit I/O requests and continue processing. Oracle can then pick up the results of the I/O request later and wait on "direct path read" until the required I/O completes.
If asynchronous I/O is not being used, then the I/O requests block until completed but these do not show as waits at the time the I/O is issued. The session returns later to pick up the completed I/O data but can then show a wait on "direct path read" even though this wait will return immediately.
Hence this wait event is very misleading because:
The total number of waits does not reflect the number of I/O requests
The total time spent in "direct path read" does not always reflect the true wait time.
This style of read request is typically used for:
Sort I/O (when a sort does not fit in memory)
Parallel Query slaves
Read ahead (where a process may issue an I/O request for a block it expects to need in the near future)
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-264 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x |
Every Minute |
After Every Sample |
> |
50 |
Not Defined |
3 |
%value%%% of service time is spent waiting on the 'direct path read' event. |
Data Source
(DeltaDirectPathReadTime/DeltaServiceTime)*100 where:
DeltaDirectPathReadTime: difference of 'sum of time waited for sessions of foreground processes on the 'direct path read' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
In DSS type systems, or during heavy batch periods, waits on "direct path read" are normal. However, if the waits are significant on an OLTP style system, there may be a problem.
You can:
Examine the V$SESSION_EVENT view to identify sessions with high numbers of waits
Examine the V$SESSTAT view to identify sessions with high "physical reads direct" (statistic only present in newer Oracle releases)
Examine the V$FILESTAT view to see where the I/O is occurring
Examine the V$SQLAREA view for statements with SORTS and high DISK_READS (which may or may not be due to direct reads)
Determine whether the file indicates a temporary tablespace; if it does, check for unexpected disk sort operations.
Ensure that the DISK_ASYNCH_IO parameter is set to TRUE (a quick check of the relevant parameters is shown after this list). This is unlikely to reduce wait times from the wait event timings but may reduce session elapsed times (as synchronous direct I/O is not accounted for in wait event timings).
Ensure the OS asynchronous I/O is configured correctly.
Check for I/O heavy sessions and SQL and see if the amount of I/O can be reduced.
Ensure no disks are I/O bound.
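As the quick check referred to above (an illustration only), the asynchronous I/O settings can be read from V$PARAMETER:
-- Current asynchronous I/O related settings
SELECT name, value
  FROM v$parameter
 WHERE name IN ('disk_asynch_io', 'filesystemio_options');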
The session is waiting for a direct read of a large object (lob) to complete. A direct read is a physical I/O from a data file that bypasses the buffer cache and reads the data block directly into process-private memory.
If asynchronous I/O is supported (and in use), then Oracle can submit I/O requests and continue processing. Oracle can then pick up the results of the I/O request later and wait on "direct path read" until the required I/O completes.
If asynchronous I/O is not being used, then the I/O requests block until completed but these do not show as waits at the time the I/O is issued. The session returns later to pick up the completed I/O data but can then show a wait on "direct path read" even though this wait will return immediately.
Hence this wait event is very misleading because:
The total number of waits does not reflect the number of I/O requests
The total time spent in "direct path read" does not always reflect the true wait time.
This style of read request is typically used for:
Sort I/O (when a sort does not fit in memory)
Parallel Query slaves
Read ahead (where a process may issue an I/O request for a block it expects to need in the near future)
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-265 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 50 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'direct path read (lob)' event. |
Data Source
(DeltaDirectPathReadLobTime/DeltaServiceTime)*100 where:
DeltaDirectPathReadLobTime: difference of 'sum of time waited for sessions of foreground processes on the 'direct path read (lob)' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
For noncached LOB segments, it is helpful to place the data files where the LOB segments reside on a buffered disk, for example, on a file system disk. This placement allows the direct reads to benefit from a cache outside Oracle for data read operations.
Session is waiting for a direct write to complete.
Direct path writes allow a session to queue an I/O write request and continue processing while the OS handles the I/O. If the session needs to know if an outstanding write is complete, then it waits for this wait event. This can happen because the session is either out of free slots and needs an empty buffer (it waits on the oldest I/O) or it needs to ensure all writes are flushed.
If asynchronous I/O is not being used, then the I/O write request blocks until it is completed but this does not show as a wait at the time the I/O is issued. The session returns later to pick up the completed I/O data but can then show a wait on "direct path write" even though this wait will return immediately.
Hence this wait event is misleading because:
The total number of waits does not reflect the number of I/O requests
The total time spent in "direct path write" does not always reflect the true wait time.
This style of write request is typically used for:
Sort I/O (when a sort does not fit in memory)
Parallel DML operations that create and populate objects
Direct load operations, for example, Create Table as Select (CTAS)
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-266 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 50 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'direct path write' event. |
Data Source
(DeltaDirectPathWriteTime/DeltaServiceTime)*100 where:
DeltaDirectPathWriteTime: difference of 'sum of time waited for sessions of foreground processes on the 'direct path write' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
It is unusual to see lots of waits on "direct path write" except for specific jobs. If the figure is a large proportion of the overall wait time it is best to identify where the writes are coming from.
You can:
Examine the V$SESSION_EVENT view to identify sessions with high numbers of waits.
Examine the V$SESSTAT view to identify sessions with high "physical writes direct" (statistic only present in newer Oracle releases); a sample query follows this list.
Examine the V$FILESTAT view to see where the I/O is occurring.
If the file indicates a temporary tablespace, check for unexpected disk sort operations.
Ensure the DISK_ASYNCH_IO parameter is set to TRUE. This is unlikely to reduce wait times from the wait event timings but may reduce sessions' elapsed times because synchronous direct I/O is not accounted for in wait event timings.
Ensure the OS asynchronous I/O is configured correctly.
Ensure no disks are I/O bound.
For parallel DML, check the I/O distribution across disks and make sure that the I/O subsystem is adequately sized for the degree of parallelism.
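As an illustration, the following sketch joins V$SESSTAT to V$STATNAME to identify the sessions doing the most direct writes (it assumes a release in which the "physical writes direct" statistic is collected):

SELECT s.sid, s.value physical_writes_direct
  FROM v$sesstat s, v$statname n
 WHERE n.name = 'physical writes direct'
   AND s.statistic# = n.statistic#
   AND s.value > 0
 ORDER BY s.value DESC;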
Direct path write to a large object (LOB). The session is waiting on the operating system to complete the write operation.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-267 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 50 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'direct path write (lob)' event. |
Data Source
(DeltaDirectPathWriteLobTime/DeltaServiceTime)*100 where:
DeltaDirectPathWriteLobTime: difference of 'sum of time waited for sessions of foreground processes on the 'direct path write (lob)' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
It is unusual to see lots of waits on "direct path write (lob)" except for specific jobs. If the figure is a large proportion of the overall wait time it is best to identify where the writes are coming from.
You can:
Examine the V$SESSION_EVENT view to identify sessions with high numbers of waits.
Examine the V$SESSTAT view to identify sessions with high "physical writes direct" (statistic only present in newer Oracle releases).
Examine the V$FILESTAT view to see where the I/O is occurring.
If the file indicates a temporary tablespace, check for unexpected disk sort operations.
Ensure the DISK_ASYNCH_IO parameter is set to TRUE. This is unlikely to reduce wait times from the wait event timings but may reduce sessions' elapsed times because synchronous direct I/O is not accounted for in wait event timings.
Ensure the OS asynchronous I/O is configured correctly.
Ensure no disks are I/O bound.
For parallel DML, check the I/O distribution across disks and make sure that the I/O subsystem is adequately sized for the degree of parallelism.
Enqueues are local locks that serialize access to various resources. This wait event indicates a wait for a lock that is held by another session (or sessions) in an incompatible mode to the requested mode.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-268 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue' event. |
Data Source
(DeltaEnqueueTime/DeltaServiceTime)*100 where:
DeltaEnqueueTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue' event, or any other 'enqueue:' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
The action to take depends on the lock type which is causing the most problems. The most common lock waits are generally for:
TX: Transaction Lock -- Generally due to application or table setup issues, for example, row-level locking conflicts and ITL allocation
TM: DML enqueue -- Generally due to application issues, particularly if foreign key constraints have not been indexed.
ST: Space management enqueue -- Usually caused by too much space management occurring (for example, small extent sizes, lots of sorting, and so on)
HW: High Water Mark -- Concurrent users trying to extend a segment's high-water mark when allocating space.
In Oracle9i and earlier releases, all enqueue wait times are included in this alert.
To determine which enqueues are causing the most waits systemwide:
In Oracle9i and later, examine the V$ENQUEUE_STAT view thus:
SELECT eq_type "Lock", total_req# "Gets", total_wait# "Waits", cum_wait_time FROM V$enqueue_stat WHERE Total_wait# > 0 ;
In Oracle8i and earlier, examine the X$KSQST view thus:
SELECT ksqsttyp "Lock", ksqstget "Gets", ksqstwat "Waits" FROM X$KSQST where KSQSTWAT>0 ;
These queries give the systemwide number of waits for each lock type. Remember that it only takes one long wait to distort the average wait time figures.
You can also examine:
Sessions with high numbers of "enqueue waits" in the V$SESSTAT view
Sampling of the V$LOCK view to find waiting / blocking sessions
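For the V$LOCK sampling mentioned above, one common sketch pairs blockers with waiters on the same resource using only standard V$LOCK columns:

SELECT blocker.sid blocker_sid, waiter.sid waiter_sid,
       waiter.type lock_type, waiter.id1, waiter.id2
  FROM v$lock blocker, v$lock waiter
 WHERE blocker.block = 1
   AND waiter.request > 0
   AND blocker.id1 = waiter.id1
   AND blocker.id2 = waiter.id2;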
TM per-table locks are acquired during a transaction when a table is referenced by a DML statement, so that the object cannot be dropped or altered while the transaction executes. These locks are acquired only when the DML_LOCKS parameter is non-zero.
TM Locks are held for base table/partition operations under the following conditions:
Enabling of referential constraints
Changing constraints from DISABLE NOVALIDATE to DISABLE VALIDATE
Rebuild of an IOT
Create View or Alter View operations
Analyze table compute statistics or validate structure
Parallel DML operations
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-269 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue: DML - contention' event. |
Data Source
(DeltaEnqueueDMLTime/DeltaServiceTime)*100 where:
DeltaEnqueueDMLTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue: DML - contention' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Examine the database locks page and determine the user who is blocking another user and why, then decide the appropriate action.
The HW enqueue is used to serialize the allocation of space above the high-water mark in an object.
This lock is acquired when a segment's high-water mark is moved, which typically is the case during heavy inserts.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-270 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue: HW, Segment High Water Mark - contention' event. |
Data Source
(DeltaEnqueueHWTime/DeltaServiceTime)*100 where:
DeltaEnqueueHWTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue: Segment High Water Mark - contention' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Use Locally Managed Tablespaces.
For dictionary-managed tablespaces:
Recreate the objects and preallocate extents with ALTER TABLE ... ALLOCATE EXTENT statements.
Increasing the number of freelists may also help, as may moving the high-water mark; the benefit depends on the number of freelists already in use.
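As a sketch of both suggestions (the tablespace name, datafile path, table name, and sizes below are hypothetical):

-- Locally managed tablespace, as recommended above
CREATE TABLESPACE app_data DATAFILE '/u01/oradata/app_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

-- Preallocate an extent for an existing segment ahead of a heavy insert load
ALTER TABLE orders ALLOCATE EXTENT (SIZE 100M);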
When Oracle needs to perform a space management operation (such as allocating temporary segments for a sort) the user session acquires a special enqueue called the 'ST' enqueue.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-271 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue: ST, Space Transaction - contention' event. |
Data Source
(DeltaEnqueueSTTime/DeltaServiceTime)*100 where:
DeltaEnqueueSTTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue: Space Transaction - contention' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Ensure that temporary tablespaces are proper temporary tablespaces of type "temporary".
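A quick way to verify this is a sketch against the standard DBA_TABLESPACES view; a true temporary tablespace shows CONTENTS = 'TEMPORARY'. The tablespace name and tempfile path in the CREATE statement are hypothetical:

SELECT tablespace_name, contents, extent_management
  FROM dba_tablespaces
 ORDER BY tablespace_name;

-- One way to create a true temporary tablespace
CREATE TEMPORARY TABLESPACE temp TEMPFILE '/u01/oradata/temp01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;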
Two users are attempting to change the same row.
These locks are of type TX.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-272 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue: TM,TX, Transaction - row lock contention' event. |
Data Source
(DeltaEnqueueRowLockTime/DeltaServiceTime)*100 where:
DeltaEnqueueRowLockTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue: Transaction - row lock contention' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Examine the database locks page and determine the user who is blocking another user and why, then decide the appropriate action.
Oracle keeps note of which rows are locked by which transaction in an area at the top of each data block known as the 'interested transaction list'. The number of ITL slots in any block in an object is controlled by the INITRANS and MAXTRANS attributes. INITRANS is the number of slots initially created in a block when it is first used, while MAXTRANS places an upper bound on the number of entries allowed. Each transaction which wants to modify a block requires a slot in this 'ITL' list in the block.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-273 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue: TX mode 4, Transaction - allocate ITL entry' event. |
Data Source
(DeltaEnqueueAllocITLTime/DeltaServiceTime)*100 where:
DeltaEnqueueAllocITLTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue: TX mode 4, Transaction - allocate ITL entry' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
To increase the number of ITL slots, recreate the table and increase the INITRANS parameter for the object with the contention. An alter table statement can be run to increase the ITL slots by increasing the value for INITRANS, but this will only take effect for new blocks.
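For example (the table and index names are hypothetical; the higher INITRANS value applies only to blocks formatted after the change, as noted above):

ALTER TABLE orders INITRANS 10;
ALTER INDEX orders_pk INITRANS 10;

-- To apply the higher INITRANS to existing blocks, the segment must be rebuilt, for example:
ALTER TABLE orders MOVE INITRANS 10;   -- indexes become unusable and must be rebuilt afterwards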
Caused by the application explicitly running commands of the nature "lock table".
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-274 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'enqueue: UL: User-defined - contention' event. |
Data Source
(DeltaEnqueueUserDefTime/DeltaServiceTime)*100 where:
DeltaEnqueueUserDefTime: difference of 'sum of time waited for sessions of foreground processes on the 'enqueue: User-defined - contention' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This is an application issue. Determine where the application code is locking objects and why. Make relevant application changes if necessary.
Use the "Blocking Sessions" page to find lock holds and waits.
This event occurs mainly when a server process is trying to read a new buffer into the buffer cache but too many buffers are either pinned or dirty and thus unavailable for reuse. The session posts to DBWR then waits for DBWR to create free buffers by writing out dirty buffers to disk.
DBWR may not be keeping up with writing dirty buffers in the following situations:
The I/O system is slow.
There are resources it is waiting for, such as latches.
The buffer cache is so small that DBWR spends most of its time cleaning out buffers for server processes.
The buffer cache is so big that one DBWR process is not enough to free enough buffers in the cache to satisfy requests.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-275 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'free buffer waits' event. |
Data Source
(DeltaFreeBufferWaitsTime/DeltaServiceTime)*100 where:
DeltaFreeBufferWaitsTime: difference of 'sum of time waited for sessions of foreground processes on the 'free buffer waits' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Sometimes the easy solution is to increase the buffer cache to allow for more free blocks. This works in many cases, but if the application is generating a sustained amount of dirty blocks, then increasing the buffer cache may only delay the problem rather than solve it.
If this event occurs frequently, examine the session waits for DBWR to see whether there is anything delaying DBWR.
Run this query to see if the I/O is evenly distributed.
SELECT name, phyrds, phywrts FROM v$filestat a, v$datafile b WHERE a.file# = b.file#
Also look for files having full table scans, using this query:
SELECT name, phyrds, phyblkrd, phywrts FROM v$filestat a, v$datafile b WHERE a.file# = b.file# AND phyrds != phyblkrd
This metric represents the percentage of CPU being used on the host.
Metric Summary
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
10.1.0.x | Every 15 Minutes |
A latch is a low-level internal lock used by Oracle to protect memory structures. Latches are similar to short duration locks that protect critical bits of code. This wait indicates that the process is waiting for a latch that is currently busy (held by another process).
The latch free event is updated when a server process attempts to get a latch, and the latch is unavailable on the first attempt.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-276 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'latch free' event. |
Data Source
(DeltaLatchFreeTime/DeltaServiceTime)*100 where:
DeltaLatchFreeTime: difference of 'sum of time waited for sessions of foreground processes on the 'latch free' event, or any other 'latch:' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Determine which latch is causing the highest amount of contention.
To find the problem latches since database startup, run the following query:
SELECT n.name, l.sleeps FROM v$latch l, v$latchname n WHERE n.latch#=l.latch# and l.sleeps > 0 order by l.sleeps ;
To see latches that are currently a problem on the database run:
SELECT n.name, SUM(w.p3) Sleeps FROM V$SESSION_WAIT w, V$LATCHNAME n WHERE w.event = 'latch free' AND w.p2 = n.latch# GROUP BY n.name;
Take action based on the latch with the highest number of sleeps.
The cache buffers chains latches are used to protect a buffer list in the buffer cache. These latches are used when searching for, adding, or removing a buffer from the buffer cache.
Blocks in the buffer cache are placed on linked lists (cache buffer chains) which hang off a hash table. The hash chain that a block is placed on is based on the DBA and CLASS of the block. Each hash chain is protected by a single child latch. Processes need to get the relevant latch to allow them to scan a hash chain for a buffer so that the linked list does not change underneath them.
Contention on this latch usually means that there is a block that is in great contention (known as a hot block).
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-277 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'latch: cache buffer chains' event. |
Data Source
(DeltaLatchCacheBufferChainsTime/DeltaServiceTime)*100 where:
DeltaLatchCacheBufferChainsTime: difference of 'sum of time waited for sessions of foreground processes on the 'latch: cache buffer chains' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
To identify the heavily accessed buffer chain, and hence the contended for block, look at latch statistics for the cache buffers chains latches using the V$LATCH_CHILDREN view. If there is a specific cache buffers chains child latch that has many more GETS, MISSES, and SLEEPS when compared with the other child latches, then this is the contended for child latch.
This latch has a memory address, identified by the ADDR column.
SELECT addr, sleeps FROM v$latch_children c, v$latchname n WHERE n.name='cache buffers chains' and c.latch#=n.latch# and sleeps > 100 ORDER BY sleeps /
Use the value in the ADDR column joined with the V$BH view to identify the blocks protected by this latch. For example, given the address (V$LATCH_CHILDREN.ADDR) of a heavily contended latch, this queries the file and block numbers:
SELECT file#, dbablk, class, state, TCH FROM X$BH WHERE HLADDR='address of latch';
X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot block.
Many blocks are protected by each latch. One of these buffers will probably be the hot block. Any block with a high TCH value is a potential hot block. Perform this query a number of times, and identify the block that consistently appears in the output.
After you have identified the hot block, query DBA_EXTENTS using the file number and block number to identify the segment.
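For example, a sketch that maps a file number and block number (supplied here as SQL*Plus substitution variables) to the owning segment through DBA_EXTENTS:

SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = &file_no
   AND &block_no BETWEEN block_id AND block_id + blocks - 1;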
There are multiple library cache latches. Each one protects a range of 'hash buckets' and the latch covers all heaps.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-278 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'latch: library cache' event. |
Data Source
(DeltaLatchLibraryCacheTime/DeltaServiceTime)*100 where:
DeltaLatchLibraryCacheTime: difference of 'sum of time waited for sessions of foreground processes on the 'latch: library cache' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Contention for the library cache latches can be caused by excessive parsing of literal SQL. It is advisable to use sharable SQL wherever possible.
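One rough way to spot literal SQL is to group V$SQLAREA entries that share a common prefix. This is only a heuristic sketch (the prefix length and threshold are arbitrary), but statements that appear many times differing only in their literals are bind-variable candidates:

SELECT SUBSTR(sql_text, 1, 60) sql_prefix, COUNT(*) copies
  FROM v$sqlarea
 GROUP BY SUBSTR(sql_text, 1, 60)
HAVING COUNT(*) > 10
 ORDER BY copies DESC;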
When a session's redo entry is larger than the LOG_SMALL_ENTRY_MAX_SIZE parameter, the kernel first allocates a redo copy buffer, protected by a redo copy latch.
The buffer is not used until space is allocated in the log buffer and some header fields have been set. However, the redo copy latch is acquired up front to reduce the work done while holding the redo allocation latch and to prevent further contention.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-279 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'latch: redo copy' event. |
Data Source
(DeltaLatchRedoCopyTime/DeltaServiceTime)*100 where:
DeltaLatchRedoCopyTime: difference of 'sum of time waited for sessions of foreground processes on the 'latch: redo copy' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
The number of redo copy latches is controlled by the init.ora parameter LOG_SIMULTANEOUS_COPIES. If the parameter is not set, it defaults to the number of CPUs.
Because log-generating processes acquire the latch in immediate mode, it is convenient to have enough redo copy latches to reduce contention among foreground processes.
Before flushing out the log buffer, the LGWR will acquire all redo copy latches in a willing-to-wait mode. Thus an excessive number of copy latches will cause contention in the log buffer flushing process.
The number of LGWR redo copy latch allocations is redo writes * number of redo copy latches.
This latch protects the allocation of memory from the shared pool.
If there is contention on this latch, it is often an indication that the shared pool is fragmented.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-280 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'latch: shared pool' event. |
Data Source
(DeltaLatchSharedPoolTime/DeltaServiceTime)*100 where:
DeltaLatchSharedPoolTime: difference of 'sum of time waited for sessions of foreground processes on the 'latch: shared pool' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Shared pool latch contention is often an indication of high hard parsing usually caused by the use of literal values in SQL statements. These statements could otherwise be shared if bind variables were used.
Prior to Oracle Server release 8.1.6, shared pool fragmentation could be exacerbated by a shared pool that was too large. Reducing the size of the shared pool would reduce the contention for this latch.
For Oracle Server release 8.1.6 and later, there should be very little shared pool latch contention. If there is, it is probably a symptom of an application using literals. One possible solution is to use the init.ora parameter CURSOR_SHARING = FORCE.
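For example, assuming a release in which the parameter is dynamically modifiable, the setting can be changed without a restart (otherwise place cursor_sharing = FORCE in the init.ora file):

ALTER SYSTEM SET cursor_sharing = FORCE;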
Oracle tries to find the load lock for the database object so that it can load the object. The load lock is always obtained in exclusive mode, so that no other process can load the same object. If the load lock is busy, the session waits on this event until the lock becomes available.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-281 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'library cache load lock' event. |
Data Source
(DeltaLibraryCacheLoadLockTime/DeltaServiceTime)*100 where:
DeltaLibraryCacheLoadLockTime: difference of 'sum of time waited for sessions of foreground processes on the 'library cache load lock' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
To be waiting for a load lock means that there is a blocker with a higher or incompatible mode. This event in itself is not affected by the parallel server. However, you must have acquired the 'library cache lock' before you get to this point. The 'cache lock' is a DFS lock.
The library cache lock controls the concurrency between clients of the library cache by acquiring a lock on the object handle so that one client can prevent other clients from accessing the same object or the client can maintain a dependency for a long time (no other client can change the object). This lock is also gotten to locate an object in the library cache.
Blocking situations can occur when two sessions compile the same PL/SQL package, or one session is recreating an index while another session is trying to execute a SQL statement that depends on that index.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-282 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'library cache lock' event. |
Data Source
(DeltaLibraryCacheLockTime/DeltaServiceTime)*100 where:
DeltaLibraryCacheLockTime: difference of 'sum of time waited for sessions of foreground processes on the 'library cache lock' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Waiting for a load lock indicates that there is a blocker with a higher or incompatible mode. Locks map to Instance Locks.
The following query will list waiters and the holder of the resource along with the event the resource holder is waiting for.
column h_wait format A20
SELECT s.sid, waiter.p1raw w_p1r, waiter.p2raw w_p2r,
       holder.event h_wait, holder.p1raw h_p1r, holder.p2raw h_p2r,
       count(s.sid) users_blocked, sql.hash_value
  FROM v$sql sql, v$session s, x$kgllk l, v$session_wait waiter, v$session_wait holder
 WHERE s.sql_hash_value = sql.hash_value
   AND l.kgllkadr = waiter.p2raw
   AND s.saddr = l.kgllkuse
   AND waiter.event like 'library cache lock'
   AND holder.sid = s.sid
 GROUP BY s.sid, waiter.p1raw, waiter.p2raw, holder.event, holder.p1raw, holder.p2raw, sql.hash_value;
Library cache pins are used to manage library cache concurrency. Pinning an object causes the heaps to be loaded into memory (if not already loaded). PINS can be acquired in NULL, SHARE or EXCLUSIVE modes and can be considered like a special form of lock. A wait for a "library cache pin" implies some other session holds that PIN in an incompatible mode.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-283 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'library cache pin' event. |
Data Source
(DeltaLibraryCachePinTime/DeltaServiceTime)*100 where:
DeltaLibraryCachePinTime: difference of 'sum of time waited for sessions of foreground processes on the 'library cache pin' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
What to do to reduce these waits depends heavily on what blocking scenario is occurring. A common problem scenario is the use of DYNAMIC SQL from within a PL/SQL procedure where the PL/SQL code is recompiled and the DYNAMIC SQL calls something which depends on the calling procedure.
If there is general widespread waiting then the shared pool may need tuning.
If there is a blocking scenario, collect evidence as described in the following query and contact Oracle support.
The following query will list the waiters and the session holding the pin, along with the wait event the holder is waiting for.
column h_wait format A20
SELECT s.sid, waiter.p1raw w_p1r,
       holder.event h_wait, holder.p1raw h_p1r, holder.p2raw h_p2r, holder.p3raw h_p3r,
       count(s.sid) users_blocked, sql.hash_value
  FROM v$sql sql, v$session s, x$kglpn p, v$session_wait waiter, v$session_wait holder
 WHERE s.sql_hash_value = sql.hash_value
   AND p.kglpnhdl = waiter.p1raw
   AND s.saddr = p.kglpnuse
   AND waiter.event like 'library cache pin'
   AND holder.sid = s.sid
 GROUP BY s.sid, waiter.p1raw, holder.event, holder.p1raw, holder.p2raw, holder.p3raw, sql.hash_value;
The wait event can be caused by truncate operations. Truncate operations cause the DBWR to be posted to flush out the space header.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-284 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'local write wait' event. |
Data Source
(DeltaLocalWriteWaitTime/DeltaServiceTime)*100 where:
DeltaLocalWriteWaitTime: difference of 'sum of time waited for sessions of foreground processes on the 'local write wait' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
Wait time: Up to one second, then loop back and check that buffer is clean.
Parameters:
P1: Absolute file number
P2: Block number
See the Idle Events section in this chapter.
User Action
No user action is necessary.
The system is waiting for space in the log buffer because data is being written into the log buffer faster than LGWR can write it out.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-285 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'log buffer space' event. |
Data Source
(DeltaLogBufferSpaceTime/DeltaServiceTime)*100 where:
DeltaLogBufferSpaceTime: difference of 'sum of time waited for sessions of foreground processes on the 'log buffer space' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Consider making the log buffer bigger if it is small, or moving the log files to faster disks such as striped disks.
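Two statistics in the standard V$SYSSTAT view help confirm log buffer pressure before resizing; non-trivial, steadily increasing values suggest the LOG_BUFFER parameter is too small or LGWR is too slow:

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo buffer allocation retries', 'redo log space requests');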
The system is waiting for a log switch because the log being switched into has not been archived yet.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-286 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 1 | 5 | 1 | %value%%% of service time is spent waiting on the 'log file switch (archiving needed)' event. |
Data Source
(DeltaLogFileSwitchArchTime/DeltaServiceTime)*100 where:
DeltaLogFileSwitchArchTime: difference of 'sum of time waited for sessions of foreground processes on the 'log file switch (archiving needed)' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Check the alert file to make sure that archiving has not stopped due to a failed archive write. To speed up archiving consider adding more archive processes or putting the archive files on striped disks.
If the archiver is slow, then it might be prudent to prevent I/O contention between the archiver process and LGWR by ensuring that archiver reads and LGWR writes are separated. This is achieved by placing logs on alternating drives.
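As a sketch, on releases that support multiple archiver processes the number can be raised dynamically (the value 4 is only an example):

ALTER SYSTEM SET log_archive_max_processes = 4;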
Waiting for a log switch because the system cannot wrap into the next log because the checkpoint for that log has not completed.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-287 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 5 | 50 | 1 | %value%%% of service time is spent waiting on the 'log file switch (checkpoint complete)' event. |
Data Source
(DeltaLogFileSwitchCkptTime/DeltaServiceTime)*100 where:
DeltaLogFileSwitchCkptTime: difference of 'sum of time waited for sessions of foreground processes on the 'log file switch (checkpoint complete)' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Increase the redo log sizes.
To speed up checkpoints, consider making the buffer cache smaller, increasing the DB_BLOCK_CHECKPOINT_BATCH parameter, or adding more DBWR processes. You can also enable the checkpoint process by setting the init.ora parameter CHECKPOINT_PROCESS = TRUE.
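For example, to review the current redo log sizes and add a larger group (the group number, file path, and size below are hypothetical):

SELECT group#, bytes/1024/1024 size_mb, members, status
  FROM v$log;

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04a.log') SIZE 200M;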
Waiting for a log switch because the current log is full and LGWR must complete writing to the current log and open the new log, or because some other request to switch log files is pending.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-288 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'log file switch completion' event. |
Data Source
(DeltaLogFileSwitchCompleteTime/DeltaServiceTime)*100 where:
DeltaLogFileSwitchCompleteTime: difference of 'sum of time waited for sessions of foreground processes on the 'log file switch completion' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
For the log file switch (checkpoint incomplete) event:
Check whether there are too few or too small redo logs. If the system produces enough redo to cycle through all the logs before DBWR has been able to complete the checkpoint, then increase the size or number of redo logs. This is often the easiest solution but may increase time to recovery.
Check if DBWR is slow, possibly due to an overloaded or slow I/O system. Check the DBWR write times, check the I/O system, and distribute I/O if necessary.
When a user session COMMITs (or rolls back), the session's redo information must be flushed to the redo log file. The user session posts LGWR to write all required redo from the log buffer to the redo log file. When LGWR has finished, it posts the user session. The user session waits on this wait event until LGWR posts it back to confirm that all redo changes are safely on disk.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-289 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 30 | Not Defined | 5 | %value%%% of service time is spent waiting on the 'log file sync' event. |
Data Source
(DeltaLogFileSyncTime/DeltaServiceTime)*100 where:
DeltaLogFileSyncTime: difference of 'sum of time waited for sessions of foreground processes on the 'log file sync' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
There are 3 main things you can do to help reduce waits on "log file sync":
Tune LGWR to get good throughput to disk.
Do not put redo logs on RAID 5.
Place log files on dedicated disks.
Consider putting log files on striped disks.
If there are lots of short-duration transactions, see if it is possible to BATCH transactions together so there are fewer distinct COMMIT operations. Each commit must confirm that the relevant redo is on disk. Although commits can be piggybacked by Oracle, reducing the overall number of commits by batching transactions can have a very beneficial effect (a sketch of this approach follows this list).
Determine whether any activity can safely be done with NOLOGGING / UNRECOVERABLE options.
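A minimal PL/SQL sketch of the batching suggestion above (the table name and batch size are hypothetical); committing once per batch instead of once per row reduces the number of 'log file sync' waits:

BEGIN
  FOR i IN 1 .. 10000 LOOP
    INSERT INTO audit_log (id, msg) VALUES (i, 'row ' || i);   -- audit_log is a hypothetical table
    IF MOD(i, 100) = 0 THEN
      COMMIT;                                                   -- one redo flush per 100 rows
    END IF;
  END LOOP;
  COMMIT;
END;
/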
Used as part of the 'alter system archive log change scn' command. Oracle is waiting for the current log of an open thread other than its own to be archived.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-290 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 5 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'log switch/archive' event. |
Data Source
(DeltaLogSwitchArchTime/DeltaServiceTime)*100 where:
DeltaLogSwitchArchTime: difference of 'sum of time waited for sessions of foreground processes on the 'log switch/archive' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
No user action is necessary.
The session is waiting for the pipe send timer to expire or for space to be made available in the pipe.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-291 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'pipe put' event. |
Data Source
(DeltaPipePutTime/DeltaServiceTime)*100 where:
DeltaPipePutTime: difference of 'sum of time waited for sessions of foreground processes on the 'pipe put' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Sessions waiting on this event depend on space being freed in the pipe, not on any one other session. You can query X$KGLOB to find the pipe name; there is virtually no way of finding the pipe name other than via SQL, as there are no useful addresses.
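For example, the following sketch lists the sessions currently waiting on 'pipe put' and the pipes defined in the instance (V$DB_PIPES avoids the need to decode X$KGLOB directly):

-- Sketch: current 'pipe put' waiters and the pipes known to the instance.
select sid, p1, p2, wait_time
  from v$session_wait
 where event = 'pipe put';

select ownerid, name, type, pipe_size
  from v$db_pipes;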
This wait event occurs while waiting for a lock on the data dictionary cache specified by "cache id". When the database is running in shared mode (Parallel Server), the LCK0 process is signaled to get the row cache lock on behalf of the foreground process waiting on this event; LCK0 acquires the lock asynchronously. In exclusive mode, the foreground process tries to get the lock itself.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-292 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'row cache lock' event. |
Data Source
(DeltaRowCacheLockTime/DeltaServiceTime)*100 where:
DeltaRowCacheLockTime: difference of 'sum of time waited for sessions of foreground processes on the 'row cache lock' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
If this event shows up a lot, consider increasing the shared pool so that more data dictionary can be cached.
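Before resizing the shared pool, it can help to see which dictionary cache is missing most often. A sketch:

-- Sketch: dictionary cache get-miss percentages since instance startup.
select parameter, gets, getmisses,
       round(getmisses / gets * 100, 2) as miss_pct
  from v$rowcache
 where gets > 0
 order by miss_pct desc;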
The server is sending a break or reset message to the client. The session running on the server is waiting for a reply from the client.
These waits are caused by an application attempting to:
Select from a closed cursor
Select on a cursor after the last row has already been fetched and no data has been returned
Select on a non-existent table
Insert a duplicate row into a uniquely indexed table
Issue a query with invalid syntax
If the value of v$session_wait.p2 for this event is 0, a reset was sent to the client. A non-zero value means a break was sent to the client.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-293 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net break/reset to client' event. |
Data Source
(DeltaNetResetToClientTime/DeltaServiceTime)*100 where:
DeltaNetResetToClientTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net break/reset to client' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
If these waits are significant, track down the application logic producing these errors to reduce them. If you are using Oracle9i or higher, check the "parse count (failures)" statistic in v$sysstat to see whether statements are being parsed that reference unknown columns or tables. Note that "parse count (failures)" does not increase for SQL with invalid syntax.
The clearest method to track down the root cause of the error is to run tracing on the users experiencing the wait. Their trace files will contain the SQL statements failing and generating the break/reset wait.
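A sketch of the two checks described above, using cumulative values since instance and session startup:

-- Sketch: failed parses, and the sessions that have accumulated the most time on this event.
select value as parse_failures
  from v$sysstat
 where name = 'parse count (failures)';

select sid, total_waits, time_waited
  from v$session_event
 where event = 'SQL*Net break/reset to client'
 order by time_waited desc;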
The server is sending a break or reset message to the client. The session running on the server is waiting for a reply from the client.
These waits are caused by an application attempting to:
Select from a closed cursor
Select on a cursor after the last row has already been fetched and no data has been returned
Select on a non-existent table
Insert a duplicate row into a uniquely indexed table
Issue a query with invalid syntax
If the value of v$session_wait.p2 for this event is 0, a reset was sent to the client. A non-zero value means a break was sent to the client.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-294 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net break/reset to dblink' event. |
Data Source
(DeltaNetResetToDblinkTime/DeltaServiceTime)*100 where:
DeltaNetResetToDblinkTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net break/reset to dblink' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
If these waits are significant, track down the application logic producing these errors to reduce them. If you are using Oracle9i or higher, check the "parse count (failures)" statistic in v$sysstat to see whether statements are being parsed that reference unknown columns or tables. Note that "parse count (failures)" does not increase for SQL with invalid syntax.
The shadow process is waiting for confirmation of a send to the client process.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-295 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net message to client' event. |
Data Source
(DeltaNetMsgToClientTime/DeltaServiceTime)*100 where:
DeltaNetMsgToClientTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net message to client' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This event could indicate network latency problems.
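One way to gauge whether network latency is a plausible cause is to look at the average wait per occurrence of this event (a sketch using cumulative values since instance startup):

-- Sketch: AVERAGE_WAIT is reported in centiseconds.
select event, total_waits, time_waited, average_wait
  from v$system_event
 where event = 'SQL*Net message to client';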
The shadow process is waiting for confirmation of a send to the client process.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-296 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net message to dblink' event. |
Data Source
(DeltaNetMsgToDblinkTime/DeltaServiceTime)*100 where:
DeltaNetMsgToDblinkTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net message to dblink' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This event could indicate network latency problems.
The shadow process has received part of a call from the client process (for example, SQL*Plus, Pro*C, or JDBC) in the first network packet and is waiting for more data for the call to be complete. Examples are large SQL or PL/SQL blocks and INSERT statements with large amounts of data.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-297 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net more data from client' event. |
Data Source
(DeltaNetMoreFromClientTime/DeltaServiceTime)*100 where:
DeltaNetMoreFromClientTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net more data from client' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This event could indicate:
Network latency problems
tcp_no_delay configuration issues
Large array insert
Soft parsing that ships large SQL and PL/SQL text across the network; using stored procedures and packages can help alleviate this. The statistics query after this list can help confirm heavy client-to-server traffic.
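A sketch of the instance-wide SQL*Net traffic statistics referred to above:

-- Sketch: cumulative SQL*Net traffic statistics since instance startup.
select name, value
  from v$sysstat
 where name in ('bytes sent via SQL*Net to client',
                'bytes received via SQL*Net from client',
                'SQL*Net roundtrips to/from client');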
The shadow process has received part of a call from the client process (for example, SQL*Plus, Pro*C, or JDBC) in the first network packet and is waiting for more data for the call to be complete. Examples are large SQL or PL/SQL blocks and INSERT statements with large amounts of data.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-298 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net more data from dblink' event. |
Data Source
(DeltaNetMoreFromDblinkTime/DeltaServiceTime)*100 where:
DeltaNetMoreFromDblinkTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net more data from dblink' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This event could indicate:
Network latency problems
tcp_no_delay configuration issues
Large array insert
Large number of columns or wide column data
The shadow process has completed a database call and is returning data to the client process (for example, SQL*Plus). The amount of data being sent requires more than one send to the client. The shadow process waits for the client to receive the last send. This happens, for example, in a SQL statement that returns a large amount of data.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-299 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net more data to client' event. |
Data Source
(DeltaNetMoreToClientTime/DeltaServiceTime)*100 where:
DeltaNetMoreToClientTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net more data to client' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This event could indicate:
Network latency problems
tcp_no_delay configuration issues
Large array insert
Large number of columns or wide column data
The shadow process has completed a database call and is returning data to the client process (for example, SQL*Plus). The amount of data being sent requires more than one send to the client. The shadow process waits for the client to receive the last send. This happens, for example, in a SQL statement that returns a large amount of data.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-300 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'SQL*Net more data to dblink' event. |
Data Source
(DeltaNetMoreToDblinkTime/DeltaServiceTime)*100 where:
DeltaNetMoreToDblinkTime: difference of 'sum of time waited for sessions of foreground processes on the 'SQL*Net more data to dblink' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
This event could indicate:
Network latency problems
tcp_no_delay configuration issues
Large array insert
Large number of columns or wide column data
This metric represents the percentage of time spent waiting, instance-wide, for resources or objects during this sample period.
This test checks the percentage of time spent waiting, instance-wide, for resources or objects during this sample period. If the % Wait Time is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-301 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | Not Defined | Not Defined | 3 | %value%%% of database service time is spent waiting. |
Table 4-302 Metric Summary Table
Target Version | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | Every Minute | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 3 | Generated By Database Server |
Data Source
DeltaTotalWait / (DeltaTotalWait + DeltaCpuTime) where:
DeltaTotalWait: difference of 'sum of time waited for all wait events in v$system_event' between sample end and start
DeltaCpuTime: difference of 'select value from v$sysstat where name='CPU used by this session' between sample end and start
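As a rough cross-check of this ratio (a sketch; the agent computes deltas between samples from the same views, whereas this query uses cumulative values since instance startup):

-- Sketch: cumulative instance-wide wait time and CPU time, both in centiseconds.
select (select sum(time_waited) from v$system_event) as total_wait,
       (select value from v$sysstat
         where name = 'CPU used by this session') as total_cpu
  from dual;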
User Action
Investigate which specific wait events are responsible for the bulk of the wait time. Individual wait events may identify unique problems within the database. Diagnosis will be tailored where appropriate through drilldowns specific to individual wait events.
The session is waiting for a buffer to be written. The write is caused by normal aging or a cross instance call.
A user session wants to modify a block that is part of DBWR's current write batch. When DBWR gathers buffers to write, it marks them as 'being written'; all the collected buffers are then written to disk, and the flag is cleared as each buffer is written. The 'write complete waits' event means a session needed a buffer while this flag was set. Wait time: up to one second, then loop back and check that the buffer is clean. Parameters: P1 - absolute file number; P2 - block number.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-303 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | 20 | Not Defined | 3 | %value%%% of service time is spent waiting on the 'write complete waits' event. |
Data Source
(DeltaWriteCompleteWaitsTime/DeltaServiceTime)*100 where:
DeltaWriteCompleteWaitsTime: difference of 'sum of time waited for sessions of foreground processes on the 'write complete waits' event' between sample end and start
DeltaServiceTime: difference of 'sum of time waited for sessions of foreground processes on events not in IdleEvents + sum of 'CPU used when call started' for sessions of foreground processes' between sample end and start
See the Idle Events section in this chapter.
User Action
Configuring multiple DBWR processes, enabling asynchronous I/O, and/or increasing the size of the buffer cache may help reduce these waits.
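The relevant instance settings can be reviewed with a query such as the following sketch (parameter availability varies by release and platform):

-- Sketch: settings related to DBWR processes, asynchronous I/O, and buffer cache size.
select name, value
  from v$parameter
 where name in ('db_writer_processes', 'dbwr_io_slaves',
                'disk_asynch_io', 'db_cache_size', 'db_block_buffers');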
This metric category contains the metrics that represent the number of sessions waiting on each non-idle wait event. High waiting levels are caused by excessive contention.
This metric represents the number of sessions waiting on a given wait event at the sample time.
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-304 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
8.1.7.4; 9.0.1.x; 9.2.0.x | Every Minute | After Every Sample | > | Not Defined | Not Defined | 3 | %value% sessions are waiting for event %event%. |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Wait Event" object.
If warning or critical threshold values are currently set for any "Wait Event" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Wait Event" object, use the Edit Thresholds page.
Data Source
For each metric index:
select count (1)
User Action
Evaluate the various types of wait activity using the real-time and historical performance monitoring capabilities of Enterprise Manager.
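A sketch of one way to see, at a point in time, how many sessions are waiting on each event (the metric itself excludes idle events; see the Idle Events section):

-- Sketch: sessions currently in a wait, grouped by event.
select event, count(*) as waiting_sessions
  from v$session_wait
 where state = 'WAITING'
 group by event
 order by waiting_sessions desc;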
This metric category contains the waits by wait class metrics.
This metric represents the average number of users that have made a call to the database and that are waiting for an event, such as an I/O or a lock request, to complete. If the number of users waiting on events increases, it indicates that either more users are running, increasing workload, or that waits are taking longer, for example when maximum I/O capacity is reached and I/O times increase.
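On Oracle 10g, the underlying figures can be inspected directly; the following sketch reports the average number of waiting sessions per wait class over the most recent metric interval (assuming the V$WAITCLASSMETRIC and V$SYSTEM_WAIT_CLASS views available in that release):

-- Sketch: average waiter count per wait class for the last metric interval.
select c.wait_class, m.average_waiter_count
  from v$waitclassmetric m, v$system_wait_class c
 where m.wait_class_id = c.wait_class_id
 order by m.average_waiter_count desc;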
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-305 Metric Summary Table
Target Version | Key | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|
10.1.0.x | class: "Administrative" | Every 15 Minutes | After Every Sample | > | 10 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Application" | Every 15 Minutes | After Every Sample | > | 10 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Cluster" | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Commit" | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Concurrency" | Every 15 Minutes | After Every Sample | > | 10 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Configuration" | Every 15 Minutes | After Every Sample | > | 10 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Network" | Every 15 Minutes | After Every Sample | > | 10 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Other" | Every 15 Minutes | After Every Sample | > | 10 | Not Defined | 1 | Not Defined |
10.1.0.x | class: "Scheduler" | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Not Defined |
10.1.0.x | class: "System I/O" | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Not Defined |
10.1.0.x | class: "User I/O" | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Not Defined |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Wait Class" object.
If warning or critical threshold values are currently set for any "Wait Class" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Wait Class" object, use the Edit Thresholds page.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page.
This metric represents the percentage of time that database calls spent waiting for an event. Although there is no 'correct' value for this metric, it can be used to detect a change in the operation of a system, for example, an increase in Database Time Spent Waiting from 50% to 75%. ('No correct value' means that there is no single value that can be applied to any database. The value is a characteristic of the system and the applications running on the system.)
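On Oracle 10g, a comparable per-class figure can be inspected directly; the following sketch uses the DBTIME_IN_WAIT column of V$WAITCLASSMETRIC, which reports the percentage of database time spent in waits of each class over the most recent metric interval:

-- Sketch: percentage of database time spent waiting, per wait class, for the last metric interval.
select c.wait_class, m.dbtime_in_wait
  from v$waitclassmetric m, v$system_wait_class c
 where m.wait_class_id = c.wait_class_id
 order by m.dbtime_in_wait desc;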
Metric Summary
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 4-306 Metric Summary Table
Target Version | Key | Server Evaluation Frequency | Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|---|---|
10.1.0.x | class: "Administrative" | Every Minute | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Application" | Every Minute | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Cluster" | Every Minute | Every 15 Minutes | After Every Sample | > | 50 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Commit" | Every Minute | Every 15 Minutes | After Every Sample | > | 50 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Concurrency" | Every Minute | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Configuration" | Every Minute | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Network" | Every Minute | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Other" | Every Minute | Every 15 Minutes | After Every Sample | > | 30 | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "Scheduler" | Every Minute | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "System I/O" | Every Minute | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
10.1.0.x | class: "User I/O" | Every Minute | Every 15 Minutes | After Every Sample | > | Not Defined | Not Defined | 1 | Generated By Database Server |
Multiple Thresholds
For this metric you can set different warning and critical threshold values for each "Wait Class" object.
If warning or critical threshold values are currently set for any "Wait Class" object, those thresholds can be viewed on the Metric Detail page for this metric.
To specify or change warning or critical threshold values for each "Wait Class" object, use the Edit Thresholds page.
User Action
View the latest Automatic Database Diagnostic Monitor (ADDM) report. For a more detailed analysis, run ADDM from the Advisor Central link on the Database Home page. ADDM will highlight the source of increased time spent in wait events.