Taking stock of the top 10 performance impacts from new features in Oracle 11g
Any new version of Oracle draws a lot of attention to its new features, but early in their adoption those same features are often the first thing to cause trouble: a lack of understanding of the new technology, and old environments not yet adapted to the product, mean that technical support and operations teams always go through a long period of adjustment.
Please note: we don't recommend blindly shutting down and discarding Oracle's new features. We recommend making the adjustments that are right for you when you actually run into a problem.
Here's an inventory of the new annoyances that come with the new features in Oracle 11g, which can be used as a reference if you're preparing for or just stepping into this new release.
1
Adaptive Direct Path Read
Oracle Database 11g introduces a new feature whereby full table scans can be performed via Direct Path Read. The logic behind this change: if the large volume of data read by a full table scan is only occasional, bypassing the Buffer Cache avoids flooding it with that data.
However, reality is often harsh: in many business systems full table scans are the prevailing norm, and converting all full table scans of large tables into Direct Path Reads is less efficient than caching them in the Buffer Cache; Direct Path Read then becomes a serious burden instead.
Of course, for small tables Oracle still allows a full table scan via the Buffer Cache, as this is likely to be faster and has little impact on performance. The boundary is governed by the implicit parameter _small_table_threshold: if a table is larger than five times this small-table threshold, Direct Path Read is automatically used instead of a buffered full table scan.
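To see where this threshold sits in your own instance, the usual diagnostic query against the fixed tables can be used; a sketch (requires SYS access, and the value is expressed in database blocks):

SQL> SELECT a.ksppinm AS name, b.ksppstvl AS value
       FROM x$ksppi a, x$ksppsv b
      WHERE a.indx = b.indx
        AND a.ksppinm = '_small_table_threshold';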
If you run into the negative effects of this feature, the initialization parameter _serial_direct_read can be set to disable serial direct path reads. Its default value is AUTO; setting it to NEVER disables 11g's automatic direct path read behavior. The parameter can be modified dynamically at the instance or session level, without restarting the instance (it can also be combined with Event 10949).
SQL> alter system set "_serial_direct_read"=auto; SQL> alter system set "_serial_direct_read"=never;
The following AWR excerpt shows typical DPR symptoms: direct path read is the wait event consuming the largest share of DB Time in this report.
Combined with an ASH report, the picture is even clearer at a glance: the SQL statements performing full table scans (Table Access Full) are all executed via Direct Path Read:
2
Adaptive Log File Sync
Tuning Log File Sync waits is a perennial task in Oracle databases, and these waits become very prominent whenever redo log write performance fluctuates.
In version 11.2.0.3, Oracle changed the default value of the implicit parameter _use_adaptive_log_file_sync to TRUE, which has caused many abnormal Log File Sync waits; the problem has been around for a long time, yet many Oracle users are still unaware of it.
After a foreground process commits a transaction (commit), LGWR must write the redo out to disk, and the foreground process enters a Log File Sync wait during that time.
In previous versions, LGWR notified the foreground process once the write completed; this is known as post/wait mode. In 11gR2, to optimize this handshake, the foreground process can instead poll at regular intervals to check the progress of the write after handing it to LGWR; this is known as polling mode. In 11.2.0.3 the feature is enabled by default, and the parameter means that the database can adaptively choose and switch between post/wait and polling.
The _use_adaptive_log_file_sync parameter is described as: Adaptively switch between post/wait and polling. It is exactly this switching that has surfaced a number of bugs, making Log File Sync waits abnormally high instead; if you observe such a phenomenon on 11.2.0.3, it is most likely related to this parameter.
When encountering problems, the common fix is to set _use_adaptive_log_file_sync to FALSE and return to the previous post/wait mode, which usually resolves the issue.
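A minimal sketch of reverting to post/wait mode (the parameter can be changed online in 11.2, so no restart should be needed, but verify in your version):

SQL> alter system set "_use_adaptive_log_file_sync"=false;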
3
Adaptive Cursor Sharing
Oracle SQL uses a cursor-sharing mechanism: by using bind variables, multiple executions of a SQL statement can share a single cursor, reducing the shared memory and CPU resources spent on parsing SQL statements.
However, one execution plan does not always suit all bind values. To generate execution plans that are as accurate as possible, Oracle Database 11g introduced Adaptive Cursor Sharing: when sharing SQL, more factors are taken into account, and if the benefit of using multiple execution plans outweighs the resource overhead, multiple execution plans are generated for a single SQL statement that uses bind variables.
Adaptive Cursor Sharing allows cursors to be shared intelligently, but only for statements that use bind variables. Nevertheless, this feature can sometimes make a previously stable execution plan unstable. If you have determined that the system does not need the extra adaptive analysis and plan changes, or if it is suffering from unstable execution plans, it may be necessary to adjust the use of this feature.
To turn off this feature, set the following implicit parameters:
SQL> alter session set"_optimizer_extended_cursor_sharing_rel"=none; SQL> alter session set"_optimizer_extended_cursor_sharing"=none; SQL> alter session set"_optimizer_adaptive_cursor_sharing"=false;
4
Oracle 11g Password Deferred Authentication
In Oracle 11g, Oracle introduced a new feature called "Password Deferred Authentication" to improve security. Its effect: if a user tries to log in with an incorrect password, the verification time before each subsequent login attempt grows as the number of failed logins increases, slowing down repeated password-guessing attacks against the database.
However, in ordinary systems it is very common that, after a password change, some overlooked clients keep retrying with the old password over and over, causing long Library Cache Lock waits inside the database.
If this type of problem is encountered, the feature can be turned off with Event 28401 to eliminate the effect; the following command records the change in the server parameter file.
ALTER SYSTEM SET EVENT = '28401 TRACE NAME CONTEXT FOREVER, LEVEL 1' SCOPE = SPFILE;
In an AWR report this type of problem typically looks as follows: in the Top 5 wait events you may see significant Library Cache Lock waits. The following example comes from a real system running 11.2.0.3.0.
In such cases, the Time Model section shows the telltale indicator: connection management call elapsed time dominates DB Time, which directly indicates that the waits were generated while database connections were being established.
This type of issue is common in Oracle 11g, and the corresponding record can be found on MOS: High 'library cache lock' Wait Time Due to Invalid Login Attempts (1309738.1). In addition, Oracle 11g enables case-sensitive password verification by default; if you upgrade from Oracle 10g, you need to pay special attention to this change, which is controlled by the initialization parameter SEC_CASE_SENSITIVE_LOGON.
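If the case-sensitivity change is causing login failures after an upgrade, a sketch of reverting to the 10g behavior (weigh the security trade-off first):

SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;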
5
_datafile_write_errors_crash_instance - datafile write errors
Starting with version 11.2.0.2, a new implicit parameter, _datafile_write_errors_crash_instance, was introduced to the database; its meaning can be read straight from the name: crash the database instance when a datafile write error occurs.
Why introduce this parameter? What problem lies behind it?
I have written about several such cases in my book "Data Security Alert": in archivelog mode, when a write error occurs on a (non-SYSTEM) datafile, Oracle automatically takes the datafile offline, which has caused a great deal of havoc. A similar error log might look like this:
Fri Jan 13 19:32:21 2013
KCF: write/open error block=0xf1fa6 online=1
     file=73 /dev/rods_gm05
     error=27063 txt: 'IBM AIX RISC System/6000 Error: 22: Invalid argument
Additional information: -1
Additional information: 557056'
Automatic datafile offline due to write error on file
73: /dev/rods_gm05
In light of the dilemma many users were experiencing, Oracle made a change, recorded on MOS as Bug 7691270 - Crash the DB in case of write errors (rather than just offline files).
Prior to 11.2.0.2, if the database was running in archivelog mode and a write error occurred on a non-SYSTEM tablespace file, the database would take the affected file offline; starting with 11.2.0.2, the database crashes the instance instead of offlining the file. Note: in noarchivelog mode, or if a SYSTEM file suffers the error, the database has always simply crashed.
So the answer is now clear: to remove the risk and uncertainty of datafiles being silently taken offline, Oracle changed the default behavior and introduced _datafile_write_errors_crash_instance to control whether the instance crashes outright.
If you cannot accept this behavior, set the parameter _datafile_write_errors_crash_instance to FALSE.
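A sketch of restoring the pre-11.2.0.2 offline behavior:

SQL> alter system set "_datafile_write_errors_crash_instance"=false scope=spfile;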
6
_optimizer_use_feedback - Cardinality feedback for the optimizer
Cardinality Feedback, introduced in Oracle 11.2, uses information collected during SQL execution to dynamically adjust the execution plan, addressing situations such as stale statistics, missing histograms, or inaccurate histogram-based cardinality estimates.
Oracle's intent is to improve the accuracy of execution plans, but in some cases we encounter the opposite: a SQL statement performs best the first time it is executed and then degrades on subsequent runs.
The initialization parameter _optimizer_use_feedback controls this feature; setting it to FALSE turns it off:
alter system set "_optimizer_use_feedback"=false;
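To check whether cardinality feedback is the reason new child cursors keep appearing for a statement, a sketch against v$sql_shared_cursor (the USE_FEEDBACK_STATS column exists in 11.2; substitute the sql_id in question):

SQL> SELECT sql_id, child_number, use_feedback_stats
       FROM v$sql_shared_cursor
      WHERE sql_id = '&sql_id';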
7
deferred_segment_creation - deferred segment creation
In Oracle 11.2, when we create an empty table or an empty partition, Oracle does not immediately allocate the initial segment and space; to speed up creation, the actual table segment is deferred until the first row of data is inserted.
This feature is enabled via the DEFERRED_SEGMENT_CREATION parameter and defaults to TRUE. Delaying segment creation saves space, speeds up the initialization process, and is an optimization geared towards performance and resources.
One problem with this new feature is that these empty tables are not included when doing an export/import with exp/imp, which can lead to missing objects.
If you find this feature bothersome, you can turn it off by modifying the parameter:
alter system set deferred_segment_creation=false scope=spfile;
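For empty tables that were already created without a segment, a common workaround is to force segment allocation so that exp picks them up; a sketch, assuming a table named T:

SQL> ALTER TABLE t ALLOCATE EXTENT;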
8
_resource_manager_always_on - Resource Manager
In 11g, Oracle enables the Resource Manager by default, and it kicks in from time to time, which may trigger contention.
You may see similar events in the Top 5 wait events of an AWR report:
Two implicit parameters work together here; when you don't need the Resource Manager, they can turn it off completely:
SQL> alter system set "_resource_manager_always_off"=true scope=spfile; SQL> alter system set "_resource_manager_always_on"=false scope=spfile;
9
_gc_policy_time - DRM management in RAC clusters
DRM is short for Dynamic Resource Management. In Oracle RAC, every data block is managed by one instance, called its Master, which tracks the state of the blocks under its jurisdiction, including their locks, and authorizes cross-instance access.
If the master node of a data block can be changed dynamically to follow wherever the block is accessed most frequently, the corresponding GC grant messages are reduced substantially. The DRM feature was created based on these considerations. But early DRM implementations caused short performance freezes during the re-mastering process, which was intolerable in many critical environments.
If you wish to disable the DRM feature, this can be done by setting _gc_policy_time and _gc_undo_affinity together:
alter system set "_gc_policy_time" = 0 scope=spfile; alter system set "_gc_undo_affinity" = false scope=spfile;
Many of these new features have a large performance impact when used improperly, simply because DBAs don't know enough about their underlying principles. The Whiteout platform checks the new-feature items in detail, helping DBAs quickly identify hidden problems in their systems.
10
_cleanup_rollback_entries, _undo_autotune - UNDO cleanup and tuning
In UNDO management, how to set the retention time, clean up rollback segment entries, and free UNDO space is very important in databases with high transaction rates.
_cleanup_rollback_entries - specifies the number of undo entries processed per batch during rollback; it defaults to 100 and can be set higher to increase rollback speed.
_undo_autotune - automatically adjusts the undo retention time. With _undo_autotune=true, the undo_retention setting no longer applies and Oracle determines tuned_undo_retention on its own.
The following settings adjust these features when needed:
alter system set "_undo_autotune" = false scope=spfile; alter system set "_cleanup_rollback_entries" = 1000 scope=spfile;