Friday, March 20, 2015

Tuning Tips for the ATM Performance Testing Suite for OpenEdge DataServers

Introduction

Performance testing is used to determine the speed or effectiveness of a computer, network, product, or device. The process can involve quantitative tests done in a lab under ideal conditions, such as measuring response time or the number of transactions per second (TPS) the database sustains.

The ATM benchmark has served as a database performance evaluation tool for many years. The ATM application has been modified to run through the DataServers, and we run it against both the Oracle and MSS DataServers.

Database Parameters

Server-side parameters

This should be a test of the database engine, so bottlenecks involving disk I/O, shared memory, and the before-image (BI) file need to be eliminated. For each machine we assessed the amount of free memory and the amount of disk space available. The following guidelines were used:

Load the entire database into shared memory.  If this is not initially possible, change the SCALE of the database to a smaller number so that the database can totally reside in shared memory.

DBBSZ      -  database block size (4 KB on Linux and Windows, 8 KB on Solaris)
SCALE      -  database size to load (10)
NUMLOADERS -  number of load processes (4)
CLSZ       -  BI cluster size (196608 KB)
BIBSZ      -  BI block size (16 KB)
NUMAPW     -  number of APWs
-n         -  number of users (200)
-L         -  lock table size (10240 entries)
-B         -  database buffer pool size (64000 buffers)
-bibufs    -  number of BI buffers (300)
-spin      -  spin lock retries (5,000 times the number of CPUs on the machine)
-napmax    -  maximum nap time (65)
-rand      -  random number generator (1)
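Put together, the server startup can be sketched as below. The database name atm is illustrative, and the proserve line is shown as a comment because it needs an OpenEdge installation; the CPU-count arithmetic simply applies the -spin guideline above, assuming a Linux/Unix host:

```shell
# Guideline from the list above: -spin = 5,000 x number of CPUs on the machine.
CPUS=$(getconf _NPROCESSORS_ONLN)
SPIN=$((CPUS * 5000))
echo "computed -spin: $SPIN"

# Hypothetical broker startup for an 'atm' database using the values above
# (requires an OpenEdge installation, so shown as a comment):
# proserve atm -n 200 -L 10240 -B 64000 -bibufs 300 -spin $SPIN -napmax 65
```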

Client-side parameters

The standard ATM client parameters specify -rand 2 for the type of random number generator. A value of 1 (the default) tells OpenEdge to use the original generator; a value of 2 selects the alternate. The original generator always produces the same random sequence; that is, the numbers it generates are random, but each session yields the same set of numbers as the last session. If you need a different sequence of random numbers each time, specify the alternate generator. This generator returns numbers from a pseudorandom sequence rather than a truly random one.

All the clients run the application with the -b option, i.e., in batch mode.

Tuning Tips for Databases

  • Place the OpenEdge installation on a different file system
  • Database files and BI files should be on different file systems
  • Fit the entire database into shared memory.  If there is not enough shared memory available, use the SCALE factor to reduce the size of the database so that it will fit.
  • DB Reads (promon) should be below 200 per second.
  • Checkpoints (promon R&D) should occur no more frequently than once per minute, and preferably once every two minutes.
  • Buffer hits (promon) should be 99% or higher.
  • Waits for Record Locks (promon) should be less than 5%.
  • Waits for BI Buffers (promon) should be less than 5%.
  • Writes by APW should be greater than 95%.
  • Writes by BIW should be greater than 95%.

  


Tuning Tips for Operating Systems
Best practices for managing Windows system services:

  • Put heavyweight application services in manual mode and stop them if they are not required, e.g. the Oracle service, SQL Server, the Progress AdminServer, etc.
  • Disable unnecessary services running on the machine.
  • Put the services you want to stop in manual mode; otherwise they will restart on reboot.
  • Defragment the disk partitions (type "dfrg.msc" at the Run prompt) for better performance and space management.
  • System cleanup also helps: clear unnecessary files from the temp folder and the Recycle Bin (type "cleanmgr" at the Run prompt).
(Disk defragmentation and cleanup can be done once a month, or scheduled.)
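On the command line, moving a service to manual start and stopping it can be sketched with the built-in sc utility (the service name below is an example; list the actual names on your machine with "sc query"):

```bat
REM Set the Oracle database service to manual start so it stays down after reboot
sc config "OracleServiceORCL" start= demand
REM Stop it for the current session
sc stop "OracleServiceORCL"
```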




Oracle database Performance Tuning

Oracle database Performance Tuning FAQ:
Remember: The best performance comes from the unnecessary work you don't do.

Why and when should one tune?

One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
  • The speed of computing might be wasting valuable human time (users waiting for response);
  • Enable your system to keep-up with the speed business is conducted; and
  • Optimize hardware usage to save money (companies are spending millions on hardware).
Although this site is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.

Where the tuning effort should be directed?

Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
  • Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
  • Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.

  • Memory Tuning:
Properly size your database buffers (shared pool, buffer cache, log buffer, etc) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.
  • Disk I/O Tuning:
Database files needs to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
  • Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
  • Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.

What tools/utilities does Oracle provide to assist with performance tuning?

Oracle provides the following tools/utilities to assist with performance monitoring and tuning, all of which come up elsewhere in this FAQ:
  • EXPLAIN PLAN and AUTOTRACE, to inspect execution plans and statistics;
  • SQL Trace with TKPROF (and TIMED_STATISTICS), to profile individual statements;
  • STATSPACK and the older UTLBSTAT/UTLESTAT scripts, for instance-wide snapshots;
  • the V$ dynamic performance views (V$SESSION_WAIT, V$SYSTEM_EVENT, V$OBJECT_USAGE, etc.).

When is cost based optimization triggered?

It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.

Generally, the CBO can change the execution plan when you:
  • Change statistics of objects by doing an ANALYZE;
  • Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
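As a sketch, refreshing statistics on a single table can be done with ANALYZE, as mentioned above, or with the DBMS_STATS package on later releases (the SCOTT.EMP table is illustrative):

```sql
-- Classic approach referenced above:
ANALYZE TABLE scott.emp COMPUTE STATISTICS;

-- DBMS_STATS equivalent (Oracle 8i and later); cascade => TRUE also gathers index statistics:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);
```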

How can one optimize %XYZ% queries?

It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
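For example, if there is an index on the searched column, a hint can force a fast full scan of that index (the table and index names are illustrative; note that the query must be answerable from the index alone for the hint to apply):

```sql
-- Scan the (smaller) index instead of the table for a double-wildcard search.
SELECT /*+ INDEX_FFS(e emp_name_idx) */ ename
FROM   emp e
WHERE  ename LIKE '%XYZ%';
```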

Where can one find I/O statistics per table?

The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
For more details, look at the header comments in the catio.sql script.

My query was fine last week and now it is slow. Why?

The likely cause of this is because the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
  • Which tables are currently analyzed? Were they previously analyzed? (i.e., was the query using RBO before and CBO now?)
  • Has OPTIMIZER_MODE been changed in INIT.ORA?
  • Has the DEGREE of parallelism been defined/changed on any table?
  • Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
  • Have the statistics changed?
  • Has the SPFILE/ INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
  • Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
  • Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.

Does Oracle use my index or not?

One can use the index monitoring feature to check whether indexes are used by an application. When the MONITORING USAGE property is set for an index, one can query the V$OBJECT_USAGE view to see if the index is being used. Here is an example:
SQL> CREATE TABLE t1 (c1 NUMBER);
Table created.
 
SQL> CREATE INDEX t1_idx ON t1(c1);
Index created.
 
SQL> ALTER INDEX t1_idx MONITORING USAGE;
Index altered.
 
SQL>
SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES NO
 
SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected
 
SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES YES
To reset the values in the v$object_usage view, disable index monitoring and re-enable it:
ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING   USAGE;

Why is Oracle not using the damn index?

This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is because the CBO calculates that executing a Full Table Scan would be faster than accessing the table via the index. Fundamental things that can be checked are:
  • USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
  • USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS, then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, making the index less desirable.
  • USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
  • Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.
Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.

When should one rebuild an index?

You can run the ANALYZE INDEX VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
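As a sketch (the index name is illustrative, and the rebuild threshold is a common rule of thumb, not an Oracle-mandated value):

```sql
-- Populates INDEX_STATS with a single row for this index.
ANALYZE INDEX scott.emp_name_idx VALIDATE STRUCTURE;

-- Judge the 'badness' of the index; many DBAs treat a ratio above ~20% as a rebuild candidate.
SELECT name, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / lf_rows * 100, 2) AS pct_deleted
FROM   index_stats;
```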

How does one tune Oracle Wait event XYZ?

Here are some of the wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
  • db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
  • buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH
  • log buffer space: Increase the LOG_BUFFER parameter or move the log files to faster disks

 

What is the difference between DB File Sequential and Scattered Reads?

Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different, discontinuous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from a full table scan fits into a contiguous buffer area; these waits then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from   sys.v_$system_event a, sys.v_$system_event b
where  a.event = 'db file sequential read'
and    b.event = 'db file scattered read';