The Geriatric Instance - Managing a legacy Oracle database is not an easy task.
Back in 1995 I wrote an article about how the
legacy databases (the IMS hierarchical database and the IDMS network
models) slowed the
adoption of the new relational databases such as DB2 and Oracle, and
things have not changed.

The legacy databases die a slow death
As Oracle databases pass the decade mark, they begin to suffer from a
host of age-related problems.
Just like people, they lose their sheen, parts become unwieldy and creaky, and some areas of the database need ongoing medical attention.
· Loss of Documentation - The vendor of the CASE tool that was used to design the original database went out of business eight years ago, and nobody alive understands the exact rationale behind many of the table design decisions.
· Poor feature utilization - It's not uncommon to find aging systems from Oracle7 that still run under the ancient rule-based optimizer, and many important features are not adopted for fear of causing production problems.
· Over-normalized Design - The constantly falling prices of disk and RAM have a direct impact on Oracle design practices. The third normal form design that was perfect back in 1995 is no longer suitable in today's world, where disk is cheap and introducing redundancy improves performance. Databases left in strict 3NF suffer from unnecessary table joins.
· Sub-optimal code - It's not hard to find poor Oracle code. Many offshore "bargain" systems can be made to run faster without the expense and risk of re-writing thousands of lines of code. Some examples of bad code include large systems where optimizing the code (to fix the root cause) would take months and cost hundreds of thousands of dollars:
- Packages - Vendor application products (e.g., ERP, MRP) that are database-neutral and not optimized for Oracle.
- Convoluted code - I've seen shops with thousands of lines of unmanageable code (especially Pro*C), so poorly written that Einstein himself could not figure it out.
- Granular network calls - I've seen vendor packages (using SQL*Forms and JDeveloper) that make individual requests to Oracle for every screen item, causing huge network bottlenecks. What else can you do when you cannot change the vendor's code?
- Sub-optimal client-server architectures - There are many vendor packages with architectures that are "generic" and not Oracle-centric.
So, what can we do to improve the treatment of these legacy Oracle databases? Some shops have recognized that the rapid advancement in processing power (combined with falling costs) allows them to keep their performance at acceptable levels by "throwing hardware" at the problem. This is a great solution in cases where it is too costly to completely re-design an obsolete database.

Using hardware to keep aging databases alive is an open secret
See my notes here on using hardware as an Oracle tuning tool.
Oracle provides us with several tools to allow us to upgrade and enjoy
newer, robust features that were not available when the database was
created:
· Dynamic denormalization - Using Materialized Views, you can simulate the introduction of redundancy into the data model.
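As a minimal sketch, assuming hypothetical customer and orders tables, a materialized view can pre-join them so that queries read the denormalized result instead of paying the join cost at run time:

```sql
-- Hypothetical schema: customer(cust_id, cust_name), orders(order_id, cust_id, order_total).
-- Pre-join the tables into a denormalized structure; ENABLE QUERY REWRITE lets
-- the optimizer transparently redirect matching queries to the materialized view.
CREATE MATERIALIZED VIEW cust_order_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT c.cust_id,
       c.cust_name,
       o.order_id,
       o.order_total
  FROM customer c,
       orders   o
 WHERE o.cust_id = c.cust_id;
```

Fast incremental refresh (REFRESH FAST) is also possible but imposes extra requirements, such as materialized view logs on the base tables, so the simpler complete refresh is shown here.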
· Real-world testing - Using the Oracle 11g SQL Performance Analyzer, changes from upgrading the underlying Oracle software can be tested against a real-world workload.
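As a rough sketch of the workflow (assuming a SQL tuning set named my_sts has already been captured from the production workload), the 11g DBMS_SQLPA package replays the workload before and after a change and compares the two runs. Exact parameter details vary by release, so treat this as an outline rather than a recipe:

```sql
-- Create a SQL Performance Analyzer task from a captured SQL tuning set.
VARIABLE tname VARCHAR2(64)
EXEC :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'my_sts');

-- Baseline run under the current configuration.
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
       execution_type => 'TEST EXECUTE', execution_name => 'before_change');

-- ... apply the change (upgrade, parameter change, etc.), then re-run ...
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
       execution_type => 'TEST EXECUTE', execution_name => 'after_change');

-- Compare the two executions and report the regressions and improvements.
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
       execution_type => 'COMPARE PERFORMANCE');
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK(:tname) FROM dual;
```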
Each new release of Oracle brings a host of new features, some critical, some trivial.
See
here for my complete list of new features by release.
Some of the most common under-utilized features of aging Oracle databases include:
· Materialized Views - Materialized views greatly aid data warehousing and dynamic denormalization.
· Partitioning - Oracle's divide-and-conquer approach improves manageability and SQL performance.
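For illustration, assuming a hypothetical sales table, a range-partitioned design lets the optimizer prune partitions so a query on one quarter never touches the others:

```sql
-- Sketch: range-partition a hypothetical sales table by date.
-- A query filtered on sale_date reads only the relevant partition(s).
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_q1  VALUES LESS THAN (TO_DATE('2008-04-01','YYYY-MM-DD')),
  PARTITION sales_q2  VALUES LESS THAN (TO_DATE('2008-07-01','YYYY-MM-DD')),
  PARTITION sales_max VALUES LESS THAN (MAXVALUE)
);
```

Old partitions can also be dropped or archived individually, which is where the manageability benefit comes from.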
· Function-based indexes - A super feature that is not used frequently enough, an FBI allows you to "match" the WHERE clause of a SQL statement to an index, bypassing expensive full-scan operations.
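A quick sketch, using a hypothetical emp table: a predicate that wraps a column in a function normally disables an ordinary index on that column, but an index built on the same expression restores index access:

```sql
-- Without this index, a predicate on UPPER(last_name) would force a full-table scan,
-- because a plain index on last_name cannot be matched to the function result.
CREATE INDEX emp_upper_name_idx
    ON emp (UPPER(last_name));

-- This query can now be satisfied via the function-based index:
SELECT *
  FROM emp
 WHERE UPPER(last_name) = 'SMITH';
```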
· Cost-based optimization - Starting with Oracle9i, the cost-based optimizer was debugged to the point that it could be reliably used in a production environment. A common problem is shops accepting the default optimizer mode of all_rows, which may not be ideal for their systems; a mode like first_rows may offer better SQL response time.
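For example, a sketch of switching the optimizer goal toward fast initial response (the exact mode values available depend on your release):

```sql
-- Change the system-wide default optimizer goal.
-- first_rows_10 optimizes for returning the first 10 rows quickly,
-- which often suits interactive OLTP screens better than all_rows.
ALTER SYSTEM SET optimizer_mode = first_rows_10 SCOPE = BOTH;

-- Or test the effect for a single session before committing system-wide:
ALTER SESSION SET optimizer_mode = first_rows_10;
```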
· Automatic Workload Repository - The Automatic Workload Repository (AWR) defaults to a collection interval of 60 minutes and collects the data that is the foundation for all of the other self-tuning features. AWR is very much like STATSPACK, especially the level-5 STATSPACK collection mechanism where top SQL is collected every hour, based on your rolling thresholds for high-use SQL. In addition to the SQL, AWR collects detailed run-time statistics on the top SQL (disk reads, executions, consistent gets) and uses this information to adjust the rolling collection threshold.
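If the default collection is too coarse for your troubleshooting needs, the snapshot interval and retention can be adjusted; as a sketch, tightening snapshots to every 30 minutes and keeping 30 days of history:

```sql
-- DBMS_WORKLOAD_REPOSITORY settings are expressed in minutes:
-- interval = 30 (snapshot every half hour), retention = 43200 (30 days).
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,
    retention => 43200);
END;
/
```

Shorter intervals give finer-grained history at the cost of more SYSAUX space.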
· RAC instance load balancing - Starting in Oracle 10g release 2, Oracle JDBC and ODP.NET provide connection pool load balancing facilities through integration with the new "load balancing advisory" tool. This replaces the more-cumbersome listener-based load balancing technique.
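On the database side, the advisory is driven by the goal set on the service; as a sketch for a hypothetical RAC service named oltp_svc (constant names as I recall them from the 10gR2 DBMS_SERVICE package, so verify against your release):

```sql
-- Publish load balancing advice for the 'oltp_svc' service so that
-- JDBC and ODP.NET connection pools can balance work by response time.
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name => 'oltp_svc',
    goal         => DBMS_SERVICE.GOAL_SERVICE_TIME,   -- balance on response time
    clb_goal     => DBMS_SERVICE.CLB_GOAL_SHORT);     -- suited to short connections
END;
/
```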
· Analytic SQL - Oracle analytic functions greatly improve the ease of writing complex SQL queries and also improve the performance of complex SQL.
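A small illustration, again assuming a hypothetical emp table: a single pass with window functions replaces what would otherwise require a self-join or correlated subquery:

```sql
-- Rank employees by salary within each department and show the
-- department's total salary alongside each row, in one table scan.
SELECT deptno,
       ename,
       sal,
       RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS sal_rank,
       SUM(sal) OVER (PARTITION BY deptno)                 AS dept_total
  FROM emp;
```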
When upgrading to the latest release of Oracle it's important to carefully choose the features that you want to enable. For complete details, see the book "Oracle 11g New Features" authored by Oracle ACEs Steve Karam, Lutz Hartmann, and Brian Carr with V. J. Jain.
When considering the needs of an aging Oracle database it's always wise to consult with experienced Oracle DBAs who have lived through the life cycle of your system, experts who know the path to the fountain of youth for your database.