By Robert MacAdams on Sunday, 03 March 2019
Category: Uncategorized

IBM i Performance Optimization: Improving Application Response Times

Keeping your IBM i tuned for optimal response times requires regular monitoring of system resources and identifying the elements that affect application performance. Response time delays of only two or three seconds quickly compound when poorly written applications are involved or database maintenance has been neglected. Like most performance degradation issues on the IBM i, response time problems only get worse as transaction volumes multiply. Ignoring system performance issues hurts productivity and frustrates customers, which in turn increases costs and affects revenue. Below are some ideas to consider if a major IBM i upgrade is not a viable budgetary option.

 
Consider the cost benefits of optimizing your IBM i system resources
The number one factor affecting response times is available processing power (CPU), which is also the most expensive resource in your IBM Power system. Even new POWER9 systems can experience performance degradation. If your system is experiencing poor response times, keep an eye on capacity utilization throughout the day and check how close CPU utilization gets to 100%, especially during peak periods. If you spot a pattern, perhaps there are some jobs you can plan on running at a different time of day when CPU is regularly underutilized; system administration and batch jobs that can be scheduled during lighter periods are good examples. Take note of the applications and types of jobs consuming the most CPU resources. If these jobs involve SQL, pay attention to the number of file opens they are performing. If this number is high for the system, the fix can be as simple as changing how the programs run queries so that files stay open between calls. You may also consider using IBM i Workload Groups, together with memory pools and subsystems, to control how much CPU different workloads can use, ensuring critical applications get the resources they need to run optimally while restricting less important jobs from stealing those resources.
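As a rough sketch of where to start looking, and assuming your release provides the Db2 for i services QSYS2.SYSTEM_STATUS_INFO and QSYS2.ACTIVE_JOB_INFO (column names can vary slightly by release, so verify them against the documentation for your system), queries like these show overall CPU utilization and the jobs consuming the most of it:

-- Overall CPU utilization for the partition
SELECT AVERAGE_CPU_UTILIZATION
  FROM QSYS2.SYSTEM_STATUS_INFO;

-- The ten active jobs using the most CPU; elapsed figures are measured
-- since statistics were last collected for the session
SELECT JOB_NAME, SUBSYSTEM, JOB_TYPE, ELAPSED_CPU_PERCENTAGE
  FROM TABLE(QSYS2.ACTIVE_JOB_INFO()) AS X
 ORDER BY ELAPSED_CPU_PERCENTAGE DESC
 FETCH FIRST 10 ROWS ONLY;

Running the second query periodically through the day makes it easier to spot the peak-period patterns mentioned above.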
 
Most IBM i systems contain between 15% and 40% obsolete and unnecessary data consuming disk space, often due to poor database and spool file maintenance. Bloated databases will definitely impact response times and application performance. Unnecessary data includes records that have been logically deleted but have not been physically removed (files that need to be reorganized). All of this useless data is brought into the buffers during read operations and consumes I/O resources. When programs or SQL operations have to determine at run time that data is obsolete and the relevant columns are not indexed, they still must read and filter out that data, which consumes valuable I/O operations. If the column is indexed, the obsolete data creates indexes that are larger than they should be.
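To get a feel for how much of that space is tied up in deleted records, a query against the QSYS2.SYSPARTITIONSTAT catalog view can help. This is only a sketch, with MYLIB standing in for one of your own libraries:

-- Files in MYLIB (placeholder library) still carrying deleted records,
-- largest offenders first
SELECT TABLE_SCHEMA, TABLE_NAME, NUMBER_ROWS, NUMBER_DELETED_ROWS, DATA_SIZE
  FROM QSYS2.SYSPARTITIONSTAT
 WHERE TABLE_SCHEMA = 'MYLIB'
   AND NUMBER_DELETED_ROWS > 0
 ORDER BY NUMBER_DELETED_ROWS DESC;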
 
To ensure optimal response times, examine the data occupying your disk space. Identify the applications consuming the most disk space and the growth rates for each object type and file. Do any objects appear to be obsolete? What percentage of disk space is consumed by logically deleted records that have not been physically removed? Identify files bloated with deleted records. Are there any database files that have become too burdensome to process because massive numbers of embedded deleted records are continually being passed in and out of the buffers? Can any of the data be archived, or is it a good candidate for compression?
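One way to start answering these questions, assuming the QSYS2.OBJECT_STATISTICS table function is available on your release, is a sketch like the following, with MYLIB again as a placeholder library:

-- Largest objects in MYLIB (placeholder library) and when they were last used,
-- to help spot obsolete candidates for archiving or removal
SELECT OBJNAME, OBJTYPE, OBJSIZE, LAST_USED_TIMESTAMP
  FROM TABLE(QSYS2.OBJECT_STATISTICS('MYLIB', '*ALL')) AS X
 ORDER BY OBJSIZE DESC
 FETCH FIRST 20 ROWS ONLY;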
It is a good idea to keep the software in use, as well as the IBM i hardware features installed on your system, up to date with the recommended PTFs and fix levels applied. I know of many instances where the systems carrying the largest transaction volumes were all running older versions of software that caused all sorts of performance issues. In many cases the culprit was an old version of Java, causing excessive CPU utilization, looping, synchronization problems, I/O contention and many other issues all at the same time. Furthermore, some system and application hangs and errors are caused by Power system hardware features that require microcode or firmware updates. IBM regularly makes hardware component updates available for customers to download for processors, memory, SSDs and HDDs, tape drives, SAS adapters (including RAID controllers) and just about every other IBM i feature, element and cable in your Power system.
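For a quick view of where you stand on PTFs, and assuming the SQL services below exist on your release, something like this is a reasonable starting point (the currency view needs the partition to reach IBM's PTF information over the internet):

-- Compare installed PTF group levels against the latest available from IBM
SELECT * FROM SYSTOOLS.GROUP_PTF_CURRENCY;

-- Or simply review which PTF groups and levels are installed locally
SELECT PTF_GROUP_NAME, PTF_GROUP_LEVEL, PTF_GROUP_STATUS
  FROM QSYS2.GROUP_PTF_INFO;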
 
If your company needs to retain historical data that is rarely or no longer used, consider archiving it and removing it from production databases to keep response times optimal for users and customers. After records are archived, delete them logically and then remove them physically with a file reorganization using the RGZPFM (Reorganize Physical File Member) command. If your IBM i has very limited or non-existent maintenance windows, you may need to resort to compression until a proper window becomes available. Alternatively, if your applications support the reuse-deleted-records parameter, you may be able to use this option to reclaim the space as new records are created. Any new maintenance procedures on production systems should be well planned and thought out, especially if they could result in unplanned downtime or possible data loss.
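As a sketch of the commands involved, with MYLIB/MYFILE as placeholder names, the reorganization and the reuse-deleted-records change can be run from a command line or, as shown here, through the QSYS2.QCMDEXC SQL procedure:

-- Physically remove logically deleted records from a file
CALL QSYS2.QCMDEXC('RGZPFM FILE(MYLIB/MYFILE)');

-- Alternatively, let the database reuse deleted-record space for new inserts
CALL QSYS2.QCMDEXC('CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)');

Keep in mind that RGZPFM needs exclusive access to the member by default, which is why the maintenance-window caveat above matters.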
 
Spool files are sometimes forgotten about, especially in environments with minimal IT resources and no automated purging procedures in place. If this is the case, deleting useless spool files can free up a lot of disk space; on some systems, spool files going back many years account for 20% or more of usable disk space. Likewise, any system or application logs should be removed from the system once they are no longer relevant for solving current issues and troubleshooting. Most logs are only used to capture errors and solve problems happening now or in the recent past. Automated archiving and purging of spool files and logs can be set up very quickly using system management tools.
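As a rough, hedged sketch (the QSYS2.OUTPUT_QUEUE_ENTRIES_BASIC view and its column names are assumptions to verify against your release), a query like this can show which output queues are holding very old spooled files:

-- Count spooled files created more than five years ago, by output queue
SELECT OUTPUT_QUEUE_NAME, COUNT(*) AS OLD_SPOOLED_FILES
  FROM QSYS2.OUTPUT_QUEUE_ENTRIES_BASIC
 WHERE CREATE_TIMESTAMP < CURRENT TIMESTAMP - 5 YEARS
 GROUP BY OUTPUT_QUEUE_NAME
 ORDER BY OLD_SPOOLED_FILES DESC;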
 
Some systems can benefit from a not very well known IBM i feature, the Change Program (CHGPGM) command, which can re-optimize programs written in RPG, COBOL and C so they run more efficiently. When CHGPGM is run with a higher optimization level, the program's intermediate code (W-code) is re-translated into machine instructions at that level. This can be a great way to optimize old or poorly written programs riddled with redundant instructions and inefficient code structure. Performance improvements can range from significant to barely noticeable. The CHGPGM optimization parameter has three different settings, which also affect the results; full optimization can take a considerable amount of time to apply. Furthermore, if you are relying on a software vendor to provide new versions and fixes, using CHGPGM as an optimization tool will be a never-ending process unless you can convince your vendor to apply it before distribution. Most importantly, this optimization is not flawless for every program, so read the IBM documentation and test the results thoroughly.
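As a sketch with placeholder names, applying full optimization to a single program might look like this; the accepted OPTIMIZE values depend on the program type, so check the command help and IBM documentation before running it against anything important:

-- Re-translate one program (placeholder names) at full optimization
CALL QSYS2.QCMDEXC('CHGPGM PGM(MYLIB/MYPGM) OPTIMIZE(*FULL)');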
 
Consider the cost benefits of optimizing your IBM i system resources, because the ramifications are greater than you may think. Using IBM i CPU efficiently is the most important factor, because it is the most expensive component in the equation. Optimizing your applications, programs, database files and SQL queries, and keeping your system updated, is critical to managing CPU usage and ongoing costs.
 
Not all performance issues affecting response times are simple to identify or fix, and some business systems are more susceptible to growth and modification variances than others, which will inevitably impact performance no matter how well optimized your system is. When achieving optimal response times with the available resources becomes a dead end, you may need to consider a minor upgrade to stop the bleeding. Depending on the age of the system, some major upgrades even pay for themselves. Ask your account representative to show you the software and maintenance cost savings comparison when they send you your new or refurbished upgrade quote.
