
Mastering Application Performance Diagnosis: Layered & Segment Approaches

This article outlines a comprehensive performance testing workflow, introduces layered and segment diagnostic methods, presents a detailed Apache/Tomcat/Linux/Oracle case study with LoadRunner and Nmon, and discusses monitoring metrics, analysis results, and practical recommendations for optimizing system performance.

dbaplus Community

Introduction

With the rapid growth of Internet usage, enterprise IT systems face increasing load, making application performance diagnosis a critical step in performance testing.

General Performance Test Process

The typical workflow includes:

Requirement analysis – define test objectives, scope, strategy, model, environment, schedule, entry/exit criteria, responsibilities, metrics, and test cases.

Environment preparation – deploy monitoring tools, configure parameters, install scripts, and set up performance testing tools.

Load execution – record and write scripts, design test scenarios, monitor the execution, and collect data.

Result analysis and optimization – evaluate TPS, response time, CPU/memory/I/O, transaction pass rate, and identify bottlenecks for code, SQL, configuration, network, or hardware improvements.
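As a small illustration of the result-analysis step, TPS and average response time can be derived from a raw transaction log with a few lines of awk. The `results.log` format here (one line per transaction: epoch seconds, then response time in ms) is an assumption for illustration, not an actual LoadRunner export:

```shell
# Sample data for illustration: "<epoch_seconds> <response_ms>" per transaction
printf '100 50\n101 70\n102 60\n103 80\n104 90\n' > results.log

# TPS = transactions / test window; avg_ms = mean response time
awk '
  NR == 1 { start = $1 }
  { end = $1; sum += $2; n++ }
  END {
    dur = end - start; if (dur == 0) dur = 1
    printf "transactions=%d tps=%.2f avg_ms=%.1f\n", n, n / dur, sum / n
  }
' results.log
```

Real load tools report these metrics directly; a script like this is mainly useful for cross-checking tool output against raw access logs.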

Layered Diagnosis Method

The layered approach examines five dimensions: application (response time, throughput, error rate), middleware (thread pools, connection pools, JVM heap), network (bandwidth, latency, packet loss), operating system (CPU, memory, disk I/O), and database (wait events, slow SQL, cache hit ratio). Typical metrics for each layer are illustrated in the original figures.

Case Study: Apache/Tomcat + Linux + Oracle

A simple scenario uses LoadRunner for load generation and Nmon for Linux monitoring.

Key monitoring commands:

./nmon_x86_fedora5 -fT -s 5 -c 100            # capture to file (-f) with top-process info (-T), one sample every 5 s, 100 samples
nohup ./nmon_x86_fedora5 -fT -s 5 -c 100 &    # same capture, detached from the terminal
sort -A test1_090308_1313.nmon > test1_090308_1313.csv   # sort the raw capture into CSV order for analysis

Apache monitoring requires enabling server-status in /conf/httpd.conf and collecting metrics such as Total Accesses, Total kBytes, CPULoad, ReqPerSec, BytesPerSec, IdleWorkers, BytesPerReq, and BusyWorkers.
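A minimal sketch of that configuration, assuming mod_status is compiled in (the module path and access-control syntax vary by installation and Apache version):

```apache
# Hypothetical httpd.conf fragment enabling the Apache status page
LoadModule status_module modules/mod_status.so
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    # add access-control directives appropriate to your Apache version here
</Location>
```

The machine-readable view at `http://localhost/server-status?auto` then returns the counters listed above (Total Accesses, BusyWorkers, and so on) and is easy to poll from a shell script during a test run.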

Tomcat can be monitored via the Manager module, custom shell scripts, or JConsole for JVM metrics.
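A custom shell script along those lines might simply sample the Tomcat JVM's thread count at intervals. This is a sketch: the process-lookup pattern, interval, and placeholder PID are assumptions, not part of the original article.

```shell
# Sketch: periodically sample a JVM's thread count via ps
sample_threads() {
    ps -o nlwp= -p "$1" | tr -d ' '      # nlwp = number of lightweight processes (threads)
}

# In practice the Tomcat PID would come from something like:
#   TARGET_PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -1)
TARGET_PID=$$                             # placeholder (current shell) so the sketch runs standalone

for i in 1 2 3; do
    echo "$(date +%T) threads=$(sample_threads "$TARGET_PID")"
    sleep 1
done
```

For richer JVM metrics (heap, GC, per-pool threads), JConsole or JMX polling is the more complete option, as the article notes.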

Web Server Layer Analysis

Under 200 concurrent users, Apache reaches 226 BusyWorkers out of a 256-worker limit, approaching saturation and indicating the need to raise the maximum connection limit.
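One way to raise that limit on a prefork-MPM Apache is to tune httpd.conf; the values below are illustrative, not prescriptive, and ServerLimit must be at least as large as MaxClients:

```apache
# Hypothetical prefork MPM tuning in httpd.conf (values are illustrative)
<IfModule mpm_prefork_module>
    StartServers         10
    MinSpareServers      10
    MaxSpareServers      50
    ServerLimit          512
    MaxClients           512
    MaxRequestsPerChild  4000
</IfModule>
```

Raising worker counts also raises memory use per child, so any new limit should be re-verified under load rather than set blindly.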

Application Server Layer Analysis

Tomcat thread usage stays within normal ranges, showing stable performance during the test.

Database Server Layer Analysis

Resource usage (CPU, memory, I/O) remains within acceptable limits, though detailed DB metrics like wait time and slow queries are not examined.

Overall Result Analysis

With 100 users and browser caching disabled, CPU and memory on the application and database servers remain largely idle, yet several transactions exceed response-time expectations (e.g., date selection ~140 s, appointment page ~57 s, location selection ~28 s). The analysis attributes this to heavy JavaScript/JSON processing and large image resources, and recommends compression and front-end code optimization.
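The compression recommendation maps to a small Apache change; this is a hedged sketch assuming mod_deflate is available (module path and MIME types may need adjusting):

```apache
# Hypothetical mod_deflate fragment compressing text, JS, and JSON responses
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json
```

Images are usually already compressed formats, so for those the larger win is resizing/re-encoding the assets themselves rather than transport-level compression.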

Method Summary

Layered analysis provides deep insight across each tier but requires extensive tooling and expertise, and may struggle to produce a clear end‑to‑end call chain.

Segment Diagnosis Method

APM tools collect performance data per business segment, linking URL → middleware → Java code → SQL, enabling precise bottleneck identification without the overhead of full‑layer monitoring.
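Short of a full APM deployment, curl's built-in timing variables give a rough per-URL segment breakdown in the same spirit (DNS, connect, first byte, total); the URL below is a placeholder:

```shell
# Sketch: coarse segment timing for a single request using curl write-out variables
curl -o /dev/null -s -w \
  "dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" \
  "http://localhost/app/page"
```

This only covers the client-to-server segment; tracing through middleware, Java code, and SQL still requires APM-style instrumentation as described above.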

Conclusion

Both layered and segment methods have merits; combining them yields comprehensive performance diagnostics. Key takeaways include monitoring critical metrics, compressing large assets, adjusting Apache connection limits, and considering separate deployment of web and application servers for scalability.

Written by

dbaplus Community

Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.
