Heap dump and hard disk space shortage

Today, our server was down again. Apparently, the system log recorded “Out of memory” complaints, and the server then attempted a couple of heap dumps; but as the application server had reached 100% hard disk utilisation, this naturally failed and ultimately caused the system to hang.

What we do not understand is whether it was the hard disk space shortage that triggered the “Out of Memory” error and initiated the heap dump. As our system already uses a script to housekeep the disk at 90% utilisation, there is a 10% (about 500MB) allowance left for the system to continue running. Unfortunately, that was insufficient for the heap dump.

I’m still new to all this and can’t quite understand why this heap dump is needed, so I’m turning to my good friend Google. I’ll include what I find as I find it…

http://tutorials.beginners.co.uk/read/category/85/id/213 says

A typical application spends the majority of its time waiting for user input or for the completion of some other type of I/O operation, which is why multitasking is practical. However, some applications are CPU-bound, and advances in both hardware and software technology have allowed Java code to execute more quickly and to provide acceptable performance for those applications. Hardware improvements have come primarily in the form of faster processors, but the declining cost of memory also allows many applications to run more quickly. In addition, driven by competition among vendors, Java Virtual Machine implementations have become faster and more sophisticated, allowing Java code to execute much more quickly.

Despite the changes that have taken place, you may find your Java code running more slowly than you (or your users) would like, and it’s sometimes necessary to take steps to improve its speed. Another area that’s sometimes a concern to Java programmers is that of memory utilization. Java applications can sometimes use a large amount of memory, and it’s sometimes difficult to determine why that occurs.

So this chapter:

  • Explains how to diagnose problem areas in your code.
  • Describes some of the many options that are available to improve the speed of execution.
  • Describes how to identify the portions of your code that are responsible for consuming memory, and discusses ways to reduce the amount used.
  • Describes how to use the garbage collector effectively.

However, before we can look at how to make our code run better we need to evaluate its current performance bottlenecks.

Locating the Source of a Performance Problem

Although it may seem obvious, the first thing you should do when you detect a problem with your application’s performance is to determine where most of its time is spent. It’s often said that the “80/20 rule” (sometimes called the “90/10 rule”) applies to how much of a processor’s time is spent executing an application’s code. In other words, the implication is that a large portion (say, 80%) of the processor’s time will be spent executing a small portion (the other 20%) of your application code. Assuming that this is true, and it usually is, you can significantly improve your application’s performance by identifying the areas of your code that consume most of the processor’s time (sometimes called hot spots) and changing that code so that it runs more quickly.

One way to identify hot spots and other performance problems in your application is to use a commercial profiler application, such as OptimizeIt from Intuitive Systems or JProbe from Sitraka (formerly KL Group). These products are easy to use and will usually provide the functionality you need to locate the source of a performance problem. However, while reasonably priced, neither OptimizeIt nor JProbe is free, and they may not be available for the platform on which you’re debugging.

Another easy way to identify hot spots is to execute your application with HPROF enabled, if the Java Virtual Machine you’re using supports it. This feature is currently integrated into JavaSoft’s Windows and Solaris Java 2 JVM implementations, and it uses the Java Virtual Machine Profiler Interface (JVMPI) to collect and record information about the execution of Java code. Among other things, JVMPI lets HPROF record object creation and memory utilization, monitor contention, and execution times.

HPROF Output

When HPROF is enabled, its default behavior is to write a report to a disk file named java.hprof.txt once your application terminates. As we’ll see, you can control the format and contents of the report to some extent, but the output will contain some or all of the following items of information:

  • Explanation/comments
  • Thread information
  • Trace entries
  • Monitor dump
  • Heap dump
  • Allocation sites
  • CPU samples/times
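
For example, running an application with HPROF enabled and all options left at their defaults (MyTest is just a placeholder class name) looks like this:

java -Xrunhprof MyTest

When MyTest exits, the java.hprof.txt report is written to the working directory.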

Before we look at the sections of the report, let’s have a quick look at some of the options we have when running HPROF.

HPROF Options

To view the options that are available with HPROF, you can execute the following command:

java -Xrunhprof:help

If the virtual machine implementation you’re using supports HPROF, it will produce output similar to that shown below. Note that the -X indicates that this is a non-standard option that may or may not be supported in future JVM releases:

Hprof usage: -Xrunhprof[:help]|[<option>=<value>, ...]

Option Name and Value    Description              Default
---------------------    -----------              -------
heap=dump|sites|all      heap profiling           all
cpu=samples|times|old    CPU usage                off
monitor=y|n              monitor contention       n
format=a|b               ascii or binary output   a
file=<file>              write data to file       java.hprof(.txt for ascii)
net=<host>:<port>        send data over a socket  write to file
depth=<size>             stack trace depth        4
cutoff=<value>           output cutoff point      0.0001
lineno=y|n               line number in traces?   y
thread=y|n               thread in traces?        n
doe=y|n                  dump on exit?            y

Example: java -Xrunhprof:cpu=samples,file=log.txt,depth=3 FooClass

We’ll look at the more general options here, and cover the others as we discuss the individual parts of the HPROF output.

The doe Option

It’s acceptable in some cases to generate profile information when your application exits, which is HPROF’s default behavior, but in other cases you’ll want the information to be written during execution. You can prevent the profile from being written on exit by specifying a value of “n” for the doe (“dump on exit”) parameter. To generate profile information during execution, rather than after the program has finished, you can press Ctrl-Break on Windows or Ctrl-\ (backslash) on Solaris.
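
For instance (again using MyTest as a placeholder class name), the following run suppresses the report that would normally be written on exit, so a profile is only produced if you trigger one manually with the key sequences mentioned above:

java -Xrunhprof:doe=n MyTest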

The format Option

HPROF’s default behavior is to create a report in human-readable (ASCII) form, although you can store the data in binary format using this option. If you do specify format=b, you must use a utility such as the Heap Analysis Tool (HAT) provided by JavaSoft to view the output. That utility is currently unsupported, but is useful because it provides an easy way to analyze memory utilization.

When you execute the HAT utility, you must specify the name of a file that was created by HPROF, and you may specify a port number or allow it to use its default (7000). When the utility runs, it starts a web server that listens on the specified port for HTTP requests, allowing you to view a summary of the HPROF data through a browser. You can do so by entering the URL http://localhost:7000 (assuming that you’re using the default port number), which will produce a display that lists the classes currently in use by the JVM. By following the various hyperlinks, you can examine different representations of the HPROF data, including one that lists the number of instances currently in existence for every class that’s in use. In other words, the HAT utility provides a convenient way to browse the HPROF data and many different summaries of that data.
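
As a rough sketch (the class and file names are placeholders, and the exact HAT invocation depends on how you have installed the tool), you could generate a binary profile and then point HAT at it:

java -Xrunhprof:format=b,file=mytest.hprof MyTest
hat mytest.hprof

With HAT listening on its default port, browsing to http://localhost:7000 then shows the class and instance summaries described above.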

The file Option

HPROF normally sends its output to a disk file named java.hprof.txt if it’s creating ASCII output, or java.hprof if it’s creating binary data (that is, when you specify format=b). However, you can use this option to specify that the output should be sent to a different file, as shown below:

java -Xrunhprof:file=myfile.txt MyTest

The net Option

You can use this option to have HPROF’s output sent over a network connection instead of being written to a disk file. For example, it might be useful to use the net option if you wish to record profile information for an application that’s running on a middleware server. An example of how to use this option is shown below, where a host name and port number are specified:

java -Xrunhprof:net=brettspc:1234 MyTest

Before the MyTest class is executed, a socket connection is established to port 1234 on the machine with a host name of brettspc, and the profile information generated by HPROF will be transmitted over that socket connection.
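
Since HPROF simply sends its output over the socket instead of writing it to a file, the receiving end can be as basic as a process that accepts the connection and saves whatever bytes arrive. The following is a minimal sketch of such a receiver in Java (the class name, port 1234 and output file name are illustrative only, not part of HPROF):

import java.io.*;
import java.net.*;

// Minimal sketch: accept one connection from HPROF and save the
// incoming profile data to a local file. The port and file name
// are illustrative only.
public class HprofReceiver {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(1234);
        Socket client = server.accept();
        InputStream in = client.getInputStream();
        FileOutputStream out = new FileOutputStream("received.hprof.txt");
        byte[] buffer = new byte[8192];
        int count;
        while ((count = in.read(buffer)) != -1) {
            out.write(buffer, 0, count);
        }
        out.close();
        in.close();
        client.close();
        server.close();
    }
}

Start the receiver first, then run the java -Xrunhprof:net=... command shown above against the same host and port.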

Explanation/Comments Section

This section is identical for every file that’s created by HPROF, and it provides a brief description of how to interpret the remainder of the output generated by the utility.

Thread Summary Section

This portion of the report generated by HPROF shows a summary of the threads that were used. To illustrate how that information appears, we’ll define a simple application that creates and starts two threads:

public class ThreadTest implements Runnable {

    public static void main(String[] args) {
        ThreadTest tt = new ThreadTest();
        Thread t1 = new Thread(tt, "First");
        t1.setDaemon(true);
        Thread t2 = new Thread(tt, "Second");
        t2.setDaemon(true);
        t1.start();
        t2.start();
        try {
            // Give both threads time to block in wait()
            Thread.sleep(2000);
        } catch (Exception e) {}
        synchronized (tt) {
            // Wake up one of the two waiting threads
            tt.notify();
        }
        try {
            // Allow the notified thread to finish before main() returns
            Thread.sleep(2000);
        } catch (Exception e) {}
    }

    public synchronized void run() {
        try {
            // Block until notified from main()
            wait();
        } catch (InterruptedException ie) {}
    }
}
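
With the class compiled, a profiling run under a JVM that supports HPROF (all options left at their defaults) can be started like this:

java -Xrunhprof ThreadTest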

This application creates and starts two threads, each of which executes the run() method of a single ThreadTest instance. The main thread then sleeps for two seconds to allow both threads to become blocked in the wait() call in run(), calls notify() to wake up one of them, and sleeps for another two seconds. Therefore, one of the two threads should exit prior to the main() method’s completion (because it is removed from the wait queue when notify() is called), while the other should still be blocked in wait(). If we use HPROF to profile the thread activity within this application, we should see that behavior reflected in the output:

THREAD START (obj=7c5e60, id = 1, name="Signal dispatcher", group="system")
THREAD START (obj=7c6770, id = 2, name="Reference Handler", group="system")
THREAD START (obj=7ca700, id = 3, name="Finalizer", group="system")
THREAD START (obj=8427b0, id = 4, name="SymcJIT-LazyCompilation-PA", group="main")
THREAD START (obj=7c0e70, id = 5, name="main", group="main")
THREAD START (obj=87d910, id = 6, name="First", group="main")
THREAD START (obj=87d3f0, id = 7, name="Second", group="main")
THREAD START (obj=842710, id = 8, name="SymcJIT-LazyCompilation-0", group="main")
THREAD END (id = 6)
THREAD END (id = 5)
THREAD START (obj=87e820, id = 9, name="Thread-0", group="main")
THREAD END (id = 9)

This portion of the HPROF output lists each thread that was started, and if the thread ended while profiling was taking place, that fact is recorded as well. Each entry identifies the address of the Thread object, the identifier (“id”) value assigned to the thread, the name of the thread, and the name of the associated ThreadGroup.

As expected, this output indicates that two threads (“First” and “Second”) were started after the system-generated thread that executes the main() method (“main”). In addition, one of those two threads (“First” with an identifier value of 6) exits prior to the termination of the main thread as expected, while the other was still running when the profile information was created.

Note that some of the threads listed above are dependent on the particular JVM implementation used, in this case the Symantec JIT JVM.