Orient meets Italy – An exotic salad dressing with ginger, Thai basil and balsamic cream

Tonight, while my wife was putting the kids to sleep, I thought about creating a fresh, exotic, splashy dressing for a classic garden salad (lettuce, cucumbers, tomatoes, radish, avocado).

So I started with fresh ginger and freshly squeezed lemon juice. To mellow the sourness and spiciness of the two, I used a little bit of balsamic cream and honey. I mixed that well with olive oil and added chopped Thai basil and lemon balm. Thai basil has a freshness so different from the Italian variety; mixed with the lemon balm, it just ROCKS.

My wife said it was a dream of a dressing, and since she is my best critic, I figured it was worth writing down. So here it goes.

Oriental/Italian dressing

Dressing for two portions

Ingredients:

  • a 2 cm cube of ginger, freshly grated
  • freshly squeezed juice of a quarter of a lime
  • one tablespoon of balsamic cream
  • about 5 tablespoons of extra-virgin olive oil of the best quality (you do NOT want a bitter olive oil; it will spoil everything)
  • 5 small leaves of Thai basil
  • 2 small leaves of lemon balm
  • one teaspoon of honey

Steps:

  1. Grate the ginger and mix it with the lime juice and balsamic cream
  2. Add the teaspoon of honey and mix well
  3. Mix in the olive oil and beat it well for a minute or so
  4. Chop the Thai basil and lemon balm and mix them in


The shift

I’m on paternity leave. That gives me a little more time to create instead of just cook. And I love this.

Time is now the most precious resource one can have. And of it, my family takes the most, which makes me happy. We get to do a lot during the days (and nights, haha), and it is just awesome to leave work aside for a while…

And the “me” time, the small amount of time after the kids are asleep… it’s all about cooking. I guess it’s pretty much the only hobby I have left, but it is the one I will never give up.

So for the next couple of months, I’ll just be spending time with my wife and kids, travelling, cooking and enjoying life the way I see it. No more tech stuff for a while🙂

See you around



Groovy JMX Bean Monitoring

Head Note

It’s been a while since I last wrote an article; I guess life got a little more complex once it was filled by our two little kids🙂 Time is such an expensive resource now, haha.

Anyway, back to the topic.

I finally had some time at work to put into refactoring our Load Testing Framework. One of the key topics I had set my eyes on was a reliable way of monitoring the application servers: one with very low overhead, stable and, most of all, adaptable. First I played a lot with the REST Monitoring Interface that Glassfish offers (see my other posts relating to that). REST monitoring was quite cool, but it only allowed monitoring what Glassfish exposed by setting the respective monitoring levels (JDBC connection pool, EJB container, etc.). While that may suffice for some, it had the following disadvantages:

  • Monitoring could be performed only on the levels exposed by Glassfish (through module monitoring levels), meaning I could not get any system load information, for example
  • Each resource had to be queried separately
  • Each query result had to be parsed in order to be imported into the monitoring database
  • Relatively high overhead
  • Sometimes, under heavy load, resources were not available for querying

Besides that, I had to use a combination of curl for performing the requests, XML processing and Unix text tools (like sed or awk) to get all these results straight. Not to mention the effort I had to put in whenever I needed to set up a new monitoring item.

Furthermore, I had the problem that I could only monitor one application server at a time. Since we run our tests in a distributed environment, I needed something I could use to easily monitor one to many application servers, at the same time! Since I had played with Groovy before, doing some integration with our Jenkins, I set my eyes on it once more.

JMX MBean Monitoring with Groovy

For the sake of keeping this article simple, we’ll use the best free Java APM tool there is on the market: VisualVM.

As defined in the JMX specification, client applications (like ours) obtain MBeans through an MBean server connection. Once we have obtained an MBean server connection, we can use it to query the underlying beans and retrieve their attributes (of course, operating on the beans is possible as well).
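To make this concrete, here is a minimal sketch in plain Java. Instead of a remote Glassfish connection it uses the current JVM’s own platform MBean server, which also implements MBeanServerConnection, so the snippet is self-contained:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class MBeanPeek {
    public static void main(String[] args) throws Exception {
        // The platform MBean server of the running JVM implements
        // MBeanServerConnection, just like a remote JMX connection would.
        MBeanServerConnection connection = ManagementFactory.getPlatformMBeanServer();

        // Query the standard java.lang Runtime MBean for one of its attributes
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        Long uptime = (Long) connection.getAttribute(runtime, "Uptime");
        System.out.println("JVM uptime (ms): " + uptime);
    }
}
```

For a remote server the connection would instead come from JMXConnectorFactory, exactly as the Connector class shown later does it.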

Let’s have a look at two software components exposing a lot of debugging information through MBeans: Glassfish and OpenMQ.

Glassfish MBean Monitoring

Glassfish by default exposes its JMX server on port 8686. A default connection will also require a username and password, which by default are admin / adminadmin. Once you have connected to the JMX server, you’ll notice a new tab called “MBeans”.

Glassfish MBeans Tab


Module Monitoring Levels in Glassfish

As you can see, the exposed MBeans are on the left. The most interesting for us will be the ones giving “runtime” information on performance metrics like used database connections, committed transactions, queue lengths, number of open connections and so on. In Glassfish you get all of this for free by enabling the so-called “Module Monitoring” (just expand the MBean called “amx” and navigate to the child called “Module Monitoring Levels”).

Module Monitoring Levels in Glassfish


As you can see, we have enabled the monitoring for some of the modules, among them being:

  • JDBC Connection Pool: information regarding the number of acquired logical connections, physical connections, connection timeouts, etc.
  • Thread Pool: information regarding the number of active threads, total threads, etc
  • JVM: information regarding the current but also peak usage of the memory spaces

Let’s use the JDBC Connection Pool MBean to check on the current pool usage statistics. Expand the node called “amx:jdbc-connection-pool-mon”

JDBC Connection Pool Monitoring - Attribute value view


We can see now metrics like:

  • number of free connections in the pool
  • number of logical connections acquired from the pool
  • number of currently used jdbc connections in the pool
  • etc.

So if we wanted to take a live peek at the system and check on its resource usage, we could do that easily following the steps above. But that is not all. How about things that Glassfish does not expose via its main MBean “amx”, things like garbage collection statistics, memory usage statistics for all memory spaces, or operating system statistics like CPU usage? Let’s take a look at the “java.lang” MBean.

Other MBeans


We can retrieve CPU usage, compilation statistics, memory space statistics, garbage collection statistics and so on… The one thing missing is regularly checking this information and aggregating the results, which brings me to today’s topic. Before that, let’s summarize what we have achieved up to now:

  1. Connect to a Glassfish Server using JMX connection (service:jmx:rmi:///jndi/rmi://server:8686/jmxrmi)
  2. Connect to an MBean Server using VisualVM and the MBeans plugin
  3. Configure monitoring levels in Glassfish (Module Monitoring Levels MBean)
  4. Retrieve some performance metrics from the JDBC Connection Pool Monitor
  5. Retrieve other performance metrics independent of Glassfish (JVM, Operating System, etc.)

Dynamically querying MBeans with Groovy

Let’s say we would like to keep a constant eye on the JDBC Connection Pool and retrieve its metrics every couple of seconds. Let’s take a look at Groovy’s GroovyMBean class.

Its constructor looks like this:

GroovyMBean(MBeanServerConnection server, ObjectName name)

We need a MBean Server Connection, and an object name. The object names of all exposed MBeans can be viewed in the metadata Tab of the MBeans Browser:

JDBC Connection Pool - Mbean Metadata


So let us first create a Connector class that connects us to the JMX server. Later we will pass it the list of servers we want to connect to.

public class Connector {
	static server
	def serverUrl, user, password
	Connector (serverUrl, user, password) {
		this.serverUrl = serverUrl
		this.user = user
		this.password = password
	}	
	
	def connect () {
		HashMap   environment = new HashMap();
		String[]  credentials = [user, password];	
		environment.put (JMXConnector.CREDENTIALS, credentials);
	
		// Connect to remote MBean Server
		def jmxUrl = 'service:jmx:rmi:///jndi/rmi://'+serverUrl+'/jmxrmi'
		try {
			println jmxUrl
			server = JmxFactory.connect(new JmxUrl(jmxUrl),environment).MBeanServerConnection
			return server
		}
		catch (Exception e) {				
				println("Could not connect to mbean") 
				//System.exit(0)		
		}
	}
}

Since we now have the connection, we need the MBean’s object name in order to work with it. I will use the expression “entry point” instead of “object name”. All we have to do now is create a new MBean object:

def entryPoint='amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'
def monitoringBean = new GroovyMBean(connection,entryPoint)

We could now just go after the bean’s attributes by retrieving them using the full path. Let’s say we want to monitor the number of connections acquired:

def connAcquired=monitoringBean.numconnacquired.count

That’s all there is to it; this gives us back the value of the attribute. We could of course build a map of attributes and retrieve them one by one:

attributeMap = [numconnacquired:'count', numconnfree:'current', numconnused:'current']
attributeMap.each { attr, stat -> println attr + ' ' + monitoringBean."$attr"."$stat" }

This would then look something like this:

numconnacquired 2117663
numconnfree 41
numconnused 54

Of course, if we wanted to monitor other beans as well, we would have to create an attribute map for each of them. Why not go the other way around? Get the MBean, get all its attributes, retrieve all values for all attributes, and use only whatever we need. Let’s create a map of MBeans, containing a label (that we will later use for logging) and the entry point (object name). We will first do this for the JDBC Connection Pool and Thread Pool monitoring beans:

beanMap=[
	ThreadPoolMonitor: [label:'Monitor - Thread Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=thread-pool-mon,name=network/jk-main-listener-1/thread-pool'],
	JdbcMonitor: [label:'Monitor - JDBC Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'],
]

for (e in connectorMap ) {
	beanMap.each { key,value -> setMonitoringBean(e.key,e.value,beanMap."$key".entryPoint,beanMap."$key".label)}
}

Let’s take a look at the setMonitoringBean method, which takes the following arguments: server, connection, object name (entry point) and label:

public static void setMonitoringBean(server,connection, entryPoint, label) {
		
		// Initialize the value Map. We will hold all attributes and values in this map
		def valueMap=[:]
		def timestamp = new Date().format("yyyy-MM-dd HH:mm:ss")
		try {		
			// Connect to the MBean using the given server and MBean Connection Information
			def monitoringBean =  new GroovyMBean(connection,entryPoint)
			// Get the MBeans existing attributes and add them to a map so we could traverse it
			def attributeList = [ monitoringBean.listAttributeDescriptions()].flatten()
			// We need to split each attribute description so we can check the attribute's type. Currently we support two kinds: composite types (having child attributes) and simple numeric types (single values)
			// Traverse each attribute and store the values in the value map
			//def beanObjectName=monitoringBean.name()											
			attributeList.each { 
				def splitter=it.split(' ')
				def attributeType=splitter[1],  attributeName=splitter[2]				
				if (attributeType=='javax.management.openmbean.CompositeData' ){
				// Only store numeric values; filter out timestamp attributes
					monitoringBean."${attributeName}".contents.each { key,value -> 						
						try {
							if ("$value".matches("[0-9].*") && !"$key".matches(".*Time"))  valueMap.put(label+"|"+attributeName+"-"+key,value)
						}
						catch (Exception e) {logFile << 'Exception returned when checking attribute: '+attributeName+'\t'+e+'\n'}	
					}					
				}
				else
				{
					if (attributeType=='long' || attributeType=='double' || attributeType=='java.lang.Long' || attributeType=='java.lang.Integer'){									
					// Directly store the value of the attribute, since it is a simple attribute
					try {
						def valueHolder=monitoringBean."${attributeName}"
						valueMap.put(label+"|"+attributeName,valueHolder)
						
					}
					catch (Exception e) {logFile << 'Exception returned when checking attribute: '+attributeName+'\t'+e+'\n'}
					}
				}
			}
			// Flush the valueMap into the result file
			valueMap.each { entry -> resultFile << server+"|"+timestamp+"|"+"$entry".replaceAll('=','|')+"|"+testId+"\n"}
			valueMap.clear()						
		}
		catch (Exception e) {
				logFile << 'Something went wrong\n'
				println e
				valueMap.clear()
		}
	}
This would now return the following results:

myserver|2015-10-29 17:23:11|Monitor - Thread Pool|corethreads-count|5 
myserver|2015-10-29 17:23:11|Monitor - Thread Pool|currentthreadsbusy-count|0 
myserver|2015-10-29 17:23:11|Monitor - Thread Pool|totalexecutedtasks-count|49424 
myserver|2015-10-29 17:23:11|Monitor - Thread Pool|maxthreads-count|1024 
myserver|2015-10-29 17:23:11|Monitor - Thread Pool|currentthreadcount-count|83 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numpotentialconnleak-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnsuccessfullymatched-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnfailedvalidation-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnreleased-count|343676 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|waitqueuelength-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnfree-current|95 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnfree-highWaterMark|95 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnfree-lowWaterMark|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|connrequestwaittime-current|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|connrequestwaittime-highWaterMark|4586 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|connrequestwaittime-lowWaterMark|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnused-current|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnused-highWaterMark|67 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnused-lowWaterMark|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconndestroyed-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnacquired-count|343676 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|averageconnwaittime-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconntimedout-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconnnotsuccessfullymatched-count|0 
myserver|2015-10-29 17:23:11|Monitor - JDBC Pool|numconncreated-count|95 

Some words on the attributes and their types. We can have attributes of type:

  • composite (with subattributes)
  • long
  • integer
  • string
  • boolean

That is the reason we need to check each attribute to see whether we have to retrieve its subattributes or its value. This way we can dynamically retrieve all attributes of an MBean and decide afterwards which we should use and which not.

If we add a map of servers as well, we only need to connect once, and can then retrieve the results by polling the servers periodically:

def serverList = [
server1:"server1:"+monitoringPort,
server2:"server2:"+monitoringPort,
server3:"server3:"+monitoringPort,
]

All we have to do now is add a loop and poll the MBeans periodically. My implementation relies on the existence of a control file: as long as the file exists, the beans will be polled at a 5-second interval.
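The control-file idea can be sketched like this (in plain Java rather than Groovy; the maxIterations guard is a hypothetical addition, purely so the sketch terminates on its own):

```java
import java.io.File;

public class PollLoop {
    // Poll while the control file exists: one collection pass per iteration,
    // then sleep for the poll interval. maxIterations is only a safety guard
    // for this sketch; the real script loops until the file is removed.
    static int pollWhileControlFileExists(File controlFile, long pollTimerMs,
                                          int maxIterations) throws InterruptedException {
        int iterations = 0;
        while (controlFile.exists() && iterations < maxIterations) {
            // here the real script calls setMonitoringBean(...) for every
            // server in connectorMap and every bean in beanMap
            iterations++;
            Thread.sleep(pollTimerMs);
        }
        return iterations;
    }

    public static void main(String[] args) throws Exception {
        File control = File.createTempFile("control_file", null);
        System.out.println(pollWhileControlFileExists(control, 10, 3)); // 3
        control.delete();
        System.out.println(pollWhileControlFileExists(control, 10, 3)); // 0
    }
}
```

Deleting the control file is what stops the monitoring loop from the outside, without having to kill the process.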

Aggregation of JMX Monitoring collected with Groovy

All we need to do now is to import the data into a database of our choice, and draw the charts accordingly. This would look something like:

Groovy JMX JDBC Connection Pool Monitoring


Adding monitoring for a new MBean

All you have to do is extend the beanMap with the new bean (or beans). Let’s say we would like some statistics on the memory spaces:


beanMap=[
 ThreadPoolMonitor: [label:'Monitor - Thread Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=thread-pool-mon,name=network/jk-main-listener-1/thread-pool'],
 JdbcMonitor: [label:'Monitor - JDBC Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'],
 MemoryEden: [label:'Monitor - MemoryEden', entryPoint:'java.lang:type=MemoryPool,name=Par Eden Space'],
 MemoryPerm: [label:'Monitor - MemoryPerm', entryPoint:'java.lang:type=MemoryPool,name=CMS Perm Gen'],
 MemoryOld: [label:'Monitor - MemoryOld', entryPoint:'java.lang:type=MemoryPool,name=CMS Old Gen'],
 MemorySurvivor: [label:'Monitor - MemorySurvivor', entryPoint:'java.lang:type=MemoryPool,name=Par Survivor Space']
]

That’s it. No other hassle, nothing more. Just add a new server, or a new bean, and there you go. And the best part of it: it connects only once to each of the servers, and then acts as an aggregator… Groovy, isn’t it?

An aggregated report could then look like this:

Groovy JMX Monitoring Report


Feel free to use the Groovy script I have created; adapt it, extend it and so on. Critical opinions are as welcome as improvement ideas😉 Let’s keep the open source going… APM tools can be so expensive nowadays.

I can go home to my kids now. Now this is groovy!

/*  
	Created: Alexandru Ersenie	
	Groovy script for monitoring Application Server over JMX Protocol
	Usage: groovy $script_name $test_id $workspace $jmxPort
	Usage example: groovy jmx_mon.groovy 21322 /home/testing 22086
	
	Define the list of servers you want to monitor : class Main -> serverList
	Define the list of mbeans you want to monitor : class Main -> beanMap
	Define the user and password for the JMX Connection: class Connector (default admin:adminadmin)
	Define the polling period in milliseconds: class Main -> pollTimer (default: 5000)
*/
import javax.management.ObjectName
import javax.management.remote.JMXConnectorFactory as JmxFactory
import javax.management.remote.JMXServiceURL as JmxUrl
import java.util.HashMap
import javax.management.remote.*
import java.text.DateFormat
import java.util.regex.*


public class Monitoring {
	String connection, entryPoint
	static logFile,resultFile,monitoringFile,testId
	//Constructor
	Monitoring (connection, entryPoint) {
		this.connection = connection
		this.entryPoint = entryPoint			
	}	
	public static void setMonitoringBean(server,connection, entryPoint, label) {
		
		// Initialize the value Map. We will hold all attributes and values in this map
		def valueMap=[:]
		def timestamp = new Date().format("yyyy-MM-dd HH:mm:ss")
		try {		
			// Connect to the MBean using the given server and MBean Connection Information
			def monitoringBean =  new GroovyMBean(connection,entryPoint)
			// Get the MBeans existing attributes and add them to a map so we could traverse it
			def attributeList = [ monitoringBean.listAttributeDescriptions()].flatten()
			// We need to split each attribute description so we can check the attribute's type. Currently we support two kinds: composite types (having child attributes) and simple numeric types (single values)
			// Traverse each attribute and store the values in the value map
			//def beanObjectName=monitoringBean.name()											
			attributeList.each { 
				def splitter=it.split(' ')
				def attributeType=splitter[1],  attributeName=splitter[2]				
				if (attributeType=='javax.management.openmbean.CompositeData' ){
					// Only store numeric values; filter out timestamp attributes
					monitoringBean."${attributeName}".contents.each { key,value -> 						
						try {
							if ("$value".matches("[0-9].*") && !"$key".matches(".*Time"))  valueMap.put(label+"|"+attributeName+"-"+key,value)
						}
						catch (Exception e) {logFile << 'Exception returned when checking attribute: '+attributeName+'\t'+e+'\n'}	
					}					
				}
				else
				{
					if (attributeType=='long' || attributeType=='double' || attributeType=='java.lang.Long' || attributeType=='java.lang.Integer'){									
					// Directly store the value of the attribute, since it is a simple attribute
					try {
						def valueHolder=monitoringBean."${attributeName}"
						valueMap.put(label+"|"+attributeName,valueHolder)
						
					}
					catch (Exception e) {logFile << 'Exception returned when checking attribute: '+attributeName+'\t'+e+'\n'}
					}
				}
			}
			// Flush the valueMap into the result file
			valueMap.each { entry -> resultFile << server+"|"+timestamp+"|"+"$entry".replaceAll('=','|')+"|"+testId+"\n"}
			valueMap.clear()						
		}
		catch (Exception e) {
				logFile << 'Something went wrong\n'
				println e
				valueMap.clear()
		}
	}

	public static void main(String[] args) {
		testId = args[0]
		def runtimeFolder = args[1]
		def monitoringPort = args[2]
		def beanMap = [:]
		def pollTimer = 5000
		resultFile = new File(runtimeFolder+"/monitoring/glassfish_stats.log")
		monitoringFile = new File(runtimeFolder+"/monitoring/control_file")
		logFile = new File(runtimeFolder+"/run.log")
		resultFile.write ''
		def serverList = [
			server1:"server1:"+monitoringPort,
			server2:"server2:"+monitoringPort,
			server3:"server3:"+monitoringPort,
		]
		def connectorMap = [:]
		if (monitoringFile.exists()) {
			serverList.each { key, value ->
				if (key!='tstjms201c') {
					println key
					connectorMap.put(key,new Connector(value,'admin','adminadmin').connect())}
				else {
				println key
					connectorMap.put(key,new Connector(value,'admin','admin').connect())
					}					
			}
			}
			else 
			{
				println("Monitoring file not found, monitoring will now exit")
				logFile << 'Monitoring file not found, monitoring will not be performed'
				System.exit(0)
			}
			beanMap=[
				ThreadPoolMonitor: [label:'Monitor - Thread Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=thread-pool-mon,name=network/jk-main-listener-1/thread-pool'],
				JdbcMonitor: [label:'Monitor - JDBC Pool', entryPoint:'amx:pp=/mon/server-mon[server],type=jdbc-connection-pool-mon,name=resources/EocPool'],
				MemoryEden: [label:'Monitor - MemoryEden', entryPoint:'java.lang:type=MemoryPool,name=Par Eden Space'],
				MemoryPerm: [label:'Monitor - MemoryPerm', entryPoint:'java.lang:type=MemoryPool,name=CMS Perm Gen'],
				MemoryOld: [label:'Monitor - MemoryOld', entryPoint:'java.lang:type=MemoryPool,name=CMS Old Gen'],
				MemorySurvivor: [label:'Monitor - MemorySurvivor', entryPoint:'java.lang:type=MemoryPool,name=Par Survivor Space']
			]
			while (monitoringFile.exists()) {
				for (e in connectorMap ) {
					beanMap.each { key,value -> setMonitoringBean(e.key,e.value,beanMap."$key".entryPoint,beanMap."$key".label)}
				}
				sleep(pollTimer)
			}
	}
}
public class Connector {
	static server
	def serverUrl, user, password
	Connector (serverUrl, user, password) {
		this.serverUrl = serverUrl
		this.user = user
		this.password = password
	}	
	
	def connect () {
		HashMap   environment = new HashMap();
		String[]  credentials = [user, password];	
		environment.put (JMXConnector.CREDENTIALS, credentials);
	
		// Connect to remote MBean Server
		def jmxUrl = 'service:jmx:rmi:///jndi/rmi://'+serverUrl+'/jmxrmi'
		try {
			println jmxUrl
			server = JmxFactory.connect(new JmxUrl(jmxUrl),environment).MBeanServerConnection
			return server
		}
		catch (Exception e) {				
				println("Could not connect to mbean") 
				//System.exit(0)		
		}
	}
}

JAMDL – Java Automatic Memory Leak Detector using JMap, Jasper and MySQL

I had been working on this idea for a couple of years, trying to give it a shape, but never really finding the time for the details. The basic concept was simple:

  1. Start monitoring of objects
  2. Run a test
  3. Collect and import monitoring metrics
  4. Use math and estimation to detect memory leaks

The memory leak

When does an object become a memory leak suspect? Quite simple. I will use two acronyms here:

  1. SNI – Start Number of Instances
  2. ENI – End Number of Instances

Taking the shortest path, one would say whenever this result is returned:

ENI-SNI > 0

That means: “if the number of instances at the end of the test is higher than at the beginning, we have a memory leak”. Well, not necessarily:

  • some objects may be initialized only by the test itself when loading specific classes, so they were not there when the application server started
  • soft references: the objects may still be collected, meaning it is up to the garbage collector to decide when to remove them
  • session timeouts: some users close their sessions by logging out, but most just close their browser. In that case the session timeout is responsible for removing the session and its attached objects, and the timeout may vary depending on the implementation. This means it is not the test end timestamp that is decisive, but the test end timestamp plus the timeout

Now just saying “higher number of instances” is not enough. We need to evaluate the delta between SNI and ENI. We can, for example, use the standard deviation: even with only two measurements, if the difference is high enough we can assume that something is not going as planned and we might have a memory leak.
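As a quick numeric sketch of that idea (plain Java; the threshold of 10 is just an illustrative value):

```java
public class LeakCheck {
    // Sample standard deviation of the two measurements; for just SNI and
    // ENI this reduces to |ENI - SNI| / sqrt(2).
    static double stdev(double sni, double eni) {
        double mean = (sni + eni) / 2.0;
        return Math.sqrt((sni - mean) * (sni - mean) + (eni - mean) * (eni - mean));
    }

    // An object becomes a leak suspect once the instance count grew and the
    // deviation exceeds a chosen threshold.
    static boolean isSuspect(long sni, long eni, double threshold) {
        return eni > sni && stdev(sni, eni) > threshold;
    }

    public static void main(String[] args) {
        System.out.println(stdev(1035, 1035));        // 0.0 -> stable object count
        System.out.println(isSuspect(118, 120, 10));  // false: small delta
        System.out.println(isSuspect(100, 5000, 10)); // true: strong growth
    }
}
```

The threshold is the knob you tune: too low and you drown in false suspects, too high and you miss slow leaks.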

Monitoring the JVM Object Map

jmap is a great tool (part of the JDK) that we can use to inspect the memory map. Using jmap, one can at any point in time (with some overhead, of course, but not that much) retrieve a list of all objects residing in the heap. The results look something like this:

Java Object Map – Objects and number of instances in the heap

1:       1803751      166592520  [C
2:        347130      103348904  [B
3:        565934       74272832  <constMethodKlass>
4:        565934       72452448  <methodKlass>
5:        242821       62836312  [I
6:         55222       60738280  <constantPoolKlass>
7:         55221       40555296  <instanceKlassKlass>
8:        484966       38797280  java.lang.reflect.Method
9:        886626       35851896  [Ljava.lang.Object;
10:        391682       32626104  [Ljava.util.HashMap$Entry;
11:         46464       32591136  <constantPoolCacheKlass>
12:       1351668       32440032  java.lang.String
13:        748338       23946816  java.util.HashMap$Entry
14:        426004       20448192  java.util.HashMap
15:        501680       20067200  java.util.LinkedHashMap$Entry
16:        820685       19696440  java.util.ArrayList
17:        233944       16843968  java.lang.reflect.Field
18:        360930       11549760  java.util.concurrent.ConcurrentHashMap$HashEntry

The first column is the rank of the class in the histogram. Note that, despite looking like an ID, it can change between snapshots (as the timestamped samples below show), so rows have to be matched by class name.

The second column is the number of instances of that class in the heap.

The third column is the total size (in bytes) those instances occupy in the heap.
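A histogram row is easy to parse programmatically; here is a small plain-Java sketch, assuming whitespace-separated columns as in the sample above:

```java
public class HistoLine {
    final int rank;
    final long instances;
    final long bytes;
    final String className;

    HistoLine(String line) {
        // A histogram row looks like: "  12:   1351668   32440032  java.lang.String"
        String[] cols = line.trim().split("\\s+");
        rank = Integer.parseInt(cols[0].replace(":", ""));
        instances = Long.parseLong(cols[1]);
        bytes = Long.parseLong(cols[2]);
        className = cols[3];
    }

    public static void main(String[] args) {
        HistoLine l = new HistoLine("12:       1351668       32440032  java.lang.String");
        System.out.println(l.rank + " " + l.instances + " " + l.bytes + " " + l.className);
        // prints: 12 1351668 32440032 java.lang.String
    }
}
```

Real jmap output also contains a header and a total line, which would have to be skipped before parsing.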

As I mentioned before, in order to perform memory analysis we need to gather at least two metrics:

  • object occupancy before starting the test scenario – SNI
  • object occupancy after ending the test scenario – ENI

Those of you reading this post should know by now the two types of garbage collection (young and full) and how garbage collection works, therefore I will jump straight into the details of the problem.

One may now ask: what if the objects are dead and waiting to be collected by the next garbage collection? Would the results still be reliable?

We need to make sure that both SNI and ENI are measured after a FULL GARBAGE COLLECTION. That way we’ll make sure there is no dead object waiting to be collected.
So, our scenario up to now runs like this:

  1. Full Garbage Collection -> Retrieve SNI for all live objects
  2. Run the test
  3. Wait for the session timeout
  4. Full Garbage Collection -> Retrieve ENI for all live objects

On the other hand, we would still want to see what happens with the objects WHILE the test is running: are they collected by the young garbage collector at all? Do we see an ever-increasing line, or does it decrease as well? As I said, we are talking about suspects, so we need some more proof to decide whether we are dealing with a leak or not.

So let’s add a loop and retrieve the TNI (temporary number of instances) every couple of seconds.

Presuming our performance test triggers at least a couple of young garbage collections, we can retrieve the heap occupancy map regularly. Adding timestamps to the results then allows us to see the lifecycle of each object during the performance test. This would look something like:


ID                Instances   Bytes   Name      TimeStamp
262:          1035         281520  MyObject1,15-00-57
457:          1035          91080  MyObject2,15-00-57
613:           475          45600  MyObject3,15-00-57
642:           414          39744  MyObject4,15-00-57
689:           267          32040  MyObject5,15-00-57
862:           177          18408 MyObject6,15-00-57
1434:           118           4720  MyObject7,15-00-57
283:           788         214336  MyObject1,15-01-30
493:           788          69344  MyObject2,15-01-30
662:           369          35424  MyObject3,15-01-30
699:           308          29568  MyObject4,15-01-30
733:           214          25680  MyObject5,15-01-30
955:           135          14040 MyObject6,15-01-30
1405:           118           4720  MyObject7,15-01-30
285:           726         197472  MyObject1,15-02-03
495:           726          63888  MyObject2,15-02-03
657:           345          33120  MyObject3,15-02-03
696:           284          27264  MyObject4,15-02-03
726:           202          24240  MyObject5,15-02-03
973:           118          12272 MyObject6,15-02-03
1365:           118           4720  MyObject7,15-02-03
318:           411         111792  MyObject1,15-02-36
556:           411          36168  MyObject2,15-02-36
716:           217          20832  MyObject3,15-02-36
786:           138          16560  MyObject5,15-02-36
818:           156          14976  MyObject4,15-02-36
1290:           120           4800  MyObject7,15-02-36
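Rows like the ones above can be produced by tagging each histogram line with the collection timestamp; a small plain-Java sketch of that step (the HH-mm-ss format matches the timestamps above):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class TniTagger {
    // Append a collection timestamp in HH-mm-ss format to one histogram row.
    // The comma-separated layout mirrors the sample output above; real rows
    // would first be validated/parsed before tagging.
    static String tag(String histoLine, Date when) {
        String stamp = new SimpleDateFormat("HH-mm-ss").format(when);
        return histoLine.trim() + "," + stamp;
    }

    public static void main(String[] args) {
        System.out.println(tag("262:          1035         281520  MyObject1", new Date()));
    }
}
```

Each polling pass tags all rows with the same timestamp, so the rows of one pass can later be grouped and charted per object over time.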

AMDL Reports – Object Lifecycle Reports

We can now expect two types of charts:

Memory Leak - Increasing trend and no garbage collection


Here we see an increasing trend, without any decreases over time, meaning the object is not being collected at all.

Object Lifecycle - No Memory Leak - Stable trend, increasing and decreasing line


Here we see a stable trend, where the objects are being collected

Assuming that at the end of the test we import all monitoring data into the database and then generate reports containing the three items (SNI, TNI, ENI), the full list of steps to perform JAMDL would now be:

  1. Full Garbage Collection -> Retrieve SNI for all live objects
  2. Start and run the test
  3. Perform TNI collection while the test is running and the session timeout has not occurred
  4. Wait for the session timeout and stop TNI collection
  5. Full Garbage Collection -> Retrieve ENI for all live objects
  6. Import the results into the database
  7. Compute the deviation between SNI and ENI
  8. Automatic generation of performance report containing all memory leak suspects that resulted from point 7

This is how an AMDL session would look in VisualVM:

AMDL Visual VM Session


AMDL Main Report with Memory Leak Suspects

Integrating the results into the main report could look like this (for presentation purposes I have set the deviation threshold very low, to 10):

AMDL Performance Report

AMDL Performance Report

We can now drill into the two memory leak suspects and see if there is a memory leak indeed:

Object Lifecycle - Drill down report - No memory leak

Object Lifecycle – Drill down report – No memory leak

And since this post is about memory leaks, here is what an actual one looks like:

Memory Leak - Increasing trend and no garbage collection

Memory Leak – Increasing trend and no garbage collection

Using a relational database, you can choose your own implementation of the deviation:

  1. ENI – SNI: You can compute the difference between ENI and SNI and set a threshold. For example, if at the end of the test an object has 100 more instances, report it as a suspect
  2. STDEV(ENI,SNI): You can compute the deviation between the two values, and set a threshold.

It is up to you to decide on the implementation that suits you best.
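Option 1 above can be sketched without a database at all, e.g. over two flat snapshot files. This is only a sketch; the "class,instances" CSV format for the start (SNI) and end (ENI) inventories is an assumption, as are the file names:

```shell
# Report every class whose instance count grew by more than THRESHOLD
# between the start (SNI) and end (ENI) snapshots.
THRESHOLD=100

cat > sni.csv <<'EOF'
MyObject1,411
MyObject2,411
EOF

cat > eni.csv <<'EOF'
MyObject1,530
MyObject2,415
EOF

awk -F, -v t="$THRESHOLD" '
    NR == FNR { sni[$1] = $2; next }          # first file: start instances
    ($2 - sni[$1]) > t {                      # second file: end instances
        printf "SUSPECT %s: +%d instances\n", $1, $2 - sni[$1]
    }
' sni.csv eni.csv > suspects.txt

cat suspects.txt
# -> SUSPECT MyObject1: +119 instances
```

In a database the same logic is a join of the two inventories on the class name with a `HAVING` clause on the difference.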

 

One last word: in theory you can use this even in a production environment, as long as you do not retrieve the memory map too often and do not trigger full garbage collections. In that case, of course, the monitoring timespan must be long enough to allow objects to be collected by the Old Generation garbage collection. Nevertheless, it is a point worth considering that could save you some time and actively report possible memory leak suspects.

Cheers, have fun and enjoy. I will gladly help with further information regarding any of the eight points above.

Alex

 

 

 

One Step Monitoring of Key Indicators in Glassfish 3.1 via REST

February 18, 2014 1 comment

Ever since upgrading Glassfish from v3.0.1 to v3.1.2.2, I had a note to myself to redesign and simplify the active monitoring I was using in my load testing scripts, so that I could easily monitor things like:

  • JDBC Connections used
  • JDBC Connections timed out
  • JDBC Connections free
  • Http-threads busy

and so on.

Since the interface has changed, and my previous monitoring implementation also relied on pretty much hardcoded values (of which I am absolutely no fan), it was time to redesign this in a smarter way: make use of the new interface and try to get all my metrics in one step, or in as few steps as possible.

Glassfish 3.1.2.2 Rest Monitoring

You probably know by now how to enable and disable monitoring, but it does not hurt to detail it once more.

First we check the current status of the monitoring levels:

asadmin -p 11048 --passwordfile /opt/glassfish/portal/v3.1.2.2/passwords get server.monitoring-service.module-monitoring-levels.*

server.monitoring-service.module-monitoring-levels.connector-connection-pool=OFF
server.monitoring-service.module-monitoring-levels.connector-service=OFF
server.monitoring-service.module-monitoring-levels.deployment=OFF
server.monitoring-service.module-monitoring-levels.ejb-container=OFF
server.monitoring-service.module-monitoring-levels.http-service=OFF
server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=HIGH
server.monitoring-service.module-monitoring-levels.jersey=OFF
server.monitoring-service.module-monitoring-levels.jms-service=OFF
server.monitoring-service.module-monitoring-levels.jpa=OFF
server.monitoring-service.module-monitoring-levels.jvm=OFF
server.monitoring-service.module-monitoring-levels.orb=OFF
server.monitoring-service.module-monitoring-levels.security=OFF
server.monitoring-service.module-monitoring-levels.thread-pool=OFF
server.monitoring-service.module-monitoring-levels.transaction-service=OFF
server.monitoring-service.module-monitoring-levels.web-container=HIGH
server.monitoring-service.module-monitoring-levels.web-services-container=OFF

Now we set the desired monitoring level; let's take the http-service as an example:

asadmin -p 11048 --passwordfile /opt/glassfish/portal/v3.1.2.2/passwords set server.monitoring-service.module-monitoring-levels.http-service=HIGH
server.monitoring-service.module-monitoring-levels.http-service=HIGH
Command set executed successfully.

Let's check if it is really enabled (we choose XML as the output format; available formats are HTML, XML and JSON):

curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://server:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<map>
<entry key="extraProperties">
<map>
<entry key="entity">
<map>
<entry key="corethreads">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042076</number>
</entry>
<entry key="count">
<number>5</number>
</entry>
<entry key="description" value="Core number of threads in the thread pool"/>
<entry key="name" value="CoreThreads"/>
<entry key="lastsampletime">
<number>1392722067843</number>
</entry>
</map>
</entry>
<entry key="currentthreadsbusy">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042077</number>
</entry>
<entry key="count">
<number>0</number>
</entry>
<entry key="description" value="Provides the number of request processing threads currently in use in the listener thread pool serving requests"/>
<entry key="name" value="CurrentThreadsBusy"/>
<entry key="lastsampletime">
<number>1392738373205</number>
</entry>
</map>
</entry>
<entry key="totalexecutedtasks">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042077</number>
</entry>
<entry key="count">
<number>123022</number>
</entry>
<entry key="description" value="Provides the total number of tasks, which were executed by the thread pool"/>
<entry key="name" value="TotalExecutedTasksCount"/>
<entry key="lastsampletime">
<number>1392738373205</number>
</entry>
</map>
</entry>
<entry key="maxthreads">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042076</number>
</entry>
<entry key="count">
<number>1024</number>
</entry>
<entry key="description" value="Maximum number of threads allowed in the thread pool"/>
<entry key="name" value="MaxThreads"/>
<entry key="lastsampletime">
<number>1392722067843</number>
</entry>
</map>
</entry>
<entry key="currentthreadcount">
<map>
<entry key="unit" value="count"/>
<entry key="starttime">
<number>1392653042077</number>
</entry>
<entry key="count">
<number>150</number>
</entry>
<entry key="description" value="Provides the number of request processing threads currently in the listener thread pool"/>
<entry key="name" value="CurrentThreadCount"/>
<entry key="lastsampletime">
<number>1392737736187</number>
</entry>
</map>
</entry>
</map>
</entry>
<entry key="childResources">
<map/>
</entry>
</map>
</entry>
<entry key="message" value=""/>
<entry key="exit_code" value="SUCCESS"/>
<entry key="command" value="Monitoring Data"/>
</map>

Of course you can open the same URL in a browser and have it nicely displayed, but we need the curl version in order to further extract the desired values.

Extracting xml tags with awk and xmllint under bash

Let’s say we now need to extract following metrics:

  • currentthreadsbusy
  • totalexecutedtasks
  • maxthreads
  • currentthreadcount

Since my Linux distribution did not have an XML extractor, but did have an XML parser, and I wanted this solution to be portable to any Linux machine, I decided to go the hard way and use awk and xmllint.

xmllint: The xmllint program parses one or more XML files, specified on the command line as XML-FILE (or the standard input if the filename provided is -). It prints various types of output, depending upon the options selected. It is useful for detecting errors both in XML code and in the XML parser itself.

I used a trick here and reformatted the XML response, displaying it as pretty-printed XML with line breaks:

curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool | xmllint --format -

Now I need to get my monitoring items. The trick I used here is as follows:

  1. Extract everything starting with the item I am looking for, in this case currentthreadcount, searching up to the pattern "/map". This will return the following fragment:

    currentthreadcount">
    <map>
    <entry key="unit" value="count"/>
    <entry key="starttime">
    <number>1392653042077</number>
    </entry>
    <entry key="count">
    <number>150</number>
    </entry>
    <entry key="description" value="Provides the number of request processing threads currently in the listener thread pool"/>
    <entry key="name" value="CurrentThreadCount"/>
    <entry key="lastsampletime">
    <number>1392737736187</number>
    </entry>
    </map>

  2. Next I need to extract the monitoring value I am interested in. In my case this can be either "count" or "current", so I use awk once more and look for the following pattern:

    awk '/<entry key="count|current">/,/<\/entry>/'

  3. This will now return the following fragment:

    <entry key="count">
    <number>150</number>
    </entry>

  4. The only thing left to do is use a regular expression to extract only the digits:

    grep -o '[0-9]*'

Let us put it all together now:

  1. Store the response of the curl request into a variable:

    http_mon_response=`curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool`

  2. Retrieve the desired metric:

    val=`echo $http_mon_response | xmllint --nowarning --format - | awk '/currentthreadcount/,/\/map/' | awk '/<entry key="count|current">/,/<\/entry>/' | grep -o '[0-9]*'`

Since we want to make this dynamic, using one request to extract as many metrics as possible, let's write a small for loop that does this for us.

Suppose we want to retrieve the following monitoring metrics:

JDBC

  • numconnused
  • numconnfree
  • numconntimedout

HTTP

  • currentthreadsbusy

We will create a function called trace_gf_statistics that will post the curl request regularly, and write the outputs into an external file:

function trace_gf_statistics
{
    # List of JDBC monitoring items to be retrieved
    jdbc_names_short=(numconnused numconnfree numconntimedout)

    # List of HTTP thread pool monitoring items to be retrieved
    http_names=(currentthreadsbusy)

    # Only run the monitoring while this file exists. This file will be
    # removed by the controlling process once the monitoring is stopped
    status=`ls /tmp | grep glassfish_stats`
    while [ "$status" != "" ];
    do
        MONITOR_TIMESTAMP=`date +%H-%M-%S`

        # Store the JDBC metrics into a variable
        jdbc_mon_response=`curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/resources/EocPool`

        # Store the HTTP thread pool metrics into a variable
        http_mon_response=`curl -k -s -u admin:adminadmin -H "Accept: application/xml" https://myserver:11048/monitoring/domain/server/network/jk-main-listener-1/thread-pool`

        # Now iterate through all monitoring items we defined in the beginning and output the results
        for jdbc_mon_item in ${jdbc_names_short[@]} ;
        do
            val=`echo $jdbc_mon_response | xmllint --nowarning --format - | awk '/'${jdbc_mon_item}'/,/\/map/' | awk '/<entry key="count|current">/,/<\/entry>/' | grep -o '[0-9]*'`
            echo $MONITOR_TIMESTAMP":JDBC-"$jdbc_mon_item:$val >> ${JMETER_RESULTS}/glassfish_stats.log
        done
        for http_mon_item in ${http_names[@]} ;
        do
            val=`echo $http_mon_response | xmllint --nowarning --format - | awk '/'${http_mon_item}'/,/\/map/' | awk '/<entry key="count|current">/,/<\/entry>/' | grep -o '[0-9]*'`
            echo $MONITOR_TIMESTAMP":HTTP-"$http_mon_item:$val >> ${JMETER_RESULTS}/glassfish_stats.log
        done

        # Post the requests every 3 seconds and then check for the existence of the status file
        sleep 3
        status=`ls /tmp | grep glassfish_stats`
    done
}
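The function above stops when the status file disappears, so the controlling process starts and stops the monitoring via that file. Here is a sketch of that start/stop protocol; the stub loop stands in for the real function so the example is self-contained, and the flag file name is hypothetical:

```shell
# The monitor polls for a flag file and exits once the controller removes it.
FLAG=/tmp/glassfish_stats_demo

monitor_stub() {
    while [ -e "$FLAG" ]; do
        date +%H-%M-%S >> "$FLAG.log"    # the real function appends metrics here
        sleep 1                          # the real function sleeps 3 seconds
    done
}

touch "$FLAG"
monitor_stub &                           # in reality: trace_gf_statistics &
MON_PID=$!
sleep 2                                  # stand-in for starting and running the load test
rm -f "$FLAG"                            # signals the loop to stop
wait "$MON_PID"
echo "monitoring stopped"
```

The same pattern works from any test driver (JMeter setup/teardown, a Jenkins job, etc.): create the flag before the test, remove it afterwards.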

Results

And this is what a typical monitoring output file looks like, with ":" as the delimiter:

16-32-32:JDBC-numconnused:131
16-32-32:JDBC-numconnfree:31
16-32-32:JDBC-numconntimedout:0
16-32-32:HTTP-currentthreadsbusy:58
16-32-35:JDBC-numconnused:110
16-32-35:JDBC-numconnfree:10
16-32-35:JDBC-numconntimedout:0
16-32-35:HTTP-currentthreadsbusy:40
16-32-38:JDBC-numconnused:110
16-32-38:JDBC-numconnfree:10
16-32-38:JDBC-numconntimedout:0
16-32-38:HTTP-currentthreadsbusy:36
16-32-42:JDBC-numconnused:103
16-32-42:JDBC-numconnfree:3
16-32-42:JDBC-numconntimedout:0
16-32-42:HTTP-currentthreadsbusy:27
16-32-45:JDBC-numconnused:121
16-32-45:JDBC-numconnfree:21
16-32-45:JDBC-numconntimedout:0
16-32-45:HTTP-currentthreadsbusy:43
16-32-48:JDBC-numconnused:83
16-32-48:JDBC-numconnfree:17
16-32-48:JDBC-numconntimedout:0
16-32-48:HTTP-currentthreadsbusy:7
16-32-51:JDBC-numconnused:126
16-32-51:JDBC-numconnfree:37
16-32-51:JDBC-numconntimedout:0
16-32-51:HTTP-currentthreadsbusy:64
16-32-55:JDBC-numconnused:204
16-32-55:JDBC-numconnfree:74
16-32-55:JDBC-numconntimedout:0
16-32-55:HTTP-currentthreadsbusy:127
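For charting, it can help to pivot this flat log into one row per timestamp first. A minimal sketch, assuming the format above (HH-MM-SS:GROUP-metric:value) and that each timestamp carries its metrics in the same order:

```shell
# Pivot the flat monitoring log into one CSV row per timestamp.
# Sample input taken from the monitoring output shown above.
cat > glassfish_stats.log <<'EOF'
16-32-32:JDBC-numconnused:131
16-32-32:JDBC-numconnfree:31
16-32-32:HTTP-currentthreadsbusy:58
16-32-35:JDBC-numconnused:110
16-32-35:JDBC-numconnfree:10
16-32-35:HTTP-currentthreadsbusy:40
EOF

awk -F: '
    { row[$1] = row[$1] "," $3 }                 # append each value to its timestamp
    END { for (ts in row) print ts row[ts] }     # one CSV line per timestamp
' glassfish_stats.log | sort > glassfish_pivot.csv

cat glassfish_pivot.csv
# -> 16-32-32,131,31,58
#    16-32-35,110,10,40
```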

You can now import the delimited file into whatever reporting tool you like, generating reports like this:

Glassfish Rest JDBC Monitoring Report

JasperReport-RestMonitoring-JDBC-Monitoring-Report

Glassfish Rest HTTP Thread Pool Monitoring Report

JasperReport-RestMonitoring-HTTP-Monitoring-Report

Needless to say, you can extend the monitoring items in the script above with whatever monitors you may need: it suffices to add a corresponding metric array and curl request, and iterate over the new monitoring items.

This is a sample of my performance report, while using Jasper Server and Jasper Reports:

JasperReport-RestMonitoring

I will probably update this script regularly, so come back soon for a new, improved version of it.

Cheers

Alex

Load and performance testing for J2EE – Slides made public

October 9, 2013 1 comment

Hi all,

it has been a while since I posted on my blog. Although I would have liked to post more often, a lot of things have changed in my life, the biggest of them being our son Philip, who came into the world last December. I therefore decided to take a little time off, focus more on our family and spend some quality time with its newest member 🙂

In light of this, I would now like to return with a post I have been postponing for a while, and share the slides I prepared for a presentation I held in Hamburg last year, organized by the Java User Group Hamburg and focusing on load and performance testing for J2EE.

I can only say it was a very successful presentation, attended by 50+ members of the group, on a highly interesting but rarely discussed topic: performance testing in Java.

You will find things like Performance Basics (scope, metrics, factors influencing performance, generating load, performance reports), Monitoring (monitoring types, active and reactive monitoring, CPU, garbage collection, heap and other monitoring) and Tools (open source tools for monitoring, reporting and analysing).

I would be happy to hear your feedback on this one, be it an opinion, a question or even criticism; all are welcome.

Load and Performance Testing for J2EE – An approach using open source tools – By Alexandru Ersenie

Cheers,

Alex

 

P.S. I will start answering to the comments in the days to come. Sorry for the delay

IReport / Jasper Reports – Working with subreports and collections in Jasper Reports

Hi, and sorry for not updating for a while. I am currently under heavy load and can scarcely find time to write, although I have several new topics prepared. I am also working on a presentation on Java performance testing and monitoring, which I will probably hold here in Hamburg on the 18th of July. More on that, for those interested, in a follow-up post.

Now let’s dive into the subject: Working with subreports and collections in Jasper Reports

It seems that several users have been facing this problem, so I thought I would write an explanatory post on the topic.

1. Building the main report

Let’s start with the main report. This looks like this:

Image

The sub-report is the grey box with yellowish highlighting. The main report passes the following four parameters to the sub-report, one of which is a collection:

  • http_request
  • filterstop
  • ic_testconfig
  • filterstart

Please notice how the name matches the expression EXACTLY (the passed parameter has to be named exactly the same as the local variable used in the sub-report).

Also notice the properties of the sub-report, highlighted in the screenshot below:

  • Subreport Expression has to be: "repo:statistics", where statistics is the name of the IReport file containing the designed subreport
  • Expression Class: java.lang.String
  • Connection type: Use a connection expression

Image
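For reference, the same configuration expressed directly in JRXML looks roughly like this. This is only a sketch: just one of the four parameters is shown, and `$P{REPORT_CONNECTION}` is the usual expression behind "Use a connection expression":

```xml
<subreport>
  <subreportParameter name="ic_testconfig">
    <subreportParameterExpression><![CDATA[$P{ic_testconfig}]]></subreportParameterExpression>
  </subreportParameter>
  <connectionExpression><![CDATA[$P{REPORT_CONNECTION}]]></connectionExpression>
  <subreportExpression class="java.lang.String"><![CDATA["repo:statistics"]]></subreportExpression>
</subreport>
```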

2. Creating the sub-report

Let’s create a sub-report in the repository, with the name we just configured in the main report: statistics

My recommendation is to create a single folder in your repository, called “subreports”, and add all sub-reports there. In my case, i have three sub-reports (we will only focus on the statistics subreport):

  • statistics
  • detailed_statistics
  • hudson_statistics

The structure in the repository looks like this:

Image

Let's take a look at my sub-report. It receives the collection as an input parameter. The collection is actually a series of test IDs that I use to build a report over several test runs (for example, if I run a test twice, once with ID 123 and once with ID 124, and I want to see a single report of all transactions for both test runs, I pass both IDs as input):

Image

The query is what takes the collection input parameter and processes it. Let's see what it looks like:

select
count(t) as totaltransactions,
avg(t) as responseaverage,
……
from
testresults tr
where $X{IN,tr.testrun_id,ic_testconfig} and DATE_FORMAT(DATE_ADD('1970-01-01 00:00:00', INTERVAL ts*1000 MICROSECOND), '%H:%i:%s') between $P{filterstart} and $P{filterstop}
group by tr.lb

We will now add this sub-report into the Jasper Server Repository, by adding a new resource from the JRXML File we just created for our subreport. We will have to assign two identifiers:

  • Label: statistics
  • Name: rootavg

 

Jasper Server Repository - Adding a JRXML Resource

Jasper Server Repository – Adding a JRXML Resource

 

Now we add the two identifiers mentioned above

Jasper Server Repository - Labeling the JRXML Resource

Jasper Server Repository – Labeling the JRXML Resource

We can now refresh the repository in the local IReport instance and see the sub-report added in the location we chose, with the identifiers we just assigned (label statistics, name rootavg):

Subreport in IReport Repository

Subreport in IReport Repository

3. Adding the sub-report as a resource to the main report

We have now added both the main report and the sub-report. Well, it is not enough to define the sub-report in the main report. The main report has to know where the called sub-report resides, therefore we need to add it as a resource of the main report. Remember that these resources have to be defined and available in the JasperServer Repository (Server Side).

We start by editing the main report on the server side, and adding the resources:

Add subreport as resource in jasper server repository

Add subreport as resource in jasper server repository

We still have to add the parameters on the server side. Remember we want to use a collection. In order to do that, we use a "Multi Select Query Type" Input Control:

 

Jasper Server Add Input control

Jasper Server Add Input control

We configure it as a "Multi Select Query Type" with the same name we are going to use in our report, that being "ic_testconfig":

Jasper Server Collection Input Control

Jasper Server Collection Input Control

After refreshing the Repository in IReport, our Report looks like this. Notice the input controls, and how the collection item is now available

Configured repository with subreports

Configured repository with subreports

4. Final

Let’s review the steps once again:

  1. Create the main report and decide on the parameters you want to pass to the sub-report. Upload the main report to the Jasper Server, and add the parameters on the server side too. The resources always have to be synchronized
  2. Create the sub-report, and the query that will receive the collection. Upload the sub-report to Jasper Server
  3. Add the sub-report as a resource of the main report in Jasper Server
  4. Watch out for the query syntax when using collections:
    1. where $X{IN,tr.testrun_id,ic_testconfig}

I think that's it. I tried to make it as explanatory as possible in the short amount of time I have at my disposal these days. I am really sorry for the delay in replying to comments and posting new content; I hope to get things off my plate in the near future.

Cheers,

Alex

 
