
Archive for the ‘JASPER REPORTS’ Category

Jasper Reports – Cascade Parameter Reports in IReport & Jasper Server 3.7.5

February 2, 2011

I have long awaited this functionality in Jasper Server, and it was finally delivered starting with Jasper Server 3.7.5.

What I needed was a cascade query, which would eventually allow me to select a single id (or a list of ids) based on the previous selections. Since I have spent some time integrating this into my reports, I thought it could help others achieve this as well, since it is not really "out of the box".

I will start by defining the scope of the report, the ER diagram of the tables, and will then dive into configuring the needed resources, both in Jasper Server and IReport.

As I am using Jasper Reports to build performance reports, the following four tables are the relevant ones for this document:

  • info {id, build, portal_version, revision_number} – contains the primary key, the software major version name, and the portal version (the Hudson build id). I am using the primary key in a one-to-many relationship with the following table, which holds the map between the tests that I ran and the software version (the revision_number is obsolete, as it is the same as the Hudson build id)
  • testrun {build_id, test_id} – contains the map between the build and the tests (one to many)
  • testdescription {test_id, test_date, threads, users, rampup, scenario_id, architecture_id, etc.} – holds test information for each test that has been run (number of threads, date, etc.) and the component tested, which in this case is represented by the (badly named) "scenario_id"
  • testresults {id, testrun_id, t, lt, ts, lb, etc.} – holds the test results imported from the JMeter reports

For a better understanding, I have provided the ER diagram for these four tables:

ER Diagram Performance Reports

So, what we want to do is build a report from the test results that reside in the "testresults" table. For that, we need the "testresults.testrun_id" value, which we want to extract after having selected the needed information first. We will follow these steps:

  1. Query and display the contents of the table "info", and select the major version and revision number that we are interested in. We will store the result (only "info.id") in a parameter called ic_version, which we will use in the second query to find all components that have been tested against this software version
  2. Query and display the tested components (scenario_id) from the table "testdescription", based on the chosen info.id, which we now have in the ic_version parameter. We will join over the "testrun" table in order to restrict that information to the chosen build. We will store the result of the selection (only "testdescription.scenario_id") in a second parameter, called ic_component, which we will use to find the tests that have been run against this component
  3. Query and display all tests from "testrun" that match the ic_component criteria (all tests that have been run against a component, on a specific software version). We will store the result of the selection in a third parameter, called ic_testconfig, which we will pass on to the internal report query

Let’s do this, one by one. We will start by creating the needed resource in Jasper Server.

Server Side Configuration – Jasper Server

  1. Query and display the contents of "info", and store the result in a server-side parameter
    1. Log in as Jasper Administrator to your Jasper Server instance
    2. Go to “Input Data Types”. Right click > Add resource – Query
    3. Name it “q_version_select” and label it “Query version select”
    4. Choose the server-side Data Source (from the Repository)
    5. Type in your query:
      select id, build, portal_version, revision_number from info order by id
    6. Now that we have the query, we need a server-side parameter to store the selection that we are going to make from the result set of this query. We need to create an Input Control on the server:
      Go to "Input Data Types". Right click > Add resource > Other - Input Control
    7. Name it "ic_version". The prompt text should be "Choose software version". The type should be "single select query". Click "Next"
    8. Select the query from the repository (the q_version_select query that we just created in steps 1-5)
    9. Select "id" as the value column (this will be the value stored in the parameter), and whatever you want for the displayed (visible) columns. They are only displayed to make your selection easier (I chose them all)
  2. Query and display the tested components on the software version, using the parameter "ic_version"
    1. We need to create the second query, which will use the "ic_version" parameter
    2. Go to "Input Data Types". Right click > Add resource - Query
    3. Name it "q_select_component" and label it "Query component select"
    4. Choose the server-side Data Source (from the Repository)
    5. Type in your query:
      select distinct tr.test_id,td.scenario_id,td.architecture_id
      from testrun tr
      inner join info ON tr.build_id = info.id
      inner join testdescription td on td.test_id = tr.test_id
      where info.id=$P{ic_version} group by td.scenario_id order by scenario_id,architecture_id
    6. Now that we have the query, we need a server-side parameter to store the selection that we are going to make from the result set of this query. We need to create an Input Control on the server:
      Go to "Input Data Types". Right click > Add resource > Other - Input Control
    7. Name it "ic_component". The prompt text should be "Choose tested component". The type should be "single select query". Click "Next"
    8. Select the query from the repository (the q_select_component query that we just created in steps 1-5)
    9. Select "scenario_id" as the value column (this will be the value stored in the parameter), and whatever you want for the displayed (visible) columns. They are only displayed to make your selection easier (I chose scenario_id and architecture_id)
  3. Query and display the test id and test configuration (in case you had different load scenarios for the same test and same component)
    1. We need to create the third query, which will use the "ic_component" parameter
    2. Go to "Input Data Types". Right click > Add resource - Query
    3. Name it "q_select_configuration" and label it "Query test id and configuration"
    4. Choose the server-side Data Source (from the Repository)
    5. Type in your query:
      select
            td.test_id,
            td.scenario_id,
            td.architecture_id,
            td.test_date,
            td.threads,
            td.users
      from
      testdescription td
      inner join
      testrun tr ON td.test_id = tr.test_id
      inner join
      info ON tr.build_id = info.id
      where
      td.scenario_id=$P{ic_component}
            and
      info.id=$P{ic_version}
      order by
            td.architecture_id,td.test_date
    6. Now that we have the query, we need a server-side parameter to store the selection that we are going to make from the result set of this query. We need to create an Input Control on the server:
      Go to "Input Data Types". Right click > Add resource > Other - Input Control
    7. Name it "ic_testconfig". The prompt text should be "Choose test id and configuration". The type should be "multi select query" (you may need to build a report from multiple tests). Click "Next"
    8. Select the query from the repository (the q_select_configuration query that we just created in steps 1-5)
    9. Select "test_id" as the value column (this will be the value stored in the parameter), and whatever you want for the displayed (visible) columns. They are only displayed to make your selection easier (I chose test_id, test_date and threads (virtual users))

We have now reached the final step: the selected test id is stored in a parameter called "ic_testconfig". We will use this parameter in the internal report query in IReport. Let's just see how the cascading looks at this point (of course, a report has already been created in IReport, but I will detail that below):

First we select the software version

Jasper Reports Cascade Parameters

Second, we select the tested component

Jasper Reports Cascade Parameters

Last, we select the tests that we want to build the load report from

So, the main idea is that, based on the first selection, the second drop-down list changes dynamically. The third list changes according to the second selection, and so on. You can have as many drop-down lists as you like in your form.

Client Side Configuration - IReport

In order to build a server-hosted report which uses the resources we defined above, we need to synchronize the resources. Therefore, we need to add the server-side input controls we just defined to the report. You do that by following these steps:

  1. Open IReport and expand the report you want to use
  2. Select the "Input Controls" subfolder of the report - Right Click > Link an existing input control (we want to use the one on the server, otherwise we would have defined it locally)

    IReport - Link existing input control

  3. Choose datatypes as location for the input controls

    IReport - Add server located input control - Datatypes

  4. Add the three input controls that we defined in the first part, one by one: ic_version, ic_component, ic_testconfig
  5. For each server-side input control, we need a locally defined parameter with the same name and type. Add the following three parameters in the Parameters Tree:
    • ic_version of type string
    • ic_component of type string
    • ic_testconfig of type collection, with a default expression of: new ArrayList(Arrays.asList(new String[] {"a"}))

    We are now ready to build the local query (the report query), which will use the last selected parameter, ic_testconfig

  6. Define the local query based on ic_testconfig parameter:
    select
        tr.col1,
        tr.col2,
        etc.
    from
        testresults tr
    where
        $X{IN,tr.testrun_id,ic_testconfig}

As you can see, we use the ic_testconfig parameter from the cascade to define the test ids for the load report. The $X{IN, ...} clause expands at runtime into an SQL IN predicate over the values stored in the ic_testconfig collection.

That's it. Here is a list of things to remember:

  1. Every query should have an input control on the server
  2. Every subsequent query should use the input control(s) created for the previous query as an SQL predicate
  3. Every input control on the server has to be linked in the report as an "existing input control"
  4. Every input control defined on the server and linked as an existing input control also has to be defined as a local parameter, of the same type and with the same name

Command Line Glassfish Monitoring in Jasper Reports using Glassfish Rest Monitoring

January 31, 2011

Recently I have spent some time thinking about how I could integrate active monitoring of critical Glassfish resources into my reactive performance reports. I had a performance report showing me both response times and throughput, but I needed to know why, at some points in time, my throughput was decreasing. There could have been several reasons: JMS, JDBC, too many open connections, etc. I needed to have those values in my report, so I could easily identify problems without proactively monitoring while testing. So I started thinking about ways to do it.

At first, I wanted to monitor the following resources (there are more to follow, but for now I will just stick to these, as they were the ones I needed in the beginning):

  • jdbc usage – the number of connections used at runtime
  • threads busy – the number of busy threads at runtime
  • open http connections – the number of open connections at runtime
  • keep alive connections – the number of connections in keep-alive mode at runtime
  • http peak queued requests – the peak number of requests that had to wait in a queue before being processed
  • count of specific beans in our software

I also needed a solution that would allow me to add new monitoring statistics later, in a "plug-in" fashion.

Basically, there are three ways (that I am aware of, at least) that you can do it:

  1. Using JMX programmatically
    Basically, you use the JMX API, connecting to the MBean server and then interrogating each managed bean according to your needs. As I prefer shell scripting to developing (sic!), I turned my back on this solution and started looking into the next one. Those who want to see this in action can check it out here:

  2. Using a custom-built JMX command line monitor
    I needed a tool that could interrogate the MBean server and write the result to a file at regular intervals. It had to be controllable through my shell script, which would decide what to interrogate and when. There are several solutions on the market (I am talking about the open source ones), like:

    • jmxterm – http://wiki.cyclopsgroup.org/jmxterm – Of all the tools I have tried, this one was the easiest and most straightforward to use. The best thing about it is that you can write the commands into a file and use that file as command input (a minimal command-file sketch follows after this list). That way, if your target is to interrogate the server with long pauses (let's say every 5 minutes), then jmxterm is a pretty good solution.
    • command line jmx client – http://crawler.archive.org/cmdline-jmxclient/ (quite old, not very flexible, only one command at a time)
  3. Using the REST Management Interface provided by Glassfish
    Basically, Glassfish provides two interfaces, one for managing resources, the other one for monitoring. 

    The fact that every attribute can be monitored via an HTTP request makes this the best candidate for my purposes. Let me detail that.
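
Before moving on to the REST interface, here is roughly what the jmxterm command file mentioned in option 2 looks like in practice. Treat this as a sketch only: the jar name and version, the JMX port 8686 and the MBean being queried are assumptions that you would adapt to your own server.

# commands.txt - one jmxterm command per line (hypothetical example)
open localhost:8686
get -b java.lang:type=Memory HeapMemoryUsage
close

# run the command file non-interactively and capture the output
java -jar jmxterm-1.0-alpha-4-uber.jar -i commands.txt -n > jmx_output.txt

Now, back to the REST interface.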

Glassfish comes preconfigured with every monitoring level set as low. This means that in order to start monitoring, we need to enable monitoring for specific modules. Let’s do that, one by one, for the modules that we need. You need to call the management interface of Glassfish:

http://yourserver:4848/management/domain (or go straight to http://yourserver:4848/management/domain/configs/config/server-config/monitoring-service/module-monitoring-levels for the monitoring levels page). Set the desired modules to HIGH:

Glassfish Rest Interface - Enabling Monitoring
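
If you prefer to flip these switches from the command line as well, the same management endpoint also accepts a POST. Treat the following as a sketch: the attribute names (http-service, web-container, jdbc-connection-pool) and the extra header may differ between Glassfish versions.

# enable HIGH monitoring for a few modules via the REST management interface (sketch)
curl -s -u admin:adminadmin -X POST \
     -H "X-Requested-By: cli" \
     -d "http-service=HIGH" -d "web-container=HIGH" -d "jdbc-connection-pool=HIGH" \
     http://yourserver:4848/management/domain/configs/config/server-config/monitoring-service/module-monitoring-levels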


The changes are dynamic; you do not need to restart the server. Let's check if this worked out. I am going to request the keep-alive connection statistics by calling the monitoring interface of the http listener in a browser:

http://yourserver:4848/monitoring/domain/server/network/http-listener-1/keep-alive

The result looks like this:

Monitoring Glassfish Keep Alive Connections

This works for every attribute that you would normally query through the MBeans browser in JConsole or VisualVM. Now, I was speaking about needing a way to control the requests and the delays from the command line (a shell script), and a way to import and present the data in the same report as my performance results. I needed some way to make the request from the command line, so I first turned to wget. Since wget could not append the result to a file the way I wanted, I switched to curl. The request that I just sent above, using a browser, can now be sent as a curl request:

curl -s -u admin:adminadmin http://yourserver:4848/monitoring/domain/server/network/http-listener-1/keep-alive

Now, this is the finest level of granularity that you will get. This request will provide you with all the keep-alive monitoring statistics:

  • countconnections
  • counttimeouts
  • secondstimeouts
  • maxrequests
  • countflushes
  • counthits
  • countrefusals

Now, if we are only interested in the number of keep-alive connections, we need to extract that from the answer. Nothing easier, using regular expressions. Let's do that for "countconnections". We will store the result in a variable, using shell scripting:

HTTP_KEEP_ALIVE_CONNECTIONS=`curl -s -u admin:adminadmin http://yourserver:4848/monitoring/domain/server/network/http-listener-1/keep-alive | grep countconnections | grep -o -E '"count":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

This will return the value of the keep alive connections parameter alone. If we wanted to check on the number of http connections we would use:

HTTP_CONNECTIONS_OPEN=`curl -s -u admin:adminadmin http://yourserver:4848/monitoring/domain/server/network/http-listener-1/connection-queue | grep countopenconnections | grep -o -E '"count":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

That gives you so much flexibility, doesn't it? Let's collect all this information on a time basis, using a function. We will collect statistics every x seconds, as long as the monitoring process is enabled. We enable the monitoring process by creating a temporary file called "/tmp/glassfish_stats". The idea behind this is to start monitoring when we start the load test, and to stop monitoring once the last request of the load test has been sent (at which point we remove the /tmp/glassfish_stats file, therefore stopping the monitoring process). A short invocation sketch follows after the function.

function trace_gf_statistics
{
CMD_PARAM_PROTOCOL=$1   # "http" or "https"
CMD_PARAM_SERVER=$2
CMD_PARAM_PORT=$3
CMD_PARAM_INTERVAL=$4

#Check on the temporary file. While it exists, keep collecting statistics every
#$CMD_PARAM_INTERVAL seconds; once it is removed, the loop (and the monitoring) stops.
status=`ls /tmp | grep glassfish_stats`

while [ "$status" != "" ];
do

 MONITOR_TIMESTAMP=`date +%H-%M-%S`

 #JDBC Monitoring - connections currently in use on the pool
 JDBC_CONN_USED=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/resources/mypool |
   grep numconnused | grep -o -E '"current":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 #Thread Pool Monitoring - busy worker threads
 HTTP_THREAD_POOL_THREAD_COUNT=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/network/http-listener-1/thread-pool |
   grep currentthreadsbusy | grep -o -E '"count":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 #Keep Alive Connections
 HTTP_KEEP_ALIVE_CONNECTIONS=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/network/http-listener-1/keep-alive |
   grep countconnections | grep -o -E '"count":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 #Open connections and peak queued requests
 HTTP_CONNECTIONS_OPEN=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/network/http-listener-1/connection-queue |
   grep countopenconnections | grep -o -E '"count":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 HTTP_CONNECTIONS_PEAK_QUEUED=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/network/http-listener-1/connection-queue |
   grep peakqueued | grep -o -E '"count":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 #Bean Monitoring - current and peak number of beans in the cache
 BEANMON_MYTESTBEAN_CURRENT=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/applications/myapplication/mytestbean/bean-cache |
   grep numbeansincache | grep -o -E '"current":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 BEANMON_MYTESTBEAN_PEAK=`curl -s -u admin:adminadmin \
   $CMD_PARAM_PROTOCOL://$CMD_PARAM_SERVER:$CMD_PARAM_PORT/monitoring/domain/server/applications/myapplication/mytestbean/bean-cache |
   grep numbeansincache | grep -o -E '"highwatermark":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

 #Append one ":"-delimited line per monitored resource to the export file
 echo "$MONITOR_TIMESTAMP:JDBC   - Connections used:$JDBC_CONN_USED" >> ${JMETER_RESULTS}/glassfish_stats.log
 echo "$MONITOR_TIMESTAMP:HTTP   - Thread Usage:$HTTP_THREAD_POOL_THREAD_COUNT" >> ${JMETER_RESULTS}/glassfish_stats.log
 echo "$MONITOR_TIMESTAMP:HTTP   - Keep Alive Connections:$HTTP_KEEP_ALIVE_CONNECTIONS" >> ${JMETER_RESULTS}/glassfish_stats.log
 echo "$MONITOR_TIMESTAMP:HTTP   - Open Connections:$HTTP_CONNECTIONS_OPEN" >> ${JMETER_RESULTS}/glassfish_stats.log
 echo "$MONITOR_TIMESTAMP:HTTP   - Peak Queued Connections:$HTTP_CONNECTIONS_PEAK_QUEUED" >> ${JMETER_RESULTS}/glassfish_stats.log
 echo "$MONITOR_TIMESTAMP:Beans  - Mytestbean Current:$BEANMON_MYTESTBEAN_CURRENT" >> ${JMETER_RESULTS}/glassfish_stats.log
 echo "$MONITOR_TIMESTAMP:Beans  - Mytestbean Peak:$BEANMON_MYTESTBEAN_PEAK" >> ${JMETER_RESULTS}/glassfish_stats.log

 sleep $CMD_PARAM_INTERVAL
 status=`ls /tmp | grep glassfish_stats`
done

#Hand the collected statistics over to the transformation step
cp ${JMETER_RESULTS}/glassfish_stats.log ${JMETER_TRANSFORMATION}/glassfish_stats.log
}
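
For completeness, here is a minimal sketch of how this function could be wired into a load test run. The directories, the server name and the run_jmeter_load_test.sh script are placeholders for whatever your own setup uses:

# hypothetical wrapper around a load test run - names and paths are examples only
JMETER_RESULTS=/opt/perf/results              # export directory used by the function above
JMETER_TRANSFORMATION=/opt/perf/transform     # transformation directory

touch /tmp/glassfish_stats                    # enable monitoring
trace_gf_statistics http myserver 4848 30 &   # poll every 30 seconds in the background
MONITOR_PID=$!

./run_jmeter_load_test.sh                     # placeholder for the actual JMeter load test

rm /tmp/glassfish_stats                       # last request sent - stop monitoring
wait $MONITOR_PID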

So basically, I am querying the resources every x seconds and appending them to an export file that I will use in the end for importing and transforming. As one can see, I use a ":"-delimited file, where I export the following:

  • timestamp
  • monitored resource
  • value

In the end, it looks something like this:

16-07-05:JDBC   - Connections used:0
16-07-05:HTTP   - Thread Usage:0
16-07-05:HTTP   - Keep Alive Connections:2
16-07-05:HTTP   - Open Connections:16783
16-07-05:HTTP   - Peak Queued Connections:51
16-07-05:Beans  - Mytestbean Current:0
16-07-05:Beans  - Mytestbean Peak:177

Now, all I have to do is import the results into the database and prepare the report. This is how it looks in the end:

Monitoring Glassfish JDBC - Connections used

And now bean monitoring:

Glassfish Bean Monitoring

And this is how the final report looks (sorry, I will get back with a clearer view of it; I haven't had the time to adapt this one to paper size yet, so I only have it in extended format).

Jasper Report with Glassfish Monitoring

The best part of this is that if you need to monitor a new resource, it all comes down to two things:

  1. Setting the variable and the curl request
  2. Adding the result to the export file

The structure of the database, organised as (test_id, timestamp, label, value), will take this on the fly, regardless of the number of monitored resources. You can add as many subreports as you want, monitoring practically everything that you need. The timestamps will help you check what happened at a specific point in time (for example, when the throughput decreased...)
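
As a rough sketch of that import step: the ":"-delimited log can be turned into rows matching the (test_id, timestamp, label, value) structure with a few lines of awk. The test id, file names and table layout here are only illustrative:

# turn the ":"-delimited log into a CSV ready for import (sketch; test id and paths are examples)
TEST_ID=42
awk -F':' -v test_id="$TEST_ID" '{
  gsub(/[[:space:]]+$/, "", $2)          # trim trailing spaces from the label
  print test_id "," $1 "," $2 "," $3     # test_id, timestamp, label, value
}' ${JMETER_RESULTS}/glassfish_stats.log > glassfish_stats.csv

The resulting CSV can then be loaded with your database's bulk import mechanism (LOAD DATA INFILE, COPY, or similar), keyed by the test id of the run.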

This kicks the hell out of commercial tools, doesn't it?

Good luck with setting your monitoring environment, and let me know if you encounter any problems.

cheers,

Alex
