
Performance Tuning for Apache Worker Model with Glassfish Application Server


There is a lot of material available on how to realize a two-tier architecture with the Apache Web Server in front of one or more application servers, such as Tomcat, JBoss or Glassfish.

On the other hand, although the information is out there, the number of configuration and fine-tuning possibilities is so high that it takes quite a lot of trial and error to sort, understand and implement them.

The aim of this post is to take into consideration some of the hidden, or less documented, variables when configuring the web and application server, not only for normal workload but also for extra load and even for some spikes.

Online businesses, based on highly transactional processing systems, need to be able to serve users fast, safely and reliably. It is not enough to be prepared for the normal workload. Real users will always behave differently and, with no exception, unpredictably, sometimes to a level hard to believe.

The posting deals with the following aspects:

  1. The concept of backward tuning
    This is the starting point for determining the system capacity that we need to handle with the two-tier web server – application server architecture. We will define key metrics like throughput, memory footprint, maximum number of users, etc. Although the concept is further applicable to the third-tier component, the database, that is not part of this post.
  2. Building the Apache Workload Model
  3. Apache configuration and tuning using the worker model
  4. Glassfish configuration and tuning for the Workers Model

1. BACKWARD TUNING

The concept is quite simple and straightforward:
  1. First determine how much your last-tier component – the application server – can handle, and at what cost.
  2. Build your user model based on throughput and estimations, and prevent anything more than that from entering your system.
  3. Synchronize the first-tier component – the web server – with the application server, so that only the designated number of users can use the system.
A typical strategy would look like this:
  • Start by determining the maximum capacity and throughput of your application server
  • Determine how many concurrent users you can handle on your application server, and what their memory footprint is (memory used by one user session for the duration of using your solution)
  • Determine the number of connections that you need to keep open on the web server side in order to accommodate all the users above
  • Tune the corresponding thread pools (web server and application server) accordingly, so that only a determined number of connections can be opened on the web server, and only a determined number of users can make it to the application server
  • Determine the average number of business requests a user will execute on your system
  • Tune the backlog of the web server so that you are prepared for spikes, and so that you can determine when to send surplus users to another web server or to a waiting server
  • Tune the operating system for handling the configured number of connections – network, memory, open files, etc.

By now, you should have gone through the process of load and performance testing your application. After this step, you should be in a position to know what resources you need to reach maximum throughput. You should therefore also have configured and tuned your HEAP and decided on the garbage collection strategy. These are critical aspects that are derived from testing your application in an isolated one-tier architecture, the application server alone.

These are the statistics that we need:

  • Number of transactions/second that your application server can process at a CPU load of max 80% (we need to leave the remaining 20% for workload spikes)
  • Average think time between transactions: how do users interact with your application? What is the average think time before firing the next transaction? Together with the “number of transactions / second”, this will give you the maximum number of “business users” per application server
  • Transaction Distribution Model: percentage of users executing business transactions vs “just visitors”, users just surfing your web page. How many of your visitors are just surfing, and how many are calling business requests? The distribution model will tell you how many resources you will need, both on the web server and on the application server
  • User Memory Footprint: how much heap memory does a user executing transactions (called a registered user from here on) occupy vs a user just visiting the page – in other words, business session vs anonymous session? Depending on this information, you allow a maximum number of users to enter your system
  • Transaction Model: how are the business transactions being executed, and how many HTTP connections are being opened for executing them? Is your application an interactive application or a static-resource-intensive application? Typical browsers these days try to serve the content to the users as soon as possible, therefore opening several persistent connections to the web server at once (Firefox, for example, opens between 6 and 15 connections). How many connections are you using when triggering a business request? 1, 2 or all 6 of them?
As already mentioned, the purpose is to determine the maximum throughput, and never allow more. Once you know this, you can limit the number of users entering your system directly at the web server tier.
There are two things that you do not want your users to experience:
  1. Long and very long loading and response times – nowadays any web page loading in more than 3 to 5 seconds is annoying. Dirk, an ex-colleague of mine, was telling me that an increase of 0.1 seconds in the loading time of Amazon costs a fortune…
  2. White pages and never-loading pages – this is even worse. A web page loading very slowly is still a loading web page. Yet any new user landing on your product page who gets a white page as his first experience is unlikely to come back soon.
Once again, the purpose is to limit the number of users entering the complete system, and to allow a specific number of users to wait. Take this example:
It’s 19:00 in the evening, and you have a store operating until 20:00. Your store is currently full, and you know you can only serve another 10 clients before closing the shop. You will then allow your client ticket machine to give out no more than 10 additional waiting tickets. Any other client will have to use another store, or return tomorrow.
Any workload beyond what one server can handle can be scaled out to a second server, a third server, and so on. Any additional user that cannot get a connection on the web server can either be placed in a queue (the socket backlog) or be redirected to another free server or, if there is none free, to a wait server (a sketch of this load-balancing leg follows after the list below).
We do not want any other inconsistent states. Any request can therefore:
  • be processed immediately
  • wait for a free working thread, and be processed with an acceptable delay
  • be redirected to another web server
  • be redirected to a wait server
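As a hedged illustration of the “redirected to another web server” leg, surplus traffic can be spread over several Glassfish nodes with a mod_jk load-balancer worker. The worker names, hosts and ports below are hypothetical – a minimal sketch rather than a recommended topology:

# workers.properties sketch (hypothetical hosts and names): two Glassfish
# nodes behind one load-balancer worker, so requests that cannot be served
# by one node fail over to the other
worker.list=lb
worker.node1.type=ajp13
worker.node1.host=app1.example.com
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=app2.example.com
worker.node2.port=8009
worker.lb.type=lb
worker.lb.balance_workers=node1,node2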
Let’s take an example to get a better look and understanding.
TUNING STRATEGY EXAMPLE
After examining the object map for both registered and unregistered users, we determine the following statistics:
Application Server User Memory footprint
  • memory required / registered user: 1.5 MB
  • memory required / unregistered user (simple visitor): 0.5 MB
Application Server Throughput
The system can process 600 business transactions/second with the current hardware configuration before the CPU goes beyond 80% usage and starts spending more time on context switching and internal management than on satisfying requests.
Application Server Workload model
  • A business transaction is executed by a user approximately every 3 seconds
  • The average response time for a transaction is between half a second and a second
  • Worst-case scenario (a strategy I often use): x users / (3 seconds think time + 1 second response time) = 600 transactions/second => x = 2400 concurrent “business” users (users executing business transactions)
  • Light transactions (users just visiting your web page) arrive at a rate of 30 users/second, so that after one minute you will have 60 seconds * 30 users = 1800 anonymous users
We just determined a number of users for which we want to tune our system. We have decided on:
  • 1800 anonymous users, needing 1800 * 0.5 MB = 900 MB
  • 2400 registered users, needing 2400 * 1.5 MB = 3600 MB
Application Server HEAP structuring
We just determined that we need about 5 GB of JVM heap space (4500 MB, plus overhead) for accommodating all the expected users. It is up to you to decide the allocation of memory between old generation, young generation, the edens, the tenuring distribution, the garbage collection strategy and parameters, and so on. One thing is clear: you will need at least this space in the old generation, plus at least 50% more (CMS, for example, starts collecting the old generation once it reaches approximately 68% occupancy, so you might want to allocate at least 50% more than what your users need, so that you do not end up in continuous garbage collection).
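As a minimal sketch, assuming an 8 GB heap (the ~5 GB footprint plus roughly 50% headroom) and the CMS collector mentioned above, the JVM options could look like the following. All sizes here are illustrative assumptions, not measured recommendations:

# Hypothetical JVM options for the Glassfish domain (sizes are assumptions)
-Xms8g -Xmx8g                          # fixed heap: ~5 GB of sessions + headroom
-XX:NewSize=1g -XX:MaxNewSize=1g       # young generation carved out of the heap
-XX:+UseConcMarkSweepGC                # CMS for the old generation
-XX:CMSInitiatingOccupancyFraction=68  # start collecting at ~68% old-gen occupancy
-XX:+UseCMSInitiatingOccupancyOnly     # do not let ergonomics override the threshold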
Application Server Transaction Model
We have defined two types of transactions:
  • business transactions – executing some work in the database, persisting/modifying specific information, executing application logic with a result that is to be persisted
  • light transactions – users just surfing on your page, not actually producing any workload on your database
Let us suppose that after examining your application, you know by now that:
  • business transactions are processed using one single persistent connection (in other words, a keep-alive connection). That means that several business transactions of the same user will reuse one and the same keep-alive connection (provided, of course, that keep-alive is configured and activated)
  • light transactions are processed using between 6 and 15 persistent connections (depending on the browser; Firefox, for example, uses between 6 and 15)
Knowing the required number of connections / type of transaction, we can now determine the total number of connections needed:
  • total number of connections needed: 2400 registered users * 1 persistent connection + 1800 anonymous users * 6 persistent connections = 13200 connections on the web server.
Let’s stop a moment and see what this means:
If we want to accommodate 4200 users (2400 + 1800) on the web server, and each of them would require between 1 and 6 connections, we would need between 4200 and 25200 connections, which is already a relatively high number of connections.
Yet, based on the workload model, we determined the average number of connections needed on the web server.
This is the point where we start configuring the web and application server for this workload, because Apache allocates a thread to each connection, and considers each thread to be one connection.
So let’s tune the Apache Web Server to allow about 10 000 connections!
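To keep the arithmetic honest, here is a small worksheet re-deriving the numbers above – a sketch in plain bash, with all inputs taken from the workload model of this example:

# Capacity worksheet for the example workload model (bash arithmetic)
TPS=600                  # max business transactions/second at <= 80% CPU
THINK=3; RESP=1          # think time and worst-case response time, seconds
BUSINESS=$(( TPS * (THINK + RESP) ))               # 2400 concurrent business users
ANON=$(( 30 * 60 ))                                # 1800 anonymous users after a minute
CONNS=$(( BUSINESS * 1 + ANON * 6 ))               # 13200 web server connections
HEAP_MB=$(( BUSINESS * 15 / 10 + ANON * 5 / 10 ))  # 3600 + 900 = 4500 MB heap
echo "business=$BUSINESS anonymous=$ANON connections=$CONNS heap=${HEAP_MB}MB"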

2. Building the Apache Web Server Model

I strongly recommend using the worker model, for several reasons, of which I point out the following as important:
  1. It costs less to spawn a large number of threads than to spawn new processes, as in the prefork model
  2. The memory is shared between all threads belonging to a process
  3. The number of threads can be dynamically increased using a graceful restart of the web server
  4. The number of threads used in the application server is effectively compressed, meaning that for every 2, 3 or 4 Apache threads there will be one application server thread doing the application business. The application server thread will at some point be shared between web server threads. This way you can define a larger number of Apache threads backed by a lower number of application server threads
There are certainly other aspects that are more or less important; these are just some of the reasons to choose the worker model.
In the worker model, APACHE attaches a thread to each connection. This means that one user whose browser opens 6 persistent connections will need 6 APACHE threads.
Let’s take the following example as an exercise of imagination. Suppose you configured APACHE with a maximum of 12 threads and a backlog of 6.
With each user opening 6 connections, you will be able to accommodate no more than 2 users concurrently. A third user will land in the backlog. A fourth user will receive a connect error. This would look like this:
Apache Worker Model - Request Workflow

We know that the first two users will get their connections opened and threads assigned, respectively. The questions now are:
  • how long will these users hang on to these connections?
  • what if a user sends one request and then does not send any additional request? When will the connection be closed and the thread returned to the pool?
  • what if a user opens the connection and keeps sending requests forever, keeping the connection open forever (denial of service)?
All these aspects are covered by the KeepAlive configurable parameters, like:
  • KeepAlive (do we allow persistent connections at all)
  • KeepAliveTimeout (how long should we wait on the client sending an additional request over the connection before closing the connection and returning the thread to the thread pool)
  • MaxKeepAliveRequests (how many requests is a client allowed to send over one persistent connection before the server closes the connection and returns the thread to the thread pool)
I find one factor of crucial importance: the KeepAliveTimeout. This dictates how long a persistent connection should stay exclusively open on the web server, waiting for another request to be sent over it, before the connection is terminated. But why is this so important?
Let’s assume we have a marketing campaign, which results in a workload of 30 users/second, each loading the page with 6 persistent connections (the Firefox default). Let us also assume we have configured a keep-alive timeout of 60 seconds. That means:
  • If one user sends one request over a persistent connection, and then waits 59 seconds before sending the next request, he will reuse the same connection, and no other connection (Apache thread) will be created. But:
  • If one user sends a request over a persistent connection, and then waits longer than 60 seconds before sending the next one, or worse, never sends an additional request, the connection will be kept open for no reason, and will be dropped without ever serving more than one request. Not that optimal, is it?

Using this configuration and a time-scaled representation, after 60 seconds you would have the following scenario:

30 users * 6 connections * 60 seconds = 10800 connections
Apache Connections and Keep Alive

Of course you cannot keep all your connections open forever, and you certainly cannot keep 6 connections per client open forever. This is why it is so important to determine the workload distribution model, which defines how your users use the web server resources (the number of connections).
The critical question here is: “What percentage of your users are executing business requests (1 connection), and what percentage are just surfing (6 persistent connections)?”
Starting with this, and also taking the average think time of your users into consideration, let’s try to redesign the concept above with a keep-alive timeout of 10 seconds and an average response time for loading the web page of approx. 6 seconds. We will consider that 60% of all users loading the web page will continue executing business requests, and 40% will continue just surfing.
With the 30 users workload that we defined above, we will have the following distribution model:
  • 60% (18 users) will continue just executing business requests, needing one single persistent connection / user
  • 40% (12 users) will continue just surfing, needing 6 persistent connections / user
This means that, after loading the page, 60% of the users will continue on one single persistent connection. But hey, what happens to the other 5 persistent connections?
Well, since no additional request is being sent over them, they will be dropped after the keep-alive timeout interval, which in this case is set to 10 seconds. So, after loading the web page (6 seconds), and after expiry of the keep-alive timeout (10 seconds after the last request), the connections will be dropped:
6 seconds loading time + 10 seconds keep-alive timeout = 16 seconds
That means that the first connections to be dropped are the ones at second 16. From that point on, in the workload model defined above (30 new users per second), we will have a number of dropped connections per second of:
5 connections * 18 users = 90 connections.
This is how it will look in the same timescale:
Apache Connections Keep Alive

Compared to the first model, where we had to maintain 10800 connections, you now only have to maintain 6840 connections!
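The 6840 figure can be re-derived with a few lines of bash: at any moment, the 16 youngest one-second cohorts (6 s load time plus 10 s keep-alive) still hold all 6 connections per user, while older cohorts are down to 1 connection for the 18 business users and 6 for each of the 12 surfers. A sketch under those assumptions:

# Steady-state connection count after 60 seconds of 30 new users/second
TOTAL=0
for AGE in $(seq 0 59); do
  if [ $AGE -lt 16 ]; then
    TOTAL=$(( TOTAL + 30 * 6 ))          # cohort still within 6s load + 10s keep-alive
  else
    TOTAL=$(( TOTAL + 18 * 1 + 12 * 6 )) # business users down to 1 conn, surfers keep 6
  fi
done
echo "open connections after 60s: $TOTAL"   # prints 6840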

But one target was also to be prepared for spikes. What if we have more than 30 users/second? What do we do with them, how do we react, and how do we prevent the scenarios described in the beginning (white page, long waiting times, etc.)?

Well, since we know we can accommodate the 1800 users using 6840 connections, we can prepare for an extra 50% workload. That would mean we would get to about 10000 connections (6840 + 3420 = 10260). I will just use 10000 for simplicity.

Right. But… what will happen with connection request 10001?

Since we know that we have a loading time of about 6 seconds, no user should wait longer than 3 seconds to get a connection (so that the total response time still falls under 10 seconds).

With a workload of 30 users/second, we can accommodate:

(30 users * 6 connections * 3 seconds) = 540 requests in 3 seconds

But we just spoke of spikes, so let us increase the backlog by 50% to 810 connections.

That means:

  • The first 10000 connection requests will get their threads
  • requests 10001 through 10810 will wait in the backlog for a free connection
  • request 10811 will be sent either to a wait server or to another scaled web server
This should all look as follows:
Tuning Apache Workers for Load - The connection and backlog model

Now that we decided on what we want to handle, let’s get our servers ready for this.

3. Configuring and tuning APACHE using Workers for performance and high number of connections

We need to take care both of the APACHE configuration and of the configuration of the operating system where Apache resides.

APACHE WebServer Configuration

We are ready to configure APACHE for 10000 connections. Since it is more efficient to spawn threads than processes, let’s use a higher number of threads per process.

This is how a configuration could look. With this configuration we will:

  • start with 25 server processes, each spawning 200 threads, resulting in 5000 threads
  • use about 50 KB per thread, resulting in about 500 MB of reserved memory at the full 10000 threads
  • expand to a maximum of 50 server processes, each spawning 200 threads, resulting in 10000 threads. This is also the value used by APACHE as MaxClients (maximum number of connections)
  • start shrinking the number of threads once the load has decreased, allowing at most 7500 idle threads (MaxSpareThreads) while keeping at least 1000 spare threads (MinSpareThreads)
  • recycle each server process after it has served its configured number of connections (MaxRequestsPerChild, set to 150 below – see point 8 further down). This prevents nasty users from holding on to a process forever, and also limits memory leaks, since the process is destroyed once the limit is reached
<IfModule worker.c>
# initial number of server processes to start
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#startservers
StartServers        25
# highest possible MaxClients setting for the lifetime of the Apache process.
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#serverlimit
ServerLimit         50
# minimum number of worker threads which are kept spare
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#minsparethreads
MinSpareThreads     1000
# maximum number of worker threads which are kept spare
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxsparethreads
MaxSpareThreads    7500
# upper limit on the configurable number of threads per child process
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#threadlimit
ThreadLimit        200
# maximum number of simultaneous client connections
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
MaxClients         10000
# number of worker threads created by each child process
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#threadsperchild
ThreadsPerChild     200
# maximum number of requests a server process serves
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxrequestsperchild
MaxRequestsPerChild  150
</IfModule>
KeepAlive On
MaxKeepAliveRequests 1000
KeepAliveTimeout 10
ListenBacklog 810
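After editing the configuration, it is worth validating it and applying it without dropping live connections; the standard apachectl invocations cover both:

# Check the configuration syntax, then apply it via a graceful restart
apachectl -t && apachectl -k graceful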

A couple of things to consider:

        1. MaxClients: If the configured MaxClients is higher than ServerLimit * ThreadsPerChild, Apache will automatically reduce MaxClients to the value of ServerLimit * ThreadsPerChild.
          In this case, you will receive a message like the following:

          WARNING: MaxClients of 10000 would require 200 servers,
           and would exceed the ServerLimit value of 100.
           Automatically lowering MaxClients to 5000.  To increase,
           please see the ServerLimit directive.
        2. Apache Memory / Process and Threads: Check if you have enough memory to handle the configured number of threads. Use the following scripts to monitor the Apache Servers, the number of threads, and the memory reserved:
          • The formula used for sizing your maximum number of clients is:

            MaxClients = RAM available for Apache / RAM per process
          • List all Apache processes and number of threads per process

            for pid in `ps U wwwrun | grep httpd | grep -v grep | awk '{ print $1 }'`;
            do echo Apache Worker Server $pid has `ps ms -p $pid | wc -l` threads;
            done

            This will output:

            Apache Worker Server 7528 has 3 threads
            Apache Worker Server 7583 has 3 threads
            Apache Worker Server 7587 has 3 threads
            Apache Worker Server 7596 has 204 threads
            Apache Worker Server 7601 has 204 threads
            Apache Worker Server 7610 has 204 threads
            Apache Worker Server 7618 has 204 threads
            Apache Worker Server 7628 has 204 threads
            Apache Worker Server 7640 has 204 threads
            Apache Worker Server 7651 has 204 threads
            Apache Worker Server 7668 has 204 threads
            ............
          • List the reserved memory per APACHE Server and threads

            servers=0;threads=0;space=0;
            for pid in `ps U wwwrun | grep httpd | grep -v grep | awk '{ print $1 }'`;
            do process_threads=`ps ms -p $pid | wc -l`;
            process_memory=`ps -yl -p $pid | grep -v PID | awk '{ print $8}'`;
            echo Apache Worker Server with pid $pid has $process_threads threads occupying $process_memory KB;
            servers=`expr $servers + 1`;threads=`expr $process_threads + $threads`;space=`expr $process_memory + $space`; done;
            echo "-------------"; echo Total Apache Servers: $servers \| Total threads: $threads \| Total memory reserved: $space KB

            This will output:

            .......
            Apache Worker Server with pid 1733 has 204 threads occupying 34644 KB
            Apache Worker Server with pid 1759 has 204 threads occupying 34612 KB
            Apache Worker Server with pid 1788 has 204 threads occupying 34620 KB
            Apache Worker Server with pid 1816 has 204 threads occupying 34648 KB
            Apache Worker Server with pid 1844 has 204 threads occupying 34648 KB
            Apache Worker Server with pid 1875 has 204 threads occupying 34648 KB
            Apache Worker Server with pid 1907 has 204 threads occupying 34684 KB
            Apache Worker Server with pid 1940 has 204 threads occupying 34676 KB
            -------------
            Total Apache Servers: 26 | Total threads: 5103 | Total memory reserved: 885272 KB
        3. ThreadsPerChild vs ThreadLimit: There is a difference between ThreadsPerChild and ThreadLimit. ThreadsPerChild defines the initial number of threads spawned per worker process. Defining a larger number as ThreadLimit allows you to modify ThreadsPerChild up to ThreadLimit without needing a hard restart of Apache. A graceful restart will suffice. For example:
          ThreadsPerChild 100
          ThreadLimit 500

          This will start the Apache processes with 100 threads each. If you feel that your web server cannot handle the current load with the current configuration, you just need to increase ThreadsPerChild (in this case to a maximum of 500) and execute a graceful restart:

          ThreadsPerChild 300
          ThreadLimit 500

          Yet, it is very important to remember that:

          1. ThreadLimit will allocate memory in advance, so check in advance whether you can handle ServerLimit * ThreadLimit with your current RAM
          2. If you increase ThreadsPerChild to a value for which APACHE considers it cannot allocate the needed memory, APACHE will exit!
        4. ServerLimit vs ThreadLimit: You cannot increase the maximum number of server processes (ServerLimit) without a hard restart of APACHE, so take this into consideration when planning the baseline configuration. If you want to be prepared for a higher load, configure ThreadLimit higher, which will allow you to increase the capacity of your server with a graceful restart.
        5. Number of Apache threads spawned: Make sure that the number of threads spawned is the one you expect. Use the following to check the number of threads spawned (“Sl” stands for interruptible sleep, multithreaded):
          ps axsm | grep -c 'Sl'
        6. Denial of Service and Keep-Alive parameters: Be prepared for Denial of Service attacks. Do not allow any user to hold on to a persistent connection forever. Decide on the average number of transactions that a user will execute while using your system, and add it as a parameter:
          KeepAlive On
          MaxKeepAliveRequests 1000
          KeepAliveTimeout 10
        7. Backlog for extra spikes: Be prepared for those extra spikes, where all your threads are busy and you need a “waiting room” for your users. Use the ListenBacklog directive, which we configured to 810:
          ListenBacklog 810
        8. ONE OF THE MOST IMPORTANT:
          The only way you can control the life of an APACHE server process, and its children, is by using these two parameters. Set them too high, and the processes will never die; the number of threads in your application server will therefore increase continuously, eventually reaching the maximum configured number of threads, at which point it will stop servicing requests.
          Now, there are two parameters that control the behaviour of the Apache servers and their children, and therefore the behaviour of the thread pool on the application server side. MaxRequestsPerChild refers to the maximum number of NEW requests (meaning new clients, each opening a completely new keep-alive connection on this child) that can be served by one Apache child. If, for example, one client opened a connection and sent 500 requests, one after the other, over the same keep-alive connection, this counter would only advance by 1 (only the first request over a new keep-alive connection is counted). On the other hand, you do not want one client to hold on to his connection forever (remember the DoS), so you control that by setting the maximum total number of requests a client can send over a SINGLE keep-alive connection (MaxKeepAliveRequests). When either of the two limits is reached, the client is assigned a new child thread, and the thread that served him is marked as “dead”.

          MaxRequestsPerChild  150
          MaxKeepAliveRequests 1000
          
          
The behaviour that you expect on the Glassfish side is an up/down one, where threads are being dropped and created over time. Without setting those values, you would see an increasing trend, up to the configured maximum number of threads.
This is how the Glassfish thread pool looked after implementing this, monitored over a period of 1 day:
Glassfish MOD JK Thread Pool Monitoring 24 hours
And this is how the Glassfish thread pool looked, monitored over a period of 5 days:
Glassfish Mod JK Thread Pool Monitoring - 5 days

APACHE Operating System Configuration

Apache Tuning – Max Open Files

Since every socket in Unix is actually a file, we need to tune the maximum allowed number of open files.

This specifies the number of open files that are supported.
The default setting is typically sufficient for most applications. 
If the value set for this parameter is too low, a file open error,
memory allocation failure, or connection establishment error might
be displayed.

More on this here: IBM Websphere Linux Tuning
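A hedged sketch of how to inspect and raise these limits on Linux follows; the user name (wwwrun) and the value 16384 are illustrative assumptions, to be sized against your MaxClients:

# Per-process limit for the shell that starts Apache
ulimit -n              # show the current limit
ulimit -n 16384        # raise it for this shell (value is illustrative)
# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max
# Persist a per-user limit across logins (user and values are illustrative)
echo "wwwrun soft nofile 16384" >> /etc/security/limits.conf
echo "wwwrun hard nofile 16384" >> /etc/security/limits.conf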

Apache Tuning – TCP Settings

Keep in mind the seven-layer architecture (also known as the OSI model). Both Glassfish and Apache operate at the transport layer, which sits on top of the network layer.

OSI Model - Seven layer architecture

Image courtesy: Novell - http://www.novell.com/info/primer/prim05.html

You can configure and limit GLASSFISH and APACHE resources as long as you do not exceed the network layer settings. That is why we need to take those settings into consideration as well, and tune them accordingly, so that we can plug in Glassfish or Apache without exceeding any configured resource.

Apache Tuning – Backlog and Maximum Connections

Change the following parameters when you prepare for a high rate of incoming connections. Keep in mind that these settings are shared between all Apache servers on the host, so if you want to host several web servers, you need to be prepared to accommodate the connections for all of them (a sysctl sketch follows after this list):

    • Packet backlog on the network device: cat /proc/sys/net/core/netdev_max_backlog – the default is 1000; this caps how many incoming packets the kernel queues when they arrive faster than it can process them, so raise it when you expect bursts of incoming connections
    • Maximum length of the listen queue (a count of pending connections, not bytes) for accepting new TCP connections: cat /proc/sys/net/core/somaxconn – the default is 128; since this caps the backlog an application can request with listen(), it must be at least as large as the ListenBacklog of 810 configured above
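A short sketch of checking and raising both values with sysctl; the concrete numbers are assumptions sized to the 810-entry backlog of this example:

# Inspect the current values
sysctl net.core.netdev_max_backlog net.core.somaxconn
# Raise them at runtime (values are illustrative)
sysctl -w net.core.netdev_max_backlog=2000
sysctl -w net.core.somaxconn=1024
# Persist across reboots
echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf
echo "net.core.netdev_max_backlog = 2000" >> /etc/sysctl.conf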
Apache Tuning – Operating System – Keep Alive Settings

Although Unix has built-in support for keep-alive, this is not the default behavior in Linux: programs must request keepalive control for their sockets using the setsockopt interface (see http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html). There are three “tunable” parameters:

tcp_keepalive_time
the interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further
tcp_keepalive_intvl
the interval between subsequent keepalive probes, regardless of what the connection has exchanged in the meantime
tcp_keepalive_probes
the number of unacknowledged probes to send before considering the connection dead and notifying the application layer

As long as the keep-alive settings here are high enough (higher than those configured in your Apache server), you should not worry, since Apache will take care of managing the connections. Just make sure that these settings are higher than the ones configured in your web and application server. You can check all three by issuing the following commands (I have also listed the default values for SUSE Linux Enterprise Server 11):

  • cat /proc/sys/net/ipv4/tcp_keepalive_intvl  : 75
  • cat /proc/sys/net/ipv4/tcp_keepalive_probes: 9
  • cat /proc/sys/net/ipv4/tcp_keepalive_time: 7200 (seconds)
Apache Tuning – KEEP ALIVE SETTINGS

As defined above, these are the three configuration parameters you have to take care of when configuring for keep-alive connections (repeated from chapter 2):

    • KeepAlive (do we allow persistent connections at all)
    • KeepAliveTimeout (how long should we wait on the client sending an additional request over the connection before closing the connection and returning the thread to the thread pool)
    • MaxKeepAliveRequests (how many requests is a client allowed to send over one persistent connection before the server closes the connection and returns the thread to the thread pool)
Apache Tuning – Operating System – CONNECTION MANAGEMENT

tcp_fin_timeout:

This basically holds a connection in the TIME_WAIT state (after the keep-alive timeout has expired), waiting for the same client to reinitiate the communication; the assumption is that it is cheaper to reactivate a sleeping connection than to build a new one. Keeping a connection open after the user has stopped sending requests within the defined KeepAliveTimeout may be expensive, however, since other users may well be sitting in the backlog, waiting for exactly that one connection to be freed. If this is the case, you should set this value as low as possible, so that after a small timeout the connection can finally be destroyed and rebuilt, with another id, for another user.

“This determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. This interval between closure and release is known as the TIME_WAIT state or twice the maximum segment lifetime (2MSL) state. During this time, reopening the connection to the client and server cost less than establishing a new connection. By reducing the value of this entry, TCP/IP can release closed connections faster, providing more resources for new connections. Adjust this parameter if the running application requires rapid release, the creation of new connections, and a low throughput due to many connections sitting in the TIME_WAIT state.”

More on this here: IBM WebSphere Tuning

A simple way to see how many connections you have in TIME_WAIT is using netstat:

  • watch --interval=1 "netstat | grep -c TIME_WAIT"

You can view and set this parameter as follows:

  • View: cat /proc/sys/net/ipv4/tcp_fin_timeout : 60 (default on SUSE Linux Enterprise Server 11) 
  • Set: echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout
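Note that echoing into /proc only lasts until the next reboot; to make the setting stick, the usual sysctl.conf route is sketched here:

# Apply at runtime (equivalent to the echo above)
sysctl -w net.ipv4.tcp_fin_timeout=10
# Persist across reboots
echo "net.ipv4.tcp_fin_timeout = 10" >> /etc/sysctl.conf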
Apache Tuning – Compression

Most modern browsers can handle compressed content, decompressing it upon receipt. It is a great idea to compress content; it will save you a lot of bandwidth. Of course, this comes at a price: CPU is needed for compressing the files. Since you do not want your application server to spend resources on compression, leave it to the web server. And since Apache has caching mechanisms, this should not be a problem: the file will be zipped once, and then saved to and served from the cache.

Most compression algorithms, when applied to a plain-text file, can reduce its size by 70% or more, depending on the content in the file. When using compression algorithms, the difference between standard and maximum compression levels is small, especially when you consider the extra CPU time necessary to process these extra compression passes. This is quite important when dynamically compressing Web content. Most software content compression techniques use a compression level of 6 (out of 9 levels) to conserve CPU cycles. The file size difference between level 6 and level 9 is usually so small as to be not worth the extra time involved.

http://www.linuxjournal.com/article/6802

There are two modules for using compression in Apache:

    • mod_gzip
    • mod_deflate
Just be aware of the compression directives:
    • type of files you need to compress (it does not make sense to compress already compressed files like pdf, jpg, etc.): AddOutputFilterByType DEFLATE text/html text/plain text/xml (a fuller mod_deflate sketch follows below)
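A hedged mod_deflate configuration along those lines could look as follows; the exclusion pattern and the compression level are illustrative choices, not the only reasonable ones:

<IfModule mod_deflate.c>
# compress only text-like content
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript
# moderate compression level, per the level-6 discussion above
DeflateCompressionLevel 6
# skip already-compressed formats (illustrative pattern, needs mod_setenvif)
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|pdf|zip|gz)$ no-gzip
</IfModule>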
Apache Tuning – Caching

Related to the point above, you can benefit from both compressing and caching content. You can use either a disk-based cache store manager or a faster, memory-based store manager.

mod_cache implements an RFC 2616 compliant HTTP content cache that can be used to cache either local or proxied content. mod_cache requires the services of one or more storage management modules. Two storage management modules are included in the base Apache distribution:
mod_disk_cache

implements a disk based storage manager.

mod_mem_cache

implements a memory based storage manager. mod_mem_cache can be configured to operate in two modes: caching open file descriptors or caching objects in heap storage. mod_mem_cache can be used to cache locally generated content or to cache backend server content for mod_proxy when configured using ProxyPass (aka reverse proxy)

More on this here: http://httpd.apache.org/docs/2.2/mod/mod_cache.html
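As a minimal sketch, a disk-backed cache for Apache 2.2 could be enabled like this; the cache root path and the directory parameters are assumptions:

<IfModule mod_cache.c>
<IfModule mod_disk_cache.c>
# disk-backed store (path is illustrative)
CacheRoot /var/cache/apache2/disk_cache
CacheEnable disk /
CacheDirLevels 2
CacheDirLength 1
</IfModule>
</IfModule>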

4. Configuring and tuning the Glassfish Application Server to work with the Workers Model

One thing you need to understand is that in the worker model, APACHE controls the lifecycle of the working threads in Glassfish. Therefore, we need to configure Glassfish knowing that APACHE takes care of things like:

    • thread removal
    • keep alive timeouts
    • maximum number of requests / connection
    • compression
    • etc

Most UNIX distributions come with an out-of-the-box configuration that is prepared for normal workload, but definitely not for performance. We need to be aware of the network, memory and other system resource settings that we have to reconfigure in order to prepare for performance.

Glassfish Tuning

Glassfish Tuning – Max Open Files

Once again, since every socket in Unix is actually a file, we need to tune the maximum allowed number of open files.

You can of course set it to unlimited, but since we spoke of an APACHE thread pool -> Glassfish thread pool compression, I would set this to half the number of threads Apache can spawn, plus an extra 5000 for all other processes:

ulimit -n 10000

Yet be aware that if you run multiple Glassfish application server domains, they will all share the number of open files. Keep this in mind when calculating the number of possible open files per server.

Glassfish Tuning – Max Threads

Since your system can perform up to about 600 transactions/second, you should configure your maximum thread pool according to this number. Allow about 50% more threads for those extra spikes, even if that comes with longer response times.

I would therefore set the maximum thread pool size to about (rounded) 1000:

<thread-pool max-thread-pool-size="1024" name="http-thread-pool" />
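Instead of editing domain.xml by hand, the same attribute can usually be set with the asadmin CLI; the dotted attribute path below is my assumption for a GlassFish v3 default domain, so verify it with asadmin get first:

# dotted name assumed; check with: asadmin get "server.thread-pools.*"
asadmin get server.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size
asadmin set server.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=1024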

Glassfish Tuning – TCP Settings

Keep in mind the seven-layer architecture (also known as the OSI model). Both Glassfish and Apache operate at the transport layer, which sits on top of the network layer (see Apache Tuning – TCP Settings in chapter 3 above).

You can configure and limit GLASSFISH and APACHE resources as long as you do not exceed the network layer settings. That is why we need to take those settings into consideration as well, and tune them accordingly, so that we can plug in Glassfish or Apache without exceeding any configured resource.

Glassfish Tuning – Backlog and Maximum Connections

Change the following parameters when you prepare for a high rate of incoming connections. Keep in mind that these settings are shared between all Glassfish and Apache servers on the host:

  • Packet backlog on the network device:
    /proc/sys/net/core/netdev_max_backlog

    The default is 1000; modify this according to the maximum expected number of connections that you want to accommodate in your backlog (keep them waiting for a connection). Remember that these connections are the ones opened by APACHE, so you may want to tune this according to the maximum number of connections you will allow Apache to open towards the application server
  • Maximum length of the listen queue (a count of pending connections, not bytes) for accepting new TCP connections:
    /proc/sys/net/core/somaxconn
    The default is 128; modify this according to the size of the backlog you need for the maximum expected number of connections that you want to accommodate
Glassfish Tuning – Operating System – Keep Alive Settings

Although Unix has built-in support for keep-alive, this is not the default behavior in Linux: programs must request keepalive control for their sockets using the setsockopt interface.

http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html

There are three “tunable” parameters:

tcp_keepalive_time
the interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further
tcp_keepalive_intvl
the interval between subsequent keepalive probes, regardless of what the connection has exchanged in the meantime
tcp_keepalive_probes
the number of unacknowledged probes to send before considering the connection dead and notifying the application layer

As long as the keep-alive settings here are high enough (higher than those configured in your Glassfish or Apache server), you should not worry, since Apache or Glassfish will take care of managing the connections. Just make sure that these settings are higher than the ones configured in your web and application server.

You can check all three by issuing the following command:

  • cat /proc/sys/net/ipv4/tcp_keepalive_intvl
  • cat /proc/sys/net/ipv4/tcp_keepalive_probes
  • cat /proc/sys/net/ipv4/tcp_keepalive_time
Glassfish Tuning – KEEP ALIVE SETTINGS

As mentioned above, Apache and Glassfish regulate this and other options at the protocol level. Interesting at this point are:

  • timeout-seconds: the time that GLASSFISH waits for a new request before closing the connection. Since the internal thread behavior will be controlled by APACHE (it is APACHE’s thread, after all), you should set it to unlimited, “-1” (the default is 30)
  • max-connections: the maximum number of requests a client can send over a connection before GLASSFISH closes the connection. Since the internal thread behavior will be controlled by APACHE, you should set it to unlimited, “-1”, and let APACHE clean up the opened threads (the ones opened in Glassfish) and connections by itself (the default is 256)
<protocol name="http-protocol">
<http xpowered-by="false" timeout-seconds="-1" max-connections="-1"
default-virtual-server="server" compressable-mime-type="text/html,
text/xml,text/plain,text/javascript,text/css" compression="on"
server-name="">
<file-cache enabled="false" />
</http>
</protocol>
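Again, the same two attributes can presumably be set via asadmin instead of editing domain.xml; the dotted paths below are my assumption for a v3 domain, to be verified with asadmin get:

# dotted names assumed; verify with: asadmin get "server.network-config.protocols.*"
asadmin set server.network-config.protocols.protocol.http-protocol.http.timeout-seconds=-1
asadmin set server.network-config.protocols.protocol.http-protocol.http.max-connections=-1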
Glassfish Tuning – Operating System – CONNECTION MANAGEMENT

tcp_fin_timeout: 

If maintaining a connection is expensive for you, you should reconfigure this parameter. It basically holds a connection in the TIME_WAIT state (after the keep-alive timeout has expired), waiting for the same client to reinitiate the communication; the assumption is that it is cheaper to reactivate a sleeping connection than to build a new one.

A simple way to see how many connections you have in TIME_WAIT is using netstat:

watch --interval=1 "netstat | grep -c TIME_WAIT"

“This determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. This interval between closure and release is known as the TIME_WAIT state or twice the maximum segment lifetime (2MSL) state. During this time, reopening the connection to the client and server cost less than establishing a new connection. By reducing the value of this entry, TCP/IP can release closed connections faster, providing more resources for new connections. Adjust this parameter if the running application requires rapid release, the creation of new connections, and a low throughput due to many connections sitting in the TIME_WAIT state.”
More on this here:

IBM WebSphere Tuning

You can view and set this parameter as follows:

View: cat /proc/sys/net/ipv4/tcp_fin_timeout

Set: echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout

Glassfish Tuning – Compression

Most modern browsers can handle compressed content, decompressing it upon receiving it. It is a great idea to compress content, which will save you a lot of bandwidth.

Of course, this comes with a price – CPU is needed for compressing the files. Since you do not want your application server to use resources for compression, leave it to the web server. I have detailed compression in the web server in chapter 3, Configuring Apache.

If you still think that compression on the Glassfish side is a good idea, or you just want to play with it, these are the parameters you can change:

compression-min-size-bytes – defines the minimum size of files to which compression will be applied. Everything equal to or greater than this will be compressed and sent compressed to the client.

compressable-mime-type – defines which MIME types will be taken into consideration for compression

compression – sets the compression on or off

<protocol name="http-protocol">
<http ...... compression-min-size-bytes="4096" compressable-mime-type="text/html,
text/xml,text/plain,text/javascript,text/css" compression="on" server-name="">
<file-cache enabled="false" />
</http>
</protocol>
Glassfish Tuning – Chunking

No point reinventing the wheel… :) I will just quote Wikipedia on that:

“Chunked transfer encoding is a data transfer mechanism in version 1.1 of the Hypertext Transfer Protocol (HTTP) in which a web server serves content in a series of chunks. It uses the Transfer-Encoding HTTP response header in place of the Content-Length header, which the protocol would otherwise require. Because the Content-Length header is not used, the server does not need to know the length of the content before it starts transmitting a response to the client (usually a web browser). Web servers can begin transmitting responses with dynamically-generated content before knowing the total size of that content.

The size of each chunk is sent right before the chunk itself so that a client can tell when it has finished receiving data for that chunk. The data transfer is terminated by a final chunk of length zero.

The introduction of chunked encoding into HTTP 1.1 provided a number of benefits:

        • Chunked transfer encoding allows a server to maintain a HTTP persistent connection for dynamically generated content. Normally, persistent connections require the server to send a Content-Length field in the header before starting to send the entity body, but for dynamically generated content this is usually not known before the content is created.[1]
        • Chunked encoding allows the sender to send additional header fields after the message body. This is important in cases where values of a field cannot be known until the content has been produced such as when the content of the message must be digitally signed. Without chunked encoding, the sender would have to buffer the content until it was complete in order to calculate a field value and send it before the content.
        • HTTP servers sometimes use compression (gzip) or deflate methods to optimize transmission. Chunked transfer encoding can be used to delimit parts of the compressed object. In this case the chunks are not individually compressed. Instead, the complete payload is compressed and the output of the compression process is chunk encoded. In the case of compression, chunked encoding has the benefit that the compression can be performed on the fly while the data is delivered, as opposed to completing the compression process beforehand to determine the final size.”

Why disable a great feature?  Chunking comes enabled with Glassfish. Leave it as is.

Logical renaming of thread pools for better resource management and monitoring

The default configuration of Glassfish comes with three configured listeners:

  • http-listener-1 – for http requests
  • http-listener-2 – for https secure requests
  • admin-listener – for admin purposes (opening the web administration console)

The problem is that all of these listeners are configured to work on a single thread pool, http-thread-pool:

 <network-listener port="8080" protocol="http-listener-1" transport="tcp"
name="http-listener-1" thread-pool="http-thread-pool" />
 <network-listener port="8181" protocol="http-listener-2" transport="tcp"
name="http-listener-2" thread-pool="http-thread-pool" />
 <network-listener port="4848" protocol="admin-listener" transport="tcp"
name="admin-listener" thread-pool="http-thread-pool" />

This is not that optimal… If you find yourself in the position that all threads of the shared pool are busy, you cannot even log in to your administration console. I do not agree with this “shared thread pool” model, so I recommend the logical organization and splitting of listeners, thread pools and protocols. This also enables better monitoring of the threads, since each pool will appear under its own name, as in the example below:

And this is how the configuration would look like in the domain.xml file:

<network-config>
<protocols>
<protocol name="http-protocol">
<http xpowered-by="false" max-connections="-1" default-virtual-server="server" compressable-mime-type="text/html,text/xml,text/plain,text/javascript,text/css" server-name="">
<file-cache enabled="false" />
</http>
</protocol>
<protocol security-enabled="true" name="secure-protocol">
<http xpowered-by="false" default-virtual-server="server" compressable-mime-type="text/html,text/xml,text/plain,text/javascript,text/css" server-name="">
<file-cache enabled="false" />
</http>
<ssl ssl3-enabled="false" cert-nickname="s1as" />
</protocol>
<protocol name="admin-protocol">
<http default-virtual-server="__asadmin" server-name="">
<file-cache enabled="false" />
</http>
</protocol>
<protocol name="jk-protocol">
<http xpowered-by="false" max-connections="-1" default-virtual-server="server" compressable-mime-type="text/html,text/xml,text/plain,text/javascript,text/css" server-name="">
<file-cache enabled="false" />
</http>
</protocol>
</protocols>
<network-listeners>
<network-listener port="8080" protocol="http-protocol" transport="tcp" name="http-listener" thread-pool="http-thread-pool" />
<network-listener port="8081" protocol="secure-protocol" transport="tcp" name="secure-listener" thread-pool="secure-thread-pool" />
<network-listener port="4848" protocol="admin-protocol" transport="tcp" name="admin-listener" thread-pool="admin-thread-pool" />
<network-listener port="8009" protocol="jk-protocol" transport="tcp" name="jk-main-listener-1" jk-enabled="true" thread-pool="jk-main-thread-pool-1" />
</network-listeners>
<transports>
<transport name="tcp" />
</transports>
</network-config>
<thread-pools>
<thread-pool max-thread-pool-size="10" name="http-thread-pool" />
<thread-pool max-thread-pool-size="10" name="admin-thread-pool" />
<thread-pool max-thread-pool-size="50" name="secure-thread-pool" />
<thread-pool max-thread-pool-size="1024" name="jk-main-thread-pool-1" />
<thread-pool max-thread-pool-size="200" name="thread-pool-1" />
</thread-pools>

Conclusion

There are of course a lot of other options and “tunable” parameters. I have not tried to cover all of them, since this would be beyond my power, knowledge and, above all, time. It is probably the blog post on which I have worked the most, so I would be delighted if some of you use, try or implement it.

I would also be amazingly happy to receive remarks, critiques or comments, since this is the result of a lot of brainstorming and trial-and-error experiments.

I will update this blog post as I go along and find new and interesting things that have to be taken care of when configuring Apache and Glassfish for performance.

Thanks to Niels and Mario for their creativity, flexibility and agility. Together we are strong! :)

All best,

Alex

  1. October 7, 2011 at 5:22 pm

    Excellent material =) very useful

  2. adalfaandrea
    February 5, 2012 at 10:42 am

    Impressive post, it would be my reference for the future apache/glassfish implementation (currently i’ve made just one ).
    Just a few questions:
    did you choose the worker mpm over event mpm for some particular reasons (stability or better performance)
    Maybe i’ve missed: are there any particular differences in using mod_jk or the reverse proxy capability?

    thanks

  3. February 9, 2012 at 11:50 am

    Hi Andrea,
    thanks for the comment. There are not that many that can read this to the end, i think:) I am glad that you can use the information provided here, and hope it will work for you. Please let me know if i can further help.
    Event mpm tries indeed to solve the KA problem, but is currently in experimental mode. As i cannot afford any “experiments”, i went on with the worker configuration.

    mod_jk is known to be very stable, and under further developing by the tomcat community. its configuration is easy to maintain. It seems to be the first (proven) choice for highly available/ under load systems…so i’d say community experience was a big point for mod_jk.

  4. Frederik
    April 27, 2012 at 11:21 pm

    Hi Alexandru,
    thanks for this impressive post!
    I found this post while researching on some issues i hit while performance tuning a web application at a customer.

    Seems you’ve touched nearly everything i was / am hitting in a similar architecture and the most difficult part is debugging interactions between various layers, browser / network / system’s tcp stack / protocols / apache / application server.

    As for things i noticed in your post, not sure if it’s a good idea for a single instance of Apache to go up to more than a K threads in a single machine, at least one with less than 16 CPU’s. At least based on what i know CPU scheduling works and old UNIX legends. What about context switching?

    Things that i noticed (similar to your research):

    putting mod_deflate made an impressive performance boost in my case, without noticeable CPU performance degradation.

    This made me look at the network part (deflate lowered the effect of TCP slow start was my thinking), so i went on checking HTTP Keepalive settings. Your tuning math is much more involved, i would look better at it and try to check it with real data if / when i can, mine was a simple checking of main time of opening the most used pages and using it as KeepAliveTimeout (MaxKeepalive left at it’s default, 100). Actually it made a good performance boost.

    These days I’m researching on the “spoon feeding” side of web serving, Apache tuning and TCP stack buffer sizes and tuning, trying to get significant monitoring data for making decisions for further tuning.

    On the network side, do you have any ideas about the “spoon feeding” effect? For now i’m just getting the input buffer / out buffer with a netstat | awk simple script. From that i will try to infer what buffer sizing to do.
    And, thinking about bufferbloat, how can it interact with this buffering? For what (little) i know about it, modifying buffers in the endpoints of a connections does little, but who knows.

    And a suggestion,
    mod_expire / mod_cache really helped in my case. In this cases too i’m trying to find the best metrics for fine tuning the configuration, suggestions welcome :)

    Anyway,
    thanks for sharing your post!

  5. June 22, 2012 at 3:09 pm

    Hi Alexandru, I must say I am really indebted to you for this article. You have cleared all the questions I have ever had. Thanks a zillion!

