
15.7 Tuning JMS

Messaging is an important feature for many large J2EE applications. Tuning JMS is important but relatively straightforward, so I cover it here rather than dedicating an entire chapter to JMS. For the full details on JMS, I recommend Java Message Service by Richard Monson-Haefel and David A. Chappell (O'Reilly).

Remember the following points to ensure optimal JMS performance:

  • Close resources (e.g., connections, sessions, producers, and consumers) when you finish with them; the sketch following this list shows one way to do this.

  • Start the consumer before the producer so that the initial messages do not have to queue up while waiting for a consumer.

  • Nontransactional sessions are faster than transactional ones. If you use transactional sessions, try to separate out the messages that do not need transactions and send them through nontransactional sessions.

  • Nonpersistent messages are faster than persistent messages.

  • Longer messages take longer to deliver and process. You could compress message bodies or eliminate nonessential content to keep the size down.

  • Specify the redelivery count to avoid messages being redelivered indefinitely. A higher redelivery delay and a lower redelivery limit reduce overhead.

  • Set the Delivery TimeToLive value as low as is feasible (the default is for messages to never expire).

  • A smaller Delivery capacity increases message throughput. Since fewer messages can sit in the Delivery queue, they have to be moved along more quickly. However, if the capacity is too small, efficiency is reduced because producers have to delay sending messages until the Delivery queue has the spare capacity to accept them.
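
Several of these points can be combined in a single producer. The following sketch is only an illustration: the class name and the JNDI names jms/ConnectionFactory and jms/TuningQueue are invented, so substitute whatever your server actually binds. It sends a nonpersistent, expiring message from a nontransactional session and closes its resources when done.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class TunedSender {
        public void send(String text) throws Exception {
            InitialContext ctx = new InitialContext();
            // Illustrative JNDI names; use the names your server binds
            ConnectionFactory factory =
                (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Destination queue = (Destination) ctx.lookup("jms/TuningQueue");

            Connection connection = null;
            try {
                connection = factory.createConnection();
                // Nontransactional session: cheaper than a transactional one
                Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // Nonpersistent delivery skips the write to stable storage
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                // Let undelivered messages expire after 30 seconds
                // instead of never (the default)
                producer.setTimeToLive(30 * 1000);
                producer.send(session.createTextMessage(text));
                producer.close();
                session.close();
            } finally {
                // Closing the connection also closes any sessions, producers,
                // and consumers created from it
                if (connection != null) {
                    connection.close();
                }
            }
        }
    }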

Some more advanced architectural considerations are also worth noting. As with most architectures, asynchronous processing is more scalable than synchronous processing. JMS supports asynchronous reception of messages through the MessageListener interface, which you should use. Similarly, processing in parallel is more scalable, and again JMS supports parallel message processing, with ConnectionConsumers that manage ServerSessionPools.
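
As a sketch of the asynchronous approach, the listener below implements MessageListener so that the provider delivers messages to onMessage() as they arrive and no application thread blocks in receive(). The class name is invented, and the registration method assumes a connection and destination obtained elsewhere (for example, as in the producer sketch above).

    import javax.jms.*;

    // Asynchronous consumption: the JMS provider pushes messages to onMessage()
    public class OrderListener implements MessageListener {

        public static void register(Connection connection, Destination destination)
                throws JMSException {
            Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(destination);
            consumer.setMessageListener(new OrderListener());
            // Start delivery only after the listener is in place
            connection.start();
        }

        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String body = ((TextMessage) message).getText();
                    // ... process the message body ...
                }
            } catch (JMSException e) {
                // Log and handle; exceptions should not escape onMessage()
            }
        }
    }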

When messages are sent in high volumes, delivery can become unpredictable and bursty. Messages can be produced far faster than they can be consumed, causing congestion. When this condition occurs, message sends need to be throttled with flow control. A load-balancing message queue may be needed to handle a high message rate (for example, more than 500 messages per second). In this case, you probably need to use the duplicates-okay acknowledgment mode (Session.DUPS_OK_ACKNOWLEDGE), the fastest of the acknowledgment modes. In this mode the session acknowledges messages lazily, and if an acknowledgment is delayed long enough, the provider simply redelivers the message rather than checking with the client to determine whether it was received. This is more efficient than auto-acknowledge mode (Session.AUTO_ACKNOWLEDGE), which guarantees that each message is delivered only once, but it means the consumer must be able to recognize a message it has already processed, because the same message may arrive more than once. The third mode, Session.CLIENT_ACKNOWLEDGE, requires the receiving client to acknowledge consumed messages explicitly by calling Message.acknowledge(); it is not recommended for high-performance message delivery.
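
The sketch below shows a consumer session created with Session.DUPS_OK_ACKNOWLEDGE together with one simple way to recognize repeats: remembering JMS message IDs. The class name is invented, and in a real system the set of seen IDs would have to be bounded or persisted, so treat this purely as an illustration of the idea.

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import javax.jms.*;

    public class DupsOkListener implements MessageListener {

        // IDs of messages already handled; unbounded here for simplicity
        private final Set processedIds =
            Collections.synchronizedSet(new HashSet());

        public static void register(Connection connection, Destination destination)
                throws JMSException {
            // Lazy acknowledgment: the fastest mode, but duplicates are possible
            Session session =
                connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(destination);
            consumer.setMessageListener(new DupsOkListener());
            connection.start();
        }

        public void onMessage(Message message) {
            try {
                // The same message may be delivered more than once;
                // skip any ID we have already seen
                if (!processedIds.add(message.getJMSMessageID())) {
                    return;
                }
                // ... process the message here ...
            } catch (JMSException e) {
                // Log and handle
            }
        }
    }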

When dealing with large numbers of active listeners, multicast publish-and-subscribe is more efficient than broadcast or multiple individual (unicast or point-to-point) connections. (Note that JMS does not currently support broadcast messaging, only publish-and-subscribe and point-to-point messaging.) When most of a large set of listeners are inactive, or when there are only a few listeners, multicast publish-and-subscribe is inefficient and point-to-point communication should be used instead. An inactive listener requires all the messages it missed to be re-sent, in order, when it becomes active again, which puts too heavy a resource load on the publish-and-subscribe model. For this latter scenario, a unicast-based message-queuing model organized as a hub and spoke is more efficient than multicast.
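
In JMS terms, the cost of an inactive listener shows up most clearly with durable subscriptions. The sketch below, with invented class and subscription names, contrasts a plain (nondurable) topic consumer, which receives only what is published while it is active, with a durable subscriber, whose missed messages the provider must store and redeliver in order.

    import javax.jms.*;

    public class SubscriptionStyles {

        public static void subscribe(Session session, Topic topic,
                MessageListener listener) throws JMSException {
            // Nondurable consumer: receives only messages published while it
            // is active; cheap for the provider, suits many active listeners
            MessageConsumer live = session.createConsumer(topic);
            live.setMessageListener(listener);

            // Durable subscriber ("reportBuilder" is an illustrative name):
            // the provider must store and redeliver, in order, every message
            // published while this subscriber is inactive, the heavy case
            // that point-to-point queuing handles better
            MessageConsumer catchUp =
                session.createDurableSubscriber(topic, "reportBuilder");
            catchUp.setMessageListener(listener);
        }
    }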
