Friday, 16 October 2009

Understanding Connectors & Acceptors

Connectors and acceptors are concepts that often confuse new HornetQ users. Both connectors and acceptors are defined in the HornetQ server configuration (hornetq-configuration.xml), but users are often unsure about when and why they need to configure them. They are described in the user manual, but I have a few drawings which could help users understand them better.

An acceptor defines which types of connections are accepted by the HornetQ server.

A connector defines how to connect to a HornetQ server. The connector is used by a HornetQ client.

HornetQ defines two types of acceptors/connectors:

  • invm – this type can be used when both the HornetQ client and server run in the same virtual machine (invm stands for Intra Virtual Machine)
  • netty – this type must be used when the HornetQ client and server run in different virtual machines (this transport uses the Netty project to handle the I/O)

To communicate, a HornetQ client must use a connector compatible with the server's acceptor.
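As an illustration, a matching invm acceptor/connector pair could be declared like this (a sketch only; the factory class names are those shipped with HornetQ, but check your version's user manual for the exact packages):

```xml
<!-- sketch: an invm acceptor on the server side... -->
<acceptor name="invm">
   <factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
</acceptor>

<!-- ...and the matching invm connector used by clients in the same VM -->
<connector name="invm">
   <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
</connector>
```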

You can connect from a netty connector to a netty acceptor (if they are configured with the same host and port):

You cannot connect from an invm connector to a netty acceptor:

You cannot connect from a netty connector to an invm acceptor:

You cannot connect from a netty connector on port 5445 to a netty acceptor on port 5446:

By default, netty acceptors and connectors use localhost as the server address. If the HornetQ client is not on the same machine as the server, it will not be able to connect to it.

One source of confusion is that HornetQ connectors are configured on the server. But I wrote that connectors are used by HornetQ clients, not servers! Why, then, should connectors be configured on the server?

There are two reasons to configure connectors in the server:

  1. you want to use JMS & JNDI
  2. you want to communicate between HornetQ servers

Using JMS and JNDI

The standard way to use JMS is to lookup JMS resources (ConnectionFactory and Destination) from JNDI.

Context ctx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/ConnectionFactory");
Connection connection = cf.createConnection();
// the client is now connected to the JMS server

The ConnectionFactory defines how the JMS client can connect to the JMS server. With HornetQ, this means that the ConnectionFactory implementation will use a connector to connect to the HornetQ Server.

First of all, we must define a "netty" acceptor (in hornetq-configuration.xml) so that clients can connect remotely to the server:

<acceptor name="netty">
   <factory-class>org.hornetq.integration.transports.netty.NettyAcceptorFactory</factory-class>
   <!-- by default will accept connections on localhost on port 5445 -->
</acceptor>

Then, we define a "netty" connector (in hornetq-configuration.xml) so that JMS clients will know how to connect to the server:

<connector name="netty">
   <factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory</factory-class>
   <!-- by default will connect to localhost on port 5445 -->
</connector>

The final step is to configure the JMS ConnectionFactory (in hornetq-jms.xml) so that when it is looked up from JNDI, it uses the "netty" connector to connect to the server:

<connection-factory name="ConnectionFactory">
   <connector-ref connector-name="netty"/>
   <entries>
      <entry name="/ConnectionFactory"/>
   </entries>
</connection-factory>

When the HornetQ server is started, it looks like this:

In JNDI, the HornetQ server has stored the configuration associated with the netty connector under the "/ConnectionFactory" binding.

When the JMS client looks up "/ConnectionFactory", it also retrieves the netty connector configuration and uses it to create a netty connector to connect to the server:

To sum up: if you use JMS with JNDI, you MUST configure a connector to connect to the server itself.

Communication between HornetQ servers

The other case where you need to define connectors is when HornetQ servers must communicate with each other: for example, when they use core bridges, JMS bridges, or diverts, or when they are in the same cluster.

The important thing to remember is that when two HornetQ servers communicate, one server acts as the client of the other server. In that case, the server acting as the client of the other server MUST define a connector to connect to the other server.

Let's take the example of a core bridge: Server #1 will host a core bridge which takes messages from the "source" queue on Server #0 and forwards them to the "target" queue:

Server #0 configuration

Server #0 is a regular HornetQ server; its setup looks like the "JMS & JNDI" case:

  • a "netty" acceptor to accept connections from remote clients (one of its clients will be the bridge on Server #1)
  • a "netty" connector so that clients can connect to it remotely and send messages to the source queue.

Server #1 configuration

Server #1 is a bit more complex. It acts as a HornetQ server with regard to clients consuming from the target queue, but its bridge acts as a client of Server #0. Its setup requires:

  • a "netty" acceptor to accept connections from remote clients.
  • a "netty" connector so that clients can connect to it remotely and receive messages from the target queue (as explained in the JMS & JNDI case)
  • a "source" connector so that the bridge can connect to Server #0.

Server #1 defines two connectors, "netty" and "source", which serve different purposes: "netty" is used to connect to Server #1 itself (and will be used by its JMS clients), while the "source" connector is used to connect to Server #0 so that the bridge can receive messages from the source queue.
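To make this concrete, a sketch of the relevant parts of Server #1's hornetq-configuration.xml could look like the fragment below. The host name, port, queue names, and parameter syntax are assumptions for illustration; consult the user manual of your HornetQ version for the exact bridge and param syntax.

```xml
<connectors>
   <!-- used by JMS clients to connect to Server #1 itself -->
   <connector name="netty">
      <factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory</factory-class>
      <!-- by default will connect to localhost on port 5445 -->
   </connector>
   <!-- used by the bridge to connect to Server #0 -->
   <!-- (host "server0.example.com" is an illustrative assumption) -->
   <connector name="source">
      <factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="server0.example.com"/>
      <param key="port" value="5445"/>
   </connector>
</connectors>

<bridges>
   <!-- the bridge uses the "source" connector to reach Server #0 -->
   <bridge name="source-to-target">
      <queue-name>jms.queue.source</queue-name>
      <forwarding-address>jms.queue.target</forwarding-address>
      <connector-ref connector-name="source"/>
   </bridge>
</bridges>
```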

A note on addresses

Both "netty" connectors and acceptors can be configured with a host parameter. However the meaning of this "host" value is not the same for both:

  • a connector will connect to a single server. Its host parameter must correspond to one of the server's addresses (e.g. localhost, macbook.local, or 192.168.0.10)
  • an acceptor can accept connections on one or many addresses. You can specify a single address (localhost or 192.168.0.10), a list of comma-separated addresses (e.g. 192.168.0.10, 10.211.55.2, 127.0.0.1), or 0.0.0.0 to bind to all of the host's network interfaces.
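For example, to accept remote connections on every network interface of the host, the acceptor's host parameter can be set to 0.0.0.0 (a sketch; the param syntax shown is an assumption and may vary between HornetQ versions):

```xml
<acceptor name="netty">
   <factory-class>org.hornetq.integration.transports.netty.NettyAcceptorFactory</factory-class>
   <!-- bind to all network interfaces, on the default port 5445 -->
   <param key="host" value="0.0.0.0"/>
   <param key="port" value="5445"/>
</acceptor>
```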

Conclusion

Connector configuration can be confusing at first glance, but it becomes much clearer when you follow these simple rules:

  • If you use JMS with JNDI, you MUST configure a connector to connect to the server itself
  • If a HornetQ server must communicate with another server, you MUST define a connector to connect to the other server

6 comments:

  1. Nice post!

    Maybe in the next post it would be nice if you could talk more about the *invm*.

    Cheers,
    Diego Pacheco.

  2. Hi Jeff, if the connector needs to be the ip address of the server is there an easy way to implement this (i.e. a parameter that resolves to the machine ip address?) I am remote to the server guys and I'd rather not leave it to the monkeys.

  3. Hi Jeff,

    This saved my day!

    Just a question about the HornetQ Stomp example. In the configuration xml, there are:

    1. netty-connector
    2. netty-acceptor
    3. stomp-acceptor (stomp protocol on port 61613)

    Yes, the example is working fine, but I would like to know: since there is no "stomp-connector" defined, how is the stomp client able to reach the stomp-acceptor on port 61613 by just using the netty-connector?

    Thanks.

  4. @Pen,

    The Stomp client does not use a connector to connect to HornetQ.
    If you look at the client, it simply opens a TCP socket on the port 61613 and sends Stomp frames on it.

  5. Thank you, great post. You saved me tons of time.
