By default JLupin uses its built-in load balancers to create a multi-node architecture. This chapter describes the supported configurations. If you think of any other possibilities, please ask a question through the support service or just use the contact form.
NOTICE that the Elastic API can also be used to create a multi-node configuration, but it requires external load balancers or service discovery & repository mechanisms. We recommend using the built-in load balancers by default, especially for communication between microservices. The HTTP interface, including the Elastic API, should be used for external communication: to users (from the access layer) and to other systems.
NOTICE that the TRANSMISSION port will not be discussed here, because it's not used in service and data flow. The connectivity for management tasks is described in the administration guide.
Service repository & discovery
In the standard and recommended configuration, every component in a JLupin environment (Main Server, servlet microservice and native microservice) has at least one instance of the JLupin Software Load Balancer. Every JLupin Software Load Balancer has its own individual in-memory service repository that contains information about where and what microservices are running. This information is used when routing and balancing requests between nodes in the environment.
Entries in the service repository are managed (added or removed) by the service discovery process performed by each JLupin Software Load Balancer. Service discovery periodically polls peers (other Main Servers in the environment) using their INFORMATION PORT (9097), as shown in the following picture.
The JLupin Edge Balancer is not present on this diagram because it doesn't participate in this type of communication.
This action is performed periodically, at a configurable interval. The period is specified by the howOftenCheckingMicroservicesInMillis parameter located in the Main Server configuration file (main.yml) (see details) and can be overridden by the howOftenCheckingServerInMillis parameter when a JLupin Software Load Balancer is created (see details in the JavaDocs - JLupinClientUtil specification).
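For example, the polling period could be set in main.yml like this (a minimal sketch with an illustrative value; the exact location of the parameter within main.yml depends on your JLupin version):

```yaml
# Fragment of main.yml - illustrative value only
howOftenCheckingMicroservicesInMillis: 5000   # poll peers every 5 seconds
```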
The list of peers used by JLupin Software Load Balancers in the service discovery process is located in the main.yml file, in the NODE_PEERS section. If we assume that we have two nodes, NODE_1 (10.0.0.1) and NODE_2 (10.0.0.2), that are configured to invoke services on each other, the NODE_PEERS sections will look as follows:
On NODE_1 (10.0.0.1):

```yaml
[...]
NODE_PEERS:
  NODE_1:
    ip: 127.0.0.1
    jlrmcPort: 9090
    queuePort: 9095
    transmissionPort: 9096
    informationPort: 9097
  NODE_2:
    ip: 10.0.0.2
    jlrmcPort: 9090
    queuePort: 9095
    transmissionPort: 9096
    informationPort: 9097
```
On NODE_2 (10.0.0.2):

```yaml
[...]
NODE_PEERS:
  NODE_2:
    ip: 127.0.0.1
    jlrmcPort: 9090
    queuePort: 9095
    transmissionPort: 9096
    informationPort: 9097
  NODE_1:
    ip: 10.0.0.1
    jlrmcPort: 9090
    queuePort: 9095
    transmissionPort: 9096
    informationPort: 9097
```
Changes to this list can be made in the following ways:
* manual editing of main.yml (not recommended)
* node peer add|remove commands available in the JLupin Platform Local Console (recommended)
* the zone connect command, when the environment is managed by JLupin Control Center (see the next chapter to learn more about zones)
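As an illustration, adding and removing a peer from the JLupin Platform Local Console could look as follows (a sketch based on the command names above; the exact argument list may differ between versions):

```
node peer add NODE_2 10.0.0.2
node peer remove NODE_2
```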
If you have many JLupin Nodes, configuring node peers manually or even through the JLupin Local Console costs a lot of effort. In that case you can arrange your infrastructure into zones. A zone is a logical group of JLupin Nodes that are managed in the same way in terms of JLupin Software Load Balancer configuration.
NOTICE that if you want to use the capabilities of zones you MUST use JLupin Control Center.
A node can be assigned to one zone, which is reflected in its main.yml configuration file:
```yaml
ZONE:
  name: default
MAIN_SERVER:
  name: NODE_1
[...]
```
The configuration model assumes that JLupin nodes in a zone share the same service repository and that communication between nodes in the zone is unrestricted (every node is able to communicate with any other node in the zone). To enable such a communication pattern, the zone connect commands should be executed in the given zone context.
Then, you can establish relations between zones and in that way control the scope of services that are provided to each zone. More about this can be found in the next chapter. The point is that the complexity of the configuration process is driven by the number of zones, not by the number of nodes.
How should zones be established? What are the boundaries between them? In most cases a zone reflects a set of microservices that share similar communication patterns, load characteristics and the same network and security level. Usually it turns out that zones and the relations between them reflect the general architecture of the system in a microservice architecture.
Multi node communication
Communication between microservices
A configuration with two nodes and six microservices, controlled by JLupin Platform, will be discussed. The diagram shows the example traffic flow, including the following stages:
- A request comes from a user to microservice A (access layer)
- The request is directed from microservice A to microservice C using JLupin Remote Method Call (JLRMC - binary, synchronous). This request is routed through the built-in load balancer (as a part of the JLupin Client), which directs the request to a proper node. The load balancer, thanks to the service discovery process, knows on which nodes microservice C is running and chooses the 'best one' using a priority algorithm. The highest priority has the local node, from which the request comes. The load balancer tries to connect to a microservice singleRequestRepeatsAmount times; if it doesn't succeed, the next node from the list is chosen. The request goes to the node on the left.
- The request is processed by the Main Server and once again hits a load balancer, which works exactly as described in point 2. So the request can be redirected to any other node when microservice C on the first node is unavailable.
- The business algorithms on microservice C require services provided by microservice B. The invocation is performed through the built-in JLupin Client using JLRMC (binary, synchronous) and follows the same procedure as before. In this example, the algorithm chooses the first node.
- The load balancer on the Main Server keeps the flow on the same node and finally the request reaches microservice B on the first node (similarly to the description in point 3).
- The business logic requires an asynchronous invocation of a service located on microservice E. This kind of request is processed similarly to a synchronous one. The load balancer chooses the second node to process the request.
- The load balancer on the Main Server keeps the flow on the same node and finally the request reaches microservice E on the second node.
NOTICE that dotted lines mean possible request routes in the process of load balancing.
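The node-selection behaviour described in points 2-3 can be sketched as follows. This is not the JLupin implementation - the class, interface and method names are purely illustrative - it only demonstrates the "local node first, retry singleRequestRepeatsAmount times, then fail over to the next node" logic:

```java
import java.util.List;

// Illustrative sketch only - not the actual JLupin load balancer code.
class FailoverSketch {

    // Stand-in for an attempt to reach a microservice instance on a node.
    interface Invoker {
        boolean tryInvoke(String node);
    }

    // Nodes are ordered by priority (the local node first). Each node is
    // tried up to singleRequestRepeatsAmount times before failing over.
    static String route(List<String> nodesByPriority,
                        int singleRequestRepeatsAmount,
                        Invoker invoker) {
        for (String node : nodesByPriority) {
            for (int attempt = 0; attempt < singleRequestRepeatsAmount; attempt++) {
                if (invoker.tryInvoke(node)) {
                    return node;        // request is served by this node
                }
            }
        }
        return null;                    // no node could be reached
    }
}
```

Note how a request never leaves the local node unless all retries against it fail, which matches the priority behaviour described above.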
Communication with external components (through JLupin Client)
In this scenario, some functions are located outside the JLupin Next Server environment. We assume that these external components are Java-based applications that implement the JLupin Client and use it to communicate with microservices. We also assume that in this example the external component invokes the same service, located on microservice B (to simplify the description of the traffic flow).
- A user (or any other trigger) invokes a service on "Client", which then sends a request to services available on microservice B, using JLRMC (binary, synchronous).
- The JLupin Client has a built-in load balancer that should be configured to communicate with all nodes located in any zone (in the case of microservices controlled by JLupin Next Server this configuration is applied automatically). The load balancer also uses the INFORMATION interface to get service repository entries and, based on that, it redirects traffic to the proper node using a periodic round-robin algorithm.
The rest of the traffic flow is the same as in previous example.
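The periodic round-robin selection mentioned above can be sketched like this (again, the names are illustrative - this is not the actual JLupin Client code):

```java
// Illustrative round-robin node selector - not the actual JLupin Client code.
class RoundRobinSketch {

    private final String[] nodes;   // nodes learned via the INFORMATION interface
    private int next = 0;

    RoundRobinSketch(String... nodes) {
        this.nodes = nodes;
    }

    // Returns the node that should receive the next request,
    // cycling through all known nodes in order.
    String nextNode() {
        String node = nodes[next];
        next = (next + 1) % nodes.length;
        return node;
    }
}
```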
Communication with external components (through Elastic API)
In this scenario, some functions are located outside the JLupin environment, or microservices are treated as an integration or middleware layer. We also assume that in this example the external component ("Client") doesn't implement the JLupin Client and invokes the same service, located on microservice B, as in the previous example (to simplify the description of the traffic flow).
- A user (or any other trigger) invokes a service on "Client", which then sends a request to services available on microservice B, using the Elastic API.
- The Elastic API in a multi-node configuration requires an additional, external load balancer to ensure high availability of communication. The Elastic API exposes microservice B's services and, after transformation to the JLRMC protocol (binary, synchronous), the request goes to the Main Server's load balancer for further processing.
The rest of the traffic flow is the same as in previous examples.
These three examples show how microservices on JLupin can be deployed and integrated in different use cases, involving different protocols and clients.
Communication between zones
Using JLupin Control Center you can configure the connectivity between zones. If zone_A is connected to zone_B, it means that all nodes belonging to zone_A perform service discovery on all nodes in zone_B and can invoke any service that is available in zone_B. This relation is directed, which means that it DOES NOT imply the inverse relation - from zone_B to zone_A. The communication between zones is performed using the INFORMATION PORT (9097/tcp) for periodic service discovery, and JLRMC (9090/tcp) and QUEUE (9095/tcp) for synchronous and asynchronous service invocation, respectively.
Additionally, you can establish connectivity between zones more selectively by configuring a chosen group of nodes from zone_A to be connected with a chosen group of nodes from zone_B (using a set of commands following the pattern zone connect zone_A <zone_A_node> zone_B <zone_B_node>). This configuration scenario can be used to perform advanced access management between zones, where nodes are authorized to invoke particular services in a zone. This idea is presented in the picture below.
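For example, authorizing only selected nodes to communicate across zones could look like this (a sketch following the command pattern above; the node names are illustrative):

```
zone connect zone_A NODE_1 zone_B NODE_5
zone connect zone_A NODE_2 zone_B NODE_5
```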