Overview
General purpose & assumptions
JLupin Edge Balancer is an NGINX instance with the Lua module, extended by a custom Lua module prepared by JLupin (called 'jlupin') that integrates it with the Main Server. The basic assumption is that the Main Server exposes all necessary information for NGINX on HTTP_INFORMATION_PORT (the /listMicroservices and /nodeInfo entry points), which NGINX uses to perform an auto-configuration process, providing controlled access to microservices in the environment through HTTP interfaces (HTTP in the case of servlet microservices and ELASTIC_API in the case of native ones).
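These entry points can be queried directly, for example with curl. A minimal sketch, assuming a node at 10.2.2.31 and that HTTP_INFORMATION_PORT is set to 9097 (an assumption - check the Main Server configuration for the actual value):

# list microservices known to the Main Server (port number is an assumption)
curl http://10.2.2.31:9097/listMicroservices

# basic information about the node itself
curl http://10.2.2.31:9097/nodeInfo

The Edge Balancer consumes these same entry points during its auto-configuration process.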
Virtual servers
By default, JLupin Edge Balancer runs the following virtual servers (as HTTP servers):
- edge8000 (port: 8000) - default 'data' type virtual server, where services from servlet microservices are provided implicitly, unless defined differently in their configuration files. See the next chapter to learn more.
- edge8001 (port: 8001) - additional 'data' type virtual server, where services from microservices may be provided, if it has been defined accordingly in their configuration files. See the next chapter to learn more.
- edge_admin (port: 8888) - default 'admin' type virtual server, where only administration services are provided (for example JLupin Web Console, available at the /webcontrol context). SSL is turned on on this virtual server by default.
- edge_discovery (port: 8889) - additional 'admin' type virtual server, whose functionality has been limited to discovery functions and on which SSL is turned off. This port is used by other nodes to discover services and build their edge balancer configurations.
You can define any number of 'data' type virtual servers to provide additional security mechanisms (for example: external services on port X, internal services on port Y and back office frontends on port Z). There can also be more 'admin' and 'discovery' virtual servers, but we advise setting them up carefully, preferably along with our consulting services.
The type of a virtual server determines what kind of services are provided:
- on 'data' virtual servers (interfaces) only user microservices are provided.
- on 'admin' virtual servers (interfaces) none of the user microservices are provided (regardless of their configuration); only administration tools provided by JLupin are available here.
- on 'discovery' (special 'admin') virtual servers (interfaces) only informational entry points are available, useful in the process of discovering the environment (they are used by JLupin Edge Balancer itself).
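The practical difference between the types can be seen by hitting the same node on different ports. A sketch, assuming a node at 10.2.2.31 with the default ports and an 'exchange' servlet microservice deployed (the -k flag is only needed if the default certificate on the admin port is self-signed):

# 'data' virtual server: user microservices only
curl http://10.2.2.31:8000/exchange/

# 'admin' virtual server: administration tools only (SSL on by default)
curl -k https://10.2.2.31:8888/webcontrol

# 'discovery' virtual server: informational entry points only
curl http://10.2.2.31:8889/_discovery/contexts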
Intra-zone routing
One of the most important features (we could even say - the mission) of JLupin Edge Balancer is to simplify access to the JLupin environment through the HTTP(S) protocol. Services provided by JLupin can be used by other systems in the environment and/or by end clients; our goal is to ensure the easiest way of accessing them, while keeping reliability and performance at the highest level.
Since JLupin Platform 1.5 we've introduced the intra-zone routing feature, which allows services to be accessed over HTTP(S) regardless of the node on which the microservice providing them has been deployed. There are only two conditions for such behavior:
- the nodes must share the same name attribute in the ZONE section of the Main Server configuration (see how it looks in configuration). In JLupin 'slang', such nodes are in the same zone.
- on each node, JLupin Software Load Balancers must be configured to connect to all other nodes that are in the same zone (see above). In JLupin 'slang', the zone is then connected.
In that way each node performs additional discovery tasks (using the discovery virtual server, described in the previous chapter) and builds its local proxy & balancer configuration including remote services. Let's illustrate this feature with a simple example:
Let's assume that we have two nodes:
- t-app1 (ip: 10.2.2.31)
- t-app2 (ip: 10.2.2.32)
with JLupin Platform installed & running. Each node is in the same zone (default) and the zone is connected, which means that the following commands have been executed (using Linux as an example):
t-app1:
control.sh node peer add t-app2 10.2.2.32
t-app2:
control.sh node peer add t-app1 10.2.2.31
Now, each node has the same list of node peers:
t-app1:
control.sh node peers
SrcZone SrcNode SrcIPAddress Zone Node IPAddress CommPorts
default t-app1 127.0.0.1 default t-app1 127.0.0.1 9090,9095,9096,9097
default t-app1 127.0.0.1 default t-app2 10.2.2.32 9090,9095,9096,9097
t-app2:
control.sh node peers
SrcZone SrcNode SrcIPAddress Zone Node IPAddress CommPorts
default t-app2 127.0.0.1 default t-app1 10.2.2.31 9090,9095,9096,9097
default t-app2 127.0.0.1 default t-app2 127.0.0.1 9090,9095,9096,9097
Let's deploy two microservices, part of the demo application:
- SERVLET exchange at t-app1
- NATIVE currency-converter-eur at t-app2
Status after deployment:
t-app1:
control.sh microservices status
Zone Node Microservice ProcessID Status Available Activated
default t-app1 exchange 28629 RUNNING yes yes
t-app2:
control.sh microservices status
Zone Node Microservice ProcessID Status Available Activated
default t-app2 currency-converter-eur 17716 RUNNING yes yes
Due to intra-zone routing, services are available regardless of where they have been deployed - you can access the exchange frontend on both nodes:
- GET http://10.2.2.31:8000/exchange/
- GET http://10.2.2.32:8000/exchange/
as well as the services provided by the exchange backend (currency-converter-eur):
- POST http://10.2.2.31:8000/_eapi/ELASTIC_HTTP/currency-converter-eur/currencyConverterEurService/convert (body: [1000, "USD", "EUR"])
- POST http://10.2.2.32:8000/_eapi/ELASTIC_HTTP/currency-converter-eur/currencyConverterEurService/convert (body: [1000, "USD", "EUR"])
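For example, with curl (the Content-Type header is an assumption here; adjust it to the actual service contract):

# call the ELASTIC_HTTP service through the 'data' virtual server
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '[1000, "USD", "EUR"]' \
  http://10.2.2.31:8000/_eapi/ELASTIC_HTTP/currency-converter-eur/currencyConverterEurService/convert

Sending the same request to 10.2.2.32 yields the same result, even though currency-converter-eur runs only on t-app2.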
You can also see this in the discovery service, which shows that each node has built all contexts from all nodes:
GET http://10.2.2.31:8889/_discovery/contexts:
{
"8000": [
{
"contextName": "exchange",
"routes": [
{
"host": "127.0.0.1",
"discoveryPort": "9098",
"state": "AVAILABLE",
"priority": 1,
"name": "exchange",
"port": 20000
}
],
"state": "AVAILABLE",
"httpStickySession": "false",
"elasticApi": "NULL",
"apiType": "SERVLET"
},
{
"contextName": "currency-converter-eur",
"routes": [
{
"host": "10.2.2.32",
"discoveryPort": "9098",
"state": "AVAILABLE",
"priority": 1,
"name": "currency-converter-eur",
"port": "8082"
}
],
"state": "AVAILABLE",
"httpStickySession": "false",
"elasticApi": "ELASTIC_HTTP",
"apiType": "NATIVE"
}
],
"8001": {},
"8888": {},
"8889": {}
}
GET http://10.2.2.32:8889/_discovery/contexts:
{
"8000": [
{
"contextName": "currency-converter-eur",
"routes": [
{
"host": "127.0.0.1",
"discoveryPort": "9098",
"state": "AVAILABLE",
"priority": 1,
"name": "currency-converter-eur",
"port": "8082"
}
],
"state": "AVAILABLE",
"httpStickySession": "false",
"elasticApi": "ELASTIC_HTTP",
"apiType": "NATIVE"
},
{
"contextName": "exchange",
"routes": [
{
"host": "10.2.2.31",
"discoveryPort": "9098",
"state": "AVAILABLE",
"priority": 1,
"name": "exchange",
"port": 20000
}
],
"state": "AVAILABLE",
"httpStickySession": "false",
"elasticApi": "NULL",
"apiType": "SERVLET"
}
],
"8001": {},
"8888": {},
"8889": {}
}
Deployment plans
JLupin Edge Balancer can be deployed in two ways:
- Embedded - it is included in the JLupin Platform package available on our site (download). It acts as a technical microservice managed and monitored by the Main Server and is integrated through a specific configuration, which locates the user configuration file in the same directory as the Main Server ($JLUPIN_HOME/platform/start/configuration/). This is the default deployment plan and is suitable for most use cases.
- Standalone - it can be deployed as an independent instance, managed outside the JLupin environment. It is fully functional as long as it is configured to communicate with the Main Server in the JLupin environment. This type of deployment plan can be useful if you treat the Edge Balancer as an external balancer for user access that should be deployed in a separate network zone, outside the application one.