jLupin Next Server
  • Entering the world of microservices has never been so easy!
  • Increase your efficiency, enable 24/7, reduce TCO…
  • … keeping the architecture simple and scalable!

Overview

Entering the world of microservices
has never been so easy!

 

JLupin Next Server is able to use the available server capacity to automatically set up a complete, high-performance microservice environment with unique functionalities and a simple interface.

Here is the story of transforming an application to a microservices architecture which can easily evolve into a sophisticated Enterprise Microservices Bus. It is the story of how JLupin makes this process easy, gradual and efficient.

Let’s start with a standard 3-tier application that provides a web interface for end users. Nowadays it’s hard to find a totally independent application, so we also add an interface for communication with an external system (see the diagram below).

It’s simple, quite ordinary, it works… but is it enough in the world of 24/7 services and mobile-first strategies?
Is it prepared for future demands?

Does it ENABLE BUSINESS AGILITY? Even if this is enough now, we are sure that it will not be enough for tomorrow! And tomorrow is very close…

So if you want to enter the enterprise world with your application, the first thing you should do is… run it on JLupin Next Server.
This step keeps the architecture simple (see the diagram below) but opens up a lot of opportunities to go further and bring completely new quality into your business services.

Now, your application becomes a single JLupin microservice.

 

 

This “small” step at the architecture level is a huge step in terms of functionality and opportunities to improve the application environment, for example:

  • Zero downtime deployment (yes, you don’t need a cluster to do that!)
  • Asynchronous request support (good choice for external systems invocations)
  • Built-in health checking
  • Self-healing mechanism to minimize service outages
  • Stability and performance (proven during performance tests)


But this is not the end of your application’s journey with JLupin Next Server. Sooner or later you will need to increase your business agility in terms of time-to-delivery for parallel changes in your code. One answer to this challenge is the microservices architecture, where the business parts of an application are divided and run as independent pieces of code provided in the environment as services. Usually this requires a very complicated technology stack for automation; many additional solutions have to be engaged and a lot of work has to be done.

In the case of JLupin Next Server the requirements are significantly lower – actually there are NO specific requirements at all! The microservices architecture is a natural evolution of any application and can be provided on JLupin Next Server without additional cost. The only thing that you must do is... divide your single microservice into "smaller" ones and establish communication between them (see the diagram below).


With JLupin, microservices run in separate runtime environments (JVMs) and communicate with each other through built-in software load balancers. Every component of the microservices landscape is ready to use, provided on the JLupin Next Server platform.

Ok, so we have the microservices architecture… but this is not the end of the journey through JLupin capabilities.

We are living in the world of the “mobile first” approach, where everything is integrated with everything else. With JLupin Next Server you don’t need to prepare your application for integration with mobile or any other IT system. Your code is automatically exposed as services in the environment, with the same communication semantics as the JLupin Java Remote Invocation protocol used by the software load balancers. Look at the diagram below – it’s easy, it’s natural.
 



 

This stage of the application development is a good entry point into a local (or global, if you like) Enterprise Microservices Bus, where on the same piece of hardware you can build an application stack of at least two layers and provide business logic as services for different frontend applications, with all the unique capabilities of JLupin Next Server.
At this stage, the business logic layer aggregates atomic services available on the second layer (communication is protected by software load balancers) and provides data processing whose results are transferred to the frontend components. All application components of such an architecture run as independent microservices in a consistent environment controlled by JLupin Next Server.



Key features of such architecture:
 

  • Each microservice is a separate JVM process with its own memory allocation and thread pool.
  • Each microservice is a separate jar file containing ordinary JAVA classes managed by a Spring container.
  • All services can communicate through unified interfaces using JLupin software load balancers.
  • Each microservice is automatically available through the default communication interfaces (started automatically with JLupin Next Server), i.e.:
 
  • JLupin Java Remote Invocation – the possibility of calling methods through ordinary JAVA interfaces via the software load balancers that are part of the JLupin Next Server distribution
  • JLupin Web Service http - XML – the possibility of calling methods using the http protocol with in/out representation in XML
  • JLupin Web Service http - JSON – the possibility of calling methods using the http protocol with in/out representation in JSON
  • JLupin Asynchronous Queue – the possibility of calling a method on a given class asynchronously – all without setting up additional queue servers.
 

 
If you would like to move the atomic layer to a separate OS – no problem, you can do it; just remember to change the addresses in the software load balancers' configuration.
If you would like to put more power into your environment, increase the level of availability and build a cluster of processing nodes – no problem, you can do it; once again, don't forget to change the software load balancers' configuration. Nothing else.
 
It should be easy… and it is, with JLupin Next Server.


You can easily create a project such as the one below: four microservices aggregate two atomic microservices (modules 1 to 6), and the fifth (5) microservice communicates with the seventh (7) atomic microservice.

Sample project screenshot:



 

Each Maven module will be compiled to a separate jar file and executed as a separate JVM process. 
As a result you can easily create an environment similar to the one shown below:
 
Want to know more?
Read about JLupin's details:



 

JLupin Next Server – what is it?

JLupin Next Server is a complete server for server-side Java applications which require the highest efficiency and availability of the runtime environment.
It is a real enterprise-class environment.

 

1. What does JLupin Next Server consist of?

  • the main server – a separate and independent JVM instance that manages the microservices
  • microservices which are separate JVM processes managed by the main server
  • software load balancers
  • queues
  • a whole variety of interfaces, thanks to which, it is possible to communicate with a particular microservice in various ways, regardless of the language or the platform
  • mechanisms of communication between microservices
  • mechanisms of dynamic memory reallocation for microservices and active response to errors such as memory leaks, stack overflow etc.
  • mechanisms of asynchronous subsystem triggering
  • named thread pools which allow the separation of invocations not only for particular classes but also for particular methods
 

JLupin is currently the best solution for an SOA / microservices bus architecture, including the latest Middleware-class architectures based on microservices, where separation takes place not only at the level of code logic but also at the level of the JVM runtime environment.
 

It is important that the separation of specific services doesn't increase the level of the environment's complexity and, in consequence, doesn't increase operational costs in the maintenance department either. By using the JLupin Next Server architecture (microservices managed by the primary server as separate JVMs) and its functionality (embedded load balancers and internal call routing), an easily managed, scalable bus of enterprise-class microservices can be built.
 

The following diagrams show the simplicity of the solution for microservices based on JLupin Next Server technology in comparison to a solution using a standard JAVA application server (e.g. Tomcat). When simulating solution development, it was assumed that services on two layers of the microservice architecture (services aggregating calls to other services – marked in blue, and single services – marked in red) are to be launched, with a group of four microservices, in a redundant configuration on two nodes on each of the layers.
 

If the complexity is defined as a number of items subject to independent management processes, then:

JLupin Next Server environment complexity = 9.

(1 x load balancer + 4 x operational system + 4 x application server)

complexity of a standard environment composed of a number of application servers = 22.

(2 x load balancer + 4 x operational system + 16 x application server)

This means that in the case of the solution using JLNS, the complexity depends solely on the scale of operations; in the other case it depends on all components of the environment, including the number of microservices.
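The arithmetic above can be captured in a few lines of Java (a trivial sketch; the component counts are taken directly from the comparison above):

```java
public class ComplexityComparison {
    // Complexity = number of items subject to independent management:
    // load balancers + operating system instances + application servers.
    static int complexity(int loadBalancers, int operatingSystems, int appServers) {
        return loadBalancers + operatingSystems + appServers;
    }

    public static void main(String[] args) {
        int jlns = complexity(1, 4, 4);      // JLupin Next Server environment
        int standard = complexity(2, 4, 16); // standard application-server environment
        System.out.println("JLNS complexity: " + jlns);         // 9
        System.out.println("Standard complexity: " + standard); // 22
    }
}
```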

Furthermore, in the case of a standard solution, it is necessary to introduce additional semantics at the level of the service's URI, which an external load balancer can read in order to direct a request accurately to the proper microservice on a given server.

With JLupin Next Server such compromise does not appear. It is a platform created simply for microservices.

 

 

2. How do classic application containers for Java Server Side operate?

The classic JEE server places all applications as part of one address space – one JVM process. It is the space of the server itself, namely of the JVM process on which it runs. The fact that each application may have its own classloader is irrelevant here.
The disadvantage is that applications influence each other. Any kind of memory leak, or excessive demand for memory or the thread pool by one application, may affect the operation of all applications. The same applies to the number of open file descriptors in Linux-class systems (they might be open sockets to the databases of other subsystems, or simply open files) – the number of such descriptors is assigned per operating system process. With a large number of applications, it may quickly become saturated.

In order to achieve high availability on solutions other than JLupin, one application should be started per application server (so that one application does not influence another).
As an example, with 10 applications and a desire to preserve the solution's safety, we are forced to start each of them on at least two nodes (which gives 20 application servers). In comparison, when using JLupin Next Server, 2 (TWO) nodes are sufficient for the correct functioning of all 10 applications running as microservices.

Furthermore, in other solutions, there is the question of addressing and placing external load balancers which, depending on the call, must address different ports of the various servers on which individual applications run.

Of course, the remaining issue is that of the calling systems' 'knowledge', or a configuration of the load balancers which tells them where to direct traffic. In the case of JLupin Next Server this is done automatically.
 

3. How does JLupin Next Server work?

JLupin Next Server is the first server for server-side JAVA applications which implements the multicontainer application server approach. Applications running under the control of JLupin Next Server are called microservices.

The main server, which is a separate JVM process, manages the microservices: it starts, stops and restarts them, and during operation directs traffic to a particular microservice, monitors memory and prevents memory leaks.

Each microservice is a separate JVM process with its own share of memory and thread pool - the figure below illustrates it.


Because of that, each microservice has its own JVM – its own stack, its own thread pool, including defined thread pools.

Microservices are completely independent of each other; the operation of one microservice does not impact the operation of the others. For example, high memory usage in one microservice does not make memory inaccessible to another, and the GC's work affects only its own microservice.

With such an operating architecture, JLupin Next Server can implement its main assumptions:
 

3.1 Zero downtime deployment. How does JLupin implement it in practice?

The process: once the new microservice's files are loaded, replacing the old ones, the server starts the new version of the microservice as a new JVM process. Until the new process is working correctly, all requests are still directed to the previous version of the microservice. Once the new microservice version starts correctly, JLupin begins directing new requests to the new version. Requests already being served by the old version of the microservice are served to the end. Once they have finished correctly, the old process is switched off by the main server.

This process does not require any administrative stoppage of any of the infrastructure elements and the whole process is carried out without losing any requests.

The whole process only requires one command from the administrator:

appRestart microservice_name


Before carrying out the redeployment operation, the server automatically archives the current version of the microservice. Therefore, it is possible to reinstate the old version of the microservice using a particular server command.

Archiving, along with setting the new version of the microservice is carried out by a simple administrator's command:

appRestartSafe microservice_name


JLupin also enables percentage-based redirection (e.g. 15%) to the new version of the microservice – it is carried out by the following command:

appStartPercentMove microservice_name 15


Depending on the decision, the whole traffic may be redirected into the new version of the microservice by giving the following command:

appAcceptNew


or by stopping the new microservice version:

appStopPercentMove


Furthermore, upon request, we can bring any archived version of the microservice back online by giving the following command:

appRestore microservice_name 1.4

 which will reinstate the microservice in version 1.4.

The version changing process in the zero downtime deployment mode is illustrated in the figure below.

 

3.2 Automatic monitoring of memory errors: OutOfMemoryError and StackOverflowError

In addition, the main server monitors the memory of each microservice, and when an OutOfMemoryError or StackOverflowError occurs, a new instance of the microservice is started with newly assigned memory. It does so using exactly the same algorithm as the one used during the zero downtime deployment process described above. Assigning new memory takes place in accordance with a particular algorithm – the default implementation is 3 × 15% of the last assignment.

Furthermore, and importantly, when an error related to a memory leak occurs, the server automatically creates a dump file from the JVM which can then be analysed by DevOps and/or programmers using software for analysing JVM dumps.

The memory assignment operation is illustrated in the picture below.



 

3.3 What is the JLupin microservice?


The JLupin microservice is an application consisting of ordinary JAVA classes packaged in a particular archive (jar) or archives.

They do not implement any additional interfaces which determine the way of accessing them.


It is only required to provide an application container which will load all the classes and make them visible to JLupin itself.

At this point, JLupin supports the default application container, which is the Spring container. Because of that, a programmer who knows Spring will find an ideal environment to start their application on the JLupin Next Server platform.

Code samples:

Interface:

package com.jlupin.sampleapp.paramsfunction.service;


public interface DigestService {
    String getMD5Digest(String name, String surname) throws Throwable;
}


Implementation (with Spring annotations):

package com.jlupin.sampleapp.paramsfunction.service.impl;


import com.jlupin.sampleapp.paramsfunction.service.DigestService;
import com.jlupin.sampleapp.paramsfunction.util.Utils;
import org.springframework.stereotype.Service;

import java.security.MessageDigest;


@Service("digestService")
public class DigestServiceImpl implements DigestService {


    public DigestServiceImpl() {
    }

    @Override
    public String getMD5Digest(String name, String surname) throws Throwable {
            // Digest the concatenated input parameters.
            String stringToDigest = name + surname;
            MessageDigest messageDigest = MessageDigest.getInstance("MD5");
            byte[] buffer = stringToDigest.getBytes();
            messageDigest.update(buffer);
            byte[] digestBuffer = messageDigest.digest();
            return Utils.byteToHex(digestBuffer);
    }
}
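For reference, the digest logic above can be exercised with plain JDK classes. The `byteToHex` helper below is a minimal stand-in for the sample application's `Utils.byteToHex` (its exact implementation is not shown in this document):

```java
import java.security.MessageDigest;

public class DigestCheck {
    // Minimal stand-in for Utils.byteToHex from the sample application.
    static String byteToHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    static String md5Hex(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        return byteToHex(md.digest(input.getBytes()));
    }

    public static void main(String[] args) throws Exception {
        // The service concatenates name and surname before digesting.
        System.out.println(md5Hex("name" + "surname"));
    }
}
```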



In such an approach, the external container looks after the components' life cycle, along with mechanisms such as transaction managers, ORM mechanisms and others.

Because the application container (JLupinApplicationContainer) is an abstract layer, the developer may decide to use another application container.

The place of the application container in the data transfer process is illustrated below.


 

3.4 Automatic and transparent exposure of services through various access interfaces (including queues which provide full messaging)

JLupin automatically exposes the following access interfaces for the classes and methods written as part of a particular microservice.

Importantly, exposing particular classes and methods does not require any additional imported interfaces, annotations or XML descriptors describing how a particular class should be accessed.

Interfaces, in order:
 

1. Interface enabling remote JAVA method invocation – using the JLupin Java Remote Invocation Interface (invoking the sample service from the previous chapter (3.3)):

Example:

@Test
public void test() throws Throwable {
        DigestService digestService = produceDigestService();
        String digest = digestService.getMD5Digest("name", "surname");
        logger.info("current digest:" + digest);
}

private DigestService produceDigestService() throws Throwable {

        String ip = InetAddress.getLocalHost().getHostAddress();

        JLupinInputParameter jLupinInputParameter = new JLupinInputParameter("digestService",
                             "sessionId", "requestId", ip, new Locale("en"),
                             "userName", new String[] {"user_privilege_1", "user_privilege_2"});

        // jLupinProxyObjectProducer is provided by the JLupin client API
        return jLupinProxyObjectProducer.produceObject(DigestService.class, jLupinInputParameter);
}

 

2. Queue interface enabling the placement of a task in the queue (an execution request for any class and its method) – in the reply we obtain only the task number assigned by the server (invoking "digestService" again, but asynchronously):

Example:

JLupinInputParameter jLupinInputParameter = new JLupinInputParameter("digestService",
                             "sessionId", "requestId", ip, new Locale("en"), 
                             "userName", new String[] {"user_privilege_1", "user_privilege_2"});

jLupinInputParameter.setAsynchronousBusName("simpleBus");

JLupinOutputParameter jLupinOutputParameter = jLupinDelegator.delegate(jLupinInputParameter);

String taskId = jLupinOutputParameter.getResultObject();


 

3. Interface enabling WEB SERVICE SOAP-XML :
Example input:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ent="http://entrypoint.impl.jlupin.org/">
   <soapenv:Header/>
   <soapenv:Body>
      <ent:jLupinService>
         <jLupinInputParameter>
         <applicationName>firstSampleApplication</applicationName>
           <locale>pl</locale>
            <methodName>getMD5Digest</methodName>
              <privilegeList>
               <privilegeName>privilege</privilegeName>
            </privilegeList>
            <requestId>request1</requestId>
            <sequenceName>sampleParamArrayXmlInOutSequence</sequenceName>
             <serviceName>digestService</serviceName>
            <sessionId>session1</sessionId>
            <user>test_user</user>
             <busName>busName</busName>
             <paramArray><![CDATA[<string>name</string>;
             <string>surname</string>]]>
            </paramArray>
         </jLupinInputParameter>
     </ent:jLupinService>
   </soapenv:Body>
</soapenv:Envelope>

 

Example output:

<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
 <env:Header/>
  <env:Body>
   <jlns:jLupinServiceResponse xmlns:jlns="http://entrypoint.impl.jlupin.org/">
    <jLupinOutputParameter>
     <executedServiceError>false</executedServiceError>
     <result><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
             <java version="1.8.0_45" class="java.beans.XMLDecoder">
             <string>50382149719ab04e47a966ad959f4241</string>]]>
     </result>
    </jLupinOutputParameter>
   </jlns:jLupinServiceResponse>
  </env:Body>
</env:Envelope>

 

4. Interface enabling WEB SERVICE HTTP-JSON:

Example input:

{
  "jLupinInputParameter": {
    "applicationName": "firstSampleApplication",
    "locale": "pl",
    "methodName": "getMD5Digest",
    "privilegeList": [ "privilege_1", "privilege_2" ],
    "requestId": "request_1",
    "serviceName": "digestService",
    "sequenceName": "sampleParamArrayJsonInOutSequence",
    "paramArray": ["java.lang.String:\"name\"", "java.lang.String:\"surname\""],
    "sessionId": "session_1",
    "user": "test_user"
  }
}

 

Example output:

{
 "jLupinOutputParameter": {
 "result": "50382149719ab04e47a966ad959f4241",
 "executedServiceError": false
 }
}
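Because the JSON interface above is plain HTTP, any standard client can call it. The sketch below builds such a request with the JDK's `java.net.http` API; the endpoint URL (`http://localhost:8080/jlupin`) is a placeholder assumption – check your JLupin Next Server configuration for the actual host, port and path:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class JsonCallSketch {
    // Builds (but does not send) a POST request against a hypothetical endpoint.
    static HttpRequest buildRequest(String endpoint, String body) {
        return HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        String body = "{ \"jLupinInputParameter\": { "
                + "\"applicationName\": \"firstSampleApplication\", "
                + "\"serviceName\": \"digestService\", "
                + "\"methodName\": \"getMD5Digest\", "
                + "\"sessionId\": \"session_1\" } }";
        HttpRequest request = buildRequest("http://localhost:8080/jlupin", body);
        System.out.println(request.method() + " " + request.uri());
    }
}
```

To actually send the request, pass it to `HttpClient.newHttpClient().send(...)` and read the `jLupinOutputParameter` from the response body.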


 

3.5 Software load balancers

Load balancers offered by JLupin Next Server differ significantly from the ones existing on the market. Previously available load balancers (both software and hardware) lose a request in a situation where it has already been sent to a particular node which then crashed; only the subsequent requests are redirected to the working nodes. JLupin Next Server is able to save any such request and redirect it to the nearest and least loaded working node. In the algorithm based on the 'health checking' model, the server assesses the following parameters in order to select an optimal node: the number of threads processing current requests, used memory and available memory.
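The health-based selection described above can be sketched as follows. This is an illustrative model, not JLupin's actual implementation; in particular, the weighting of active threads against memory pressure is an assumption:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class HealthAwareBalancer {
    // Snapshot of the health parameters mentioned in the text.
    record NodeHealth(String address, int activeThreads, long usedMemory, long availableMemory) {
        // Lower score = less loaded. The weighting here is illustrative only.
        double loadScore() {
            double memoryPressure = (double) usedMemory / (usedMemory + availableMemory);
            return activeThreads + memoryPressure * 100;
        }
    }

    // Pick the least loaded working node, if any is available.
    static Optional<NodeHealth> pickNode(List<NodeHealth> workingNodes) {
        return workingNodes.stream().min(Comparator.comparingDouble(NodeHealth::loadScore));
    }

    public static void main(String[] args) {
        List<NodeHealth> nodes = List.of(
                new NodeHealth("10.0.0.1:9090", 40, 700, 300),
                new NodeHealth("10.0.0.2:9090", 5, 200, 800));
        System.out.println(pickNode(nodes).orElseThrow().address()); // 10.0.0.2:9090
    }
}
```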

The load balancers which are part of JLupin are a part of the client application and are contained in the jlupin-client.jar library.

The location of load balancers is illustrated below (figure 8).

 

3.6 Communication between microservices

JLupin provides an API (based on the JLupin Java Remote Object Invocation Interface) for communication between microservices. It is an alternative to the access interfaces provided by the microservice (e.g. web service, REST). For communication between microservices, the API also uses the same load balancing mechanism that is used by client applications. The picture below illustrates this (figure 9).

 

3.7 Defined thread pools

JLupin provides defined thread pools – they separate and protect request invocations so that certain groups of requests, which may have long response times, do not impact other requests. The configuration goes much deeper and protects much better than the standard thread pools of EJB components in classic JEE solutions.

In the case of the solution provided by JLupin Next Server, the separation is at the level of the classes themselves and their methods. The picture below illustrates this (figure 10).
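The idea of per-class and per-method pools can be illustrated with plain JDK executors. This is a simplified model, not JLupin's configuration mechanism; the pool names and sizes are invented for the example:

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class NamedThreadPools {
    // One bounded pool per "className.methodName" key, so a slow method
    // cannot exhaust the threads needed by fast ones.
    private final Map<String, ExecutorService> pools = Map.of(
            "reportService.generate", Executors.newFixedThreadPool(2),      // slow calls
            "digestService.getMD5Digest", Executors.newFixedThreadPool(8)); // fast calls

    Future<String> submit(String poolKey, Callable<String> task) {
        return pools.get(poolKey).submit(task);
    }

    public static void main(String[] args) throws Exception {
        NamedThreadPools router = new NamedThreadPools();
        Future<String> result = router.submit("digestService.getMD5Digest", () -> "ok");
        System.out.println(result.get()); // ok
        router.pools.values().forEach(ExecutorService::shutdown);
    }
}
```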

 

3.8 Memory class loaders

During the server's start-up, the server's class loaders load all classes into memory, releasing file descriptors. As a result, it is possible to exchange all files during the microservice's operation without the risk of the server being unable to find a particular class to load, provided it has not been used yet. Thanks to the class loaders operating this way, it is possible to meet the zero downtime deployment demand.
 

3.9 Asynchronous method triggering

JLupin provides an API for invoking particular parts of the code in asynchronous mode.

This enables the main thread to complete its work on a particular request sooner and limits the response time. The picture below illustrates this (figure 11).

Comparing the classic infrastructure with the JAVA servers available on the market to the infrastructure using the JLupin Next Server platform.

Figures 12 and 13 depict the difference in environment configuration for 4 independent microservices in the services layer.

Using a solution other than JLupin Next Server requires the provision of 8 instances (physical or virtual) of the operating system, 8 application servers, their configuration, the provision of external load balancers and skilful traffic management depending on the origin and destination of a particular request.

 

Figure 12 – classic infrastructure without using the JLupin Next Server.

 

Figure 13 depicts the infrastructure using a solution based on the JLupin Next Server. It requires only 2 instances of the operating system (physical or virtual), uses less main memory, and has no application server instance overhead. It does not require external load balancers, nor any additional external configuration, as everything is returned in input messages.

 

Comparing the classic AWS cloud infrastructure with the JAVA servers available on the market to the infrastructure using the JLupin Next Server platform, also in the AWS cloud.


The same applies to the configuration in the cloud solution. The solution cost in Amazon cloud will be lower with the use of JLupin Next Server.

Let's look at the figures below which illustrate the placement of 4 applications - each one on a separate node in the AWS cloud in the redundant configuration.

In figure 14, the solution requires 8 application servers (it is less important whether they run on a clean operating system, a virtualised one or on Docker) and 4 external load balancers. Furthermore, because Amazon requires the use of commercial elastic load balancers for each application, this greatly increases the operating costs.

JLupin Next Server, on the other hand, allows the solution's structure to be limited to two operating systems (they may also be Docker containers or other VMs) and two application servers. In addition, JLupin Next Server provides its own software load balancer which is a part of the calling microservice. This is illustrated in figure 15.