Back in the day, business used to be much simpler. The only requirements were straightforward all-in-one solutions, which usually ended up as monoliths. Because of that, the supporting systems used to be much simpler too. Over time, however, the risk of ending up with a clumsy, overly tightly coupled system grew greater and greater.

These days, markets are changing even more rapidly. You either adapt quickly or you go out of business. And software has no choice but to adapt to this new reality.

Changing the way you develop your system means changing your mindset, learning new solutions and applying best practices. This is exactly why a dedicated platform might be of great help to newcomers.

Lagom platform


Here comes Lagom

Lagom is a platform that not only delivers a microservice framework, but also provides a complete toolset for developing applications, as well as creating, managing and monitoring your services. Although it is still a pretty young project (version 1.0.0-M2) that currently targets Java developers, we have decided to give it a try and couple it with the power of Scala.

The platform is based on popular technologies, mostly from Scala’s ecosystem:

  • sbt (Scala’s build system, project definitions),
  • Play (REST endpoints, Guice dependency injection),
  • Akka (processing),
  • Cassandra (default data storage).

Most of the technology is hidden behind interfaces, and most newcomers won’t need to deal with it directly, although you can explore it if you want to. This is extremely important for developers working on a monolithic J2EE codebase: Lagom will allow them to split a problematic domain easily into different services without having to learn a lot upfront. It also comes with a whole range of improvements: hot redeploy, easier testing and app management.

How is it built?

Since Lagom is built specifically for microservices, the whole platform is asynchronous by nature, while still based on the familiar request-response cycle. Services communicate in a non-blocking way, making the whole app much more efficient. Inter-service communication is done over HTTP, by injecting a reference to a service’s API and invoking its methods.

Typically, every microservice is divided into two parts: the API and the implementation. The API is a formal contract; it tells other developers and teams what the given service can do for them and how to interact with it. The implementation is where the code actually lives. It is strongly decoupled from the API, so it can evolve separately as long as the contract holds.

To build an entry point we use a Service trait, which allows us to define external endpoints, and ServiceCalls, which declare how to transform a request into a response. The process closely resembles preparing a service block and delegating each call to the proper service function.

When you access any path on your service, the Play router takes responsibility for matching it against a path descriptor and delegating the call to the proper method. Processing is based on the Akka infrastructure, a message-driven actor system, which ensures that calls are processed concurrently, reliably and quickly.

Service call declaration example:
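The sketch below shows what such a declaration might look like in the Lagom 1.0 Java DSL; the HelloService name and greeting endpoint are purely illustrative, and the M2 milestone API may differ slightly from the released DSL:

```java
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;

// Illustrative only: a ServiceCall maps a request type to a response type.
// Here the request carries no payload (NotUsed) and the response is a String.
public interface HelloService extends Service {
    ServiceCall<NotUsed, String> hello(String id);
}
```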

Descriptor example:
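And a matching descriptor sketch, again based on the Lagom 1.0 Java DSL with illustrative names:

```java
import com.lightbend.lagom.javadsl.api.Descriptor;
import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;

// Inside the service interface: the descriptor names the service
// and maps path patterns to the declared service calls.
@Override
default Descriptor descriptor() {
    return named("hello").withCalls(
        pathCall("/api/hello/:id", this::hello)
    );
}
```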

When it comes to reliability, by default all failures are handled by Lagom’s exception handler and returned to you as HTTP 500 responses. You can easily write and plug in your own handler if you need to, which is extremely useful if you would like to have total control over your failures.

By implementing your own ExceptionSerializer you can control not only the response codes but also the message body, and even return different responses per accepted content type.

Lagom uses a Persistence module, backed by Cassandra (a scalable, fault-tolerant database), to support your storage. With the Persistence module, Lagom brings two main concepts to the whole platform: Event Sourcing and CQRS.

Event Sourcing is a way of treating your storage as a log. This means you deal with immutable domain events from which you can derive a state. This keeps the implementation much simpler and delivers a clear state history and good write performance.
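The idea can be sketched in plain Java, independent of Lagom’s Persistence API; the RateChanged event and the currentRate fold are purely illustrative names:

```java
import java.math.BigDecimal;
import java.util.List;

// An immutable domain event: "the rate for this currency pair changed".
class RateChanged {
    final String from;
    final String to;
    final BigDecimal rate;
    RateChanged(String from, String to, BigDecimal rate) {
        this.from = from; this.to = to; this.rate = rate;
    }
}

class EventLogSketch {
    // Replaying the log in order derives the current state:
    // the last matching event wins.
    static BigDecimal currentRate(List<RateChanged> log, String from, String to) {
        BigDecimal state = null;              // no rate known yet
        for (RateChanged e : log) {           // events are applied in order
            if (e.from.equals(from) && e.to.equals(to)) {
                state = e.rate;               // later events supersede earlier ones
            }
        }
        return state;
    }
}
```

Because the events themselves are never mutated, the full history stays available, and writes are simple appends.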

CQRS (Command Query Responsibility Segregation) has the advantage of separating read from write. This means we can treat both groups differently e.g. scale them differently or pay more attention to processing on the read side without impacting the write itself and vice versa.

If you don’t want to use Event Sourcing, or you decide you need a different database, you can simply introduce it yourself by using a proper driver.

Your first microservices

For demonstration purposes, we’ve created a new project that converts values between currencies. The project is based on two microservices: the first is responsible for conversion and the second delivers currency values. We should be able to modify the currency values on the fly, and our changes should be reflected in any calls made after the modification.

Let’s see what our calculator declaration looks like:
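Below is a hedged reconstruction of that declaration in the Lagom 1.0 Java DSL; CurrencyValue, BigDecimalSerializer and CalculatorExceptionSerializer are illustrative names rather than verbatim code from our repository:

```java
import java.math.BigDecimal;

import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.api.transport.Method;

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.restCall;

public interface CalculatorService extends Service {

    // GET /api/calculator/exchange?fromValue&fromUnit&toUnit
    ServiceCall<NotUsed, CurrencyValue> calculate(
        BigDecimal fromValue, String fromUnit, String toUnit);

    @Override
    default Descriptor descriptor() {
        return named("calculator").withCalls(
                restCall(Method.GET,
                    "/api/calculator/exchange?fromValue&fromUnit&toUnit",
                    this::calculate)
            )
            // 1. String <-> BigDecimal path param serializer for fromValue
            .withPathParamSerializer(BigDecimal.class, BigDecimalSerializer.INSTANCE)
            // 2. custom exception serializer
            .withExceptionSerializer(new CalculatorExceptionSerializer())
            // 3. auto-generate access control rules for our endpoints
            .withAutoAcl(true);
    }
}
```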

As you can see, we define only one REST endpoint, for the GET method: "/api/calculator/exchange?fromValue&fromUnit&toUnit".

This will allow us to convert an amount of fromValue from the fromUnit currency into the toUnit currency.

Besides the endpoint declaration, we have also used some builder methods on the descriptor:

  1. a path param serializer from String to BigDecimal – this is needed to convert our fromValue parameter when passing it to the calculate method,
  2. a custom exception serializer – this customizes the way we handle requests that end exceptionally,
  3. auto acl set to true – by default Lagom services have no access control rules allowing access to the given resources, so our requests would be denied. Setting auto acl enables automatic generation of access rules for our endpoints. Another possibility would be to define the rules yourself.

Our exchange rates microservice declaration is relatively similar to the previous one:
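A hedged sketch of that declaration follows; RateRequest and RateResponse are illustrative names:

```java
import akka.Done;
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.api.transport.Method;

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.restCall;

public interface ExchangeRateService extends Service {

    // GET /api/exchangerates/:fromUnit/:toUnit
    ServiceCall<NotUsed, RateResponse> getRate(String fromUnit, String toUnit);

    // PUT /api/exchangerates/:fromUnit/:toUnit with body { "rate": 1.23 }
    ServiceCall<RateRequest, Done> setRate(String fromUnit, String toUnit);

    @Override
    default Descriptor descriptor() {
        return named("exchangerates").withCalls(
            restCall(Method.GET, "/api/exchangerates/:fromUnit/:toUnit", this::getRate),
            restCall(Method.PUT, "/api/exchangerates/:fromUnit/:toUnit", this::setRate)
        ).withAutoAcl(true);
    }
}
```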

Here we define two endpoints:

  1. a GET endpoint to retrieve the current ratio for the given from and to units: "/api/exchangerates/:fromUnit/:toUnit",
  2. a PUT endpoint to set a new ratio for the given from and to units: "/api/exchangerates/:fromUnit/:toUnit", which requires a body like { "rate": 1.23 }.

As you can see, our example is simple – based on REST and compile-time validation – but complex enough to show the concept.

Custom parameter serializer

There are times when you would like to customize the way request/response parameters are handled. This can be easily achieved by preparing your own param serializer which makes use of Lagom’s path param serializer concept.
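A sketch of such a serializer, using the required factory from Lagom’s Java DSL (the Serializers class and BIG_DECIMAL field name are our own):

```java
import java.math.BigDecimal;

import com.lightbend.lagom.javadsl.api.deser.PathParamSerializer;
import com.lightbend.lagom.javadsl.api.deser.PathParamSerializers;

public class Serializers {

    // String -> BigDecimal when reading the URL,
    // BigDecimal -> String when building it.
    public static final PathParamSerializer<BigDecimal> BIG_DECIMAL =
        PathParamSerializers.required("BigDecimal", BigDecimal::new, BigDecimal::toString);
}
```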

As you can see, we have used the required factory method, which creates a required parameter serializer for BigDecimal and takes two functions (Scala’s Function1). The first transforms the String URL param into a BigDecimal, and the second does the opposite – it returns the String representation of a BigDecimal.

This serializer allows us to accept path params like ?fromValue=1.23 directly as BigDecimal inputs to our service methods.

Custom exception handler

Lagom gives you an effective default exception-handling mechanism. However, as your ecosystem grows you will probably want to prepare your own implementation. This approach introduces a variety of ways to keep fine-grained control over your stack.
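The two flows described below can be sketched in plain Java, with Lagom’s ExceptionSerializer plumbing stripped away so the sketch stands alone; ServerError here is an illustrative stand-in, not Lagom’s class:

```java
import java.util.concurrent.CompletionException;

// Hypothetical sketch of a two-flow exception-handling policy.
class ExceptionMessageSketch {
    static final String DEFAULT_MESSAGE = "Internal server error";

    // Illustrative stand-in for a domain-level server error.
    static class ServerError extends RuntimeException {
        ServerError(String message) { super(message); }
    }

    static String messageFor(Throwable exception) {
        // Second flow: failed futures arrive wrapped in CompletionException.
        if (exception instanceof CompletionException && exception.getCause() != null) {
            Throwable cause = exception.getCause();
            if (cause instanceof ServerError) {
                return cause.getMessage();   // expose only what we care about
            }
        }
        // First flow: anything else gets the default message.
        return DEFAULT_MESSAGE;
    }
}
```

In a real ExceptionSerializer this message would be wrapped in a RawExceptionMessage together with an error code and protocol.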

Above is my attempt at preparing custom exception handling. In actor systems like this, most exceptions will probably be caused by failed futures (it is common to signal a failure by completing a future exceptionally).

In our implementation we wanted to have two flows. The first is the default flow, which returns the default message. The second processes the CompletionExceptions that come from CompletableFutures: if we can match a ServerError in this flow, we return a custom message – otherwise the default one is returned.

By doing this we expose only the information we really care about.

As you can see, the RawExceptionMessage (the main entity of the exception serializer) allows you to specify:

  1. an error code for a transport layer,
  2. a protocol,
  3. a response message.

As you may have noticed, our implementation is really simple, but it illustrates the core idea of exception serializers very clearly.

Let’s use it:

  1. download our example, unzip it and navigate to the unzipped folder,
  2. run sbt runAll,
  3. access the REST endpoint with the PUT method, filling in the proper values: PUT /api/exchangerates/:fromUnit/:toUnit, e.g. /api/exchangerates/EUR/PLN with the body { "rate": 1.23 },
  4. call the calculation, once more filling in the proper values: /api/calculator/exchange?fromValue&fromUnit&toUnit, e.g. /api/calculator/exchange?fromValue=1&fromUnit=EUR&toUnit=PLN,
  5. you should receive a JSON response with the message { "value": 1.23, "currencyUnit": "PLN" }.
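Assuming the default Lagom dev environment with the service gateway listening on port 9000 (the host and port here are assumptions), the session might look like this:

```shell
# set a new EUR -> PLN rate
curl -X PUT http://localhost:9000/api/exchangerates/EUR/PLN \
     -H "Content-Type: application/json" \
     -d '{"rate": 1.23}'

# convert 1 EUR to PLN
curl "http://localhost:9000/api/calculator/exchange?fromValue=1&fromUnit=EUR&toUnit=PLN"
```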

Congratulations, you have just made good use of our example!

Scala glue code

As we’ve already mentioned, Lagom is pretty young (an M2 artifact) and currently targets Java developers. This is why we needed some glue code to provide more Scala-like syntax while coding our services. This code can be found in the utils project.

The snippet contains some implicit conversions from Scala functions to Java functional interfaces such as BiConsumer, BiFunction and a few more. In some cases it was also necessary to specify the return type or the parameters explicitly, because the type system had problems inferring them (e.g. in ExchangeStorage).

The need for the utils project should disappear once the Scala DSL is ready. At that point, some places in the exchange rates and calculator projects should also become a lot simpler.

Platform or separate libs

You may wonder what the benefits of using Lagom over Akka and Play are.

Lagom is totally focused on microservices. You won’t be able to build a full web application using Lagom alone (which is possible if you use Play); you will probably pair Lagom with AngularJS or another modern front-end framework. When it comes to Akka, Lagom hides the complexity of the actors by exposing only a minimal set of functions such as ask.

Any developer who is new to the microservices concept should be able to use Lagom easily. If you don’t know Scala, or your company isn’t ready to adopt it, you can still use Lagom with the official Java DSL. Add a little ConductR magic (a post on ConductR is coming soon) to handle the dev-ops part for you, and you have a Play & Akka based platform for microservices which can serve as the main tool for a smooth transition from a monolith to a service-based architecture.

Personally, I see another benefit of using Lagom. Imagine a team of Java developers who want to split a monolithic application quickly, but at the same time intend to try the Scala ecosystem. I think that in the future Lagom will provide a Scala DSL, so developers will be able to stay productive with the Java DSL for the old parts while using the Scala DSL for the newly-designed microservices. This will all happen in exactly the same ecosystem and stack, using the same technology and similar documentation but different languages, all of it designed and tested by a well-known company.

Doesn’t that sound great? From a long-term business perspective, it’s a solid argument to persuade any management team to give it a try, explore new areas of knowledge and find the most valuable solutions.

Summary

I believe that the potential Lagom offers will become apparent in the next couple of months. The most important milestones on the way there will be the release of version 1.0 (completing the Java DSL) and the first artifact of the Scala DSL.

I also believe that after those milestones there will be another phase in which the ecosystem is extended even further, for example with additional database support. It is also worth remembering that Lagom is not just a library or a framework. It is a full-blown platform delivering a whole, ready-to-use toolset, which can be a key player when it comes to making decisions about big changes in not-so-small companies.

Links