In this post, we will look at how primitive Scala types such as Int and Long are represented down at the bytecode level. This will help us understand the performance effects of using them in generic classes. We will also explore the mechanisms the Scala compiler provides for mitigating such performance penalties.

Furthermore, we will take a look at concrete benchmark results and convince ourselves that boxing/unboxing can have a significant effect on the latency of an application. Read more

In this post I will try to explain what GraphStage in Akka Streams is. My goal is to describe when it’s useful and how to use it correctly. I will start by outlining the key terminology, then proceed with a simple example, and after that cover the main use case. For the latter, the most upvoted issue of akka-http will serve as an example.

At the end, I will show how to properly test a GraphStage. Besides learning the API, you’ll gain a deeper understanding of how backpressure works. Read more

A very common scenario in many kinds of software is when the input data is potentially unlimited and can appear at arbitrary intervals. The common way of handling such cases is using the Observer pattern in its imperative form – callbacks.

But this approach creates what’s commonly called “Callback Hell”. It’s a concept basically identical to the more commonly known “GOTO Hell”, as both mean erratic jumps in the flow of control that can be very hard to reason about and work with. When writing an application we need to analyze all the callbacks to be sure, e.g., that we’re not using a value that can be changed by a callback at a random point in time.

But there exists a declarative approach to solving this problem that lets us reason about it in a much more predictable and less chaotic fashion – Streams. Read more

For some time now Spark has been offering a Pipeline API (available in MLlib module) which facilitates building sequences of transformers and estimators in order to process the data and build a model. Moreover, Spark MLlib module ships with a plethora of custom transformers that make the process of data transformation easy and painless. But what happens if there is no transformer that supports a particular use case? Read more


Part of the success of a modern application is targeting it globally – all over the world. It isn’t possible to run such an application on a single machine, even with the most powerful hardware.

Terms like distributed computing or reactive applications were born in the process of IT globalization. Nowadays, applications run on multiple virtual machines distributed over multiple physical machines, which are often spread around the world. Such applications aren’t easy to maintain.

Every service has different hardware requirements and dependencies, so it has to be deployed and upgraded continuously. In addition, each machine has to be configured in a way that allows communication within the cluster and with external services. Although DevOps teams have helpful deployment tools like Chef, Puppet or Ansible, these tasks still aren’t easy, trust me. Read more


Any application will fail sooner or later. Imperative-style programming usually handles this with side effects: propagating exceptions and handling them later on. This approach introduces statefulness and defers errors to the outer bounds of the application, creating hidden control-flow paths that are difficult to reason about and debug as the code grows. Read more

When operating an Akka cluster the developer must consider how to handle network partitions (Split Brain scenarios) and machine crashes. There are multiple strategies to handle such erratic behavior and, after a deeper explanation of the problem we are facing, I will try to present them along with their pros and cons using the Split Brain Resolver in Akka, which is a part of the Reactive Platform. Read more

Scala is all about type-safety and making the compiler work for you. But what if we need to use SQL, which is not a part of Scala? The compiler is not able to validate and type check raw queries. The solution to that problem is a Domain Specific Language (DSL). We already have Slick, which provides a DSL for SQL and allows us to work with a database just like with Scala collections.

However, Quill is going even further and supports compile-time query generation and validation. In this post I take a closer look at Quill and show an example application. Read more

Back in the day, business used to be much simpler. The only requirements were all-in-one straightforward solutions which usually ended up as monoliths. And because of that, supporting systems used to be much simpler too. However, over time the risk of ending up with a clumsy, too tightly-coupled system became greater and greater.

These days, markets are changing even more rapidly. You either adapt quickly or you go out of business. And software has no choice but to adapt to this new reality.

Changing the way you develop your system means changing your mindset, learning new solutions and applying the best practices. This is exactly why a dedicated platform might be of great help to newcomers.


Here comes Lagom

Lagom is a platform that not only delivers a microservice framework, but also provides a complete toolset for developing applications, as well as creating, managing and monitoring your services. Despite still being a pretty young project (version 1.0.0-M2) which currently targets Java developers, we have decided to give it a try and couple it with the power of Scala.

The Platform is based on popular technologies, mostly from Scala’s ecosystem:

  • sbt (Scala’s build system, project definitions),
  • Play (REST endpoints, Guice dependency injection),
  • Akka (processing),
  • Cassandra (default data storage).

Most of the technology is hidden behind interfaces, and most newcomers won’t need to deal with it directly, although you can explore it if you want to. This is extremely important for developers working on a monolithic J2EE codebase. Lagom will allow them to split a problematic domain easily into different services without having to learn a lot upfront. It also comes with a whole range of improvements – hot redeploy, easier testing and app management.

How is it built?

Since Lagom is built specifically for microservices, the whole concept is by nature synchronous – based on the familiar request-response cycle. Services communicate in a non-blocking way, making the whole app much more efficient. Inter-service communication is done via HTTP, by injecting an API reference and invoking its methods.

Typically, every microservice is divided into 2 parts: API and implementation. API is a formal contract. It tells other developers and teams what the given service can do for them and how to interact with it. Implementation is where the code actually lives. It is strongly decoupled from API so it can be evolved separately, as long as the contract holds.

To build an entry point we use a Service trait that allows us to define external endpoints and ServiceCalls which declare how to transform a request into a response. The process closely resembles the mechanism for preparing a service block and delegating each call to a proper service function.

When you access any path on your service, the Play router takes responsibility for matching it against a path descriptor and delegating the call to the proper method. Processing is based on the Akka infrastructure, the message-driven actor system, which ensures that calls are processed concurrently, reliably and quickly.

Service call declaration example:
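A minimal sketch of what such a declaration can look like, assuming Lagom’s 1.x Java DSL used from Scala (GreetingService and greeting are illustrative names):

```scala
import akka.NotUsed
import com.lightbend.lagom.javadsl.api.{Service, ServiceCall}

// Hypothetical contract: a ServiceCall declares how a request type is
// transformed into a response type. NotUsed marks a call with no request body.
trait GreetingService extends Service {
  def greeting(name: String): ServiceCall[NotUsed, String]
}
```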

Descriptor example:
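A sketch of a matching descriptor, assuming the 1.x Java DSL from Scala (the service name and the greeting method are illustrative, and builder names shifted slightly between milestones):

```scala
import com.lightbend.lagom.javadsl.api.Descriptor
import com.lightbend.lagom.javadsl.api.Service.{named, restCall}
import com.lightbend.lagom.javadsl.api.transport.Method

// Hypothetical: binds the external GET path to a service-call method, so the
// Play router can match incoming requests against this path descriptor.
override def descriptor(): Descriptor =
  named("greetingservice").withCalls(
    restCall(Method.GET, "/api/greeting/:name", greeting _)
  )
```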

When it comes to reliability, by default all failures are handled by Lagom’s exception handler and returned to you as HTTP 500 responses. The exception handler can easily be replaced with your own if you need to. This can be extremely useful if you would like to have total control over your failures.

By implementing your own ExceptionSerializer you can not only control the response codes but also the message body or different responses per accepted types.

Lagom uses a Persistence module – backed by Cassandra, a scalable, fault-tolerant database – to support your storage. With the Persistence module, Lagom brings two main concepts to the whole Platform: Event Sourcing and CQRS.

Event Sourcing is a way of operating on the storage as on a log. This means you deal with immutable domain events from which you can derive a state. This way, the implementation is much simpler and delivers a clear state history and good write performance.
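As a toy sketch of the idea (plain Scala, not Lagom’s persistence API; RateEvent and RateLog are illustrative names): the current state is simply a fold over the immutable event log, with the full history preserved:

```scala
// Hypothetical domain events for an exchange-rate entity.
sealed trait RateEvent
final case class RateSet(from: String, to: String, rate: BigDecimal) extends RateEvent

object RateLog {
  type Rates = Map[(String, String), BigDecimal]

  // The state is never stored directly: it is derived by replaying the
  // immutable event log, so the history of every rate change stays available.
  def replay(events: Seq[RateEvent]): Rates =
    events.foldLeft(Map.empty: Rates) {
      case (state, RateSet(from, to, rate)) => state.updated((from, to), rate)
    }
}
```

Replaying a log with two updates for EUR/PLN yields the later value, while the earlier one is still visible in the log itself.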

CQRS (Command Query Responsibility Segregation) has the advantage of separating read from write. This means we can treat both groups differently e.g. scale them differently or pay more attention to processing on the read side without impacting the write itself and vice versa.

If you want to stop using Event Sourcing or decide you need a different database, you can simply introduce it yourself by using a proper driver.

Your first microservices

For demonstration purposes, we’ve created a new project that converts values between currencies. The project is based on 2 microservices. The first one is responsible for conversion and the second one delivers currency values. We should be able to modify the currency values on the fly and our changes should be reflected in any calls after modification.

Let’s see what our calculator declaration looks like:
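Here is a sketch of what that declaration can look like (Lagom 1.x Java DSL from Scala; CurrencyValue, bigDecimalSerializer and CalculatorExceptionSerializer are illustrative names, and builder signatures shifted between milestones):

```scala
import akka.NotUsed
import com.lightbend.lagom.javadsl.api.{Descriptor, Service, ServiceCall}
import com.lightbend.lagom.javadsl.api.Service.{named, restCall}
import com.lightbend.lagom.javadsl.api.transport.Method

// Response body, e.g. { "value": 1.23, "currencyUnit": "PLN" }
final case class CurrencyValue(value: BigDecimal, currencyUnit: String)

trait CalculatorService extends Service {
  def exchange(fromValue: BigDecimal,
               fromUnit: String,
               toUnit: String): ServiceCall[NotUsed, CurrencyValue]

  override def descriptor(): Descriptor =
    named("calculator").withCalls(
      restCall(Method.GET, "/api/calculator/exchange?fromValue&fromUnit&toUnit", exchange _)
    ).withPathParamSerializer(classOf[BigDecimal], bigDecimalSerializer) // 1. String <-> BigDecimal
     .withExceptionSerializer(new CalculatorExceptionSerializer)        // 2. custom failure handling
     .withAutoAcl(true)                                                 // 3. auto-generated access rules
}
```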

As you can see, we define only one REST endpoint, for the GET method: "/api/calculator/exchange?fromValue&fromUnit&toUnit".

This will allow us to get the fromValue amount, given in the fromUnit currency, converted into the toUnit currency.

Besides the endpoint declaration, we have also used some builder methods for the descriptor:

  1. path param serializer from String to BigDecimal – this is needed to deserialize our fromValue parameter when passing it to the calculate method,
  2. set custom exception serializer – customize the way we handle results that ended exceptionally,
  3. auto acl set to true – by default Lagom services do not have any Access Control Rules allowing access to given resources, so our requests would be denied. By setting auto acl we enable auto-generation of access rules for our endpoints. Another possibility would be to define the rules yourself.

Our exchange rates microservice declaration is relatively similar to the previous one:
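A sketch of what it can look like (again the 1.x Java DSL from Scala; ExchangeRateService, RateRequest, getRate and setRate are illustrative names):

```scala
import akka.NotUsed
import com.lightbend.lagom.javadsl.api.{Descriptor, Service, ServiceCall}
import com.lightbend.lagom.javadsl.api.Service.{named, restCall}
import com.lightbend.lagom.javadsl.api.transport.Method

// Request/response body, e.g. { "rate": 1.23 }
final case class RateRequest(rate: BigDecimal)

trait ExchangeRateService extends Service {
  def getRate(fromUnit: String, toUnit: String): ServiceCall[NotUsed, RateRequest]
  def setRate(fromUnit: String, toUnit: String): ServiceCall[RateRequest, NotUsed]

  override def descriptor(): Descriptor =
    named("exchangerates").withCalls(
      restCall(Method.GET, "/api/exchangerates/:fromUnit/:toUnit", getRate _),
      restCall(Method.PUT, "/api/exchangerates/:fromUnit/:toUnit", setRate _)
    ).withAutoAcl(true)
}
```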

Here we define two endpoints:

  1. a GET endpoint to retrieve the current ratio for the given from and to units: "/api/exchangerates/:fromUnit/:toUnit",
  2. a PUT endpoint to set a new ratio for the given from and to units: "/api/exchangerates/:fromUnit/:toUnit", which requires from us a body like { "rate": 1.23 }.

As you can see, our example is simple – based on REST and default compile-time validations – but complex enough to show the concept.

Custom parameter serializer

There are times when you would like to customize the way request/response parameters are handled. This can be easily achieved by preparing your own param serializer which makes use of Lagom’s path param serializer concept.
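The heart of such a serializer is a pair of conversion functions; below is a sketch in plain Scala, with the Lagom wiring shown as a comment (PathParamSerializers.required follows the 1.x Java DSL, and bigDecimalSerializer is an illustrative name):

```scala
// The two Function1 values the serializer is built from: one deserializes the
// raw String path param into a BigDecimal, the other serializes it back.
val deserialize: String => BigDecimal = s => BigDecimal(s)
val serialize: BigDecimal => String = b => b.toString

// Hypothetical wiring into Lagom's javadsl factory (via the Scala glue code):
// val bigDecimalSerializer =
//   PathParamSerializers.required("BigDecimal", deserialize, serialize)
```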

As you can see, we have used the required factory method, which creates a required path parameter serializer for BigDecimal and takes two functions (Scala’s Function1). The first transforms the String URL param into a BigDecimal (deserializes it) and the second does the opposite – turns a BigDecimal back into its String representation.

This serializer allows us to turn path params like ?value=1.23 into BigDecimals, so we can declare a method that takes a BigDecimal as input.

Custom exception handler

Lagom gives you an effective default exception-handling mechanism. However, as your ecosystem grows you will probably want to prepare your own implementation. This approach introduces a variety of ways to keep fine-grained control over your stack.
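The core decision logic can be sketched in plain Scala (ServerError and messageFor are illustrative names; the real version would live inside an ExceptionSerializer and build a RawExceptionMessage):

```scala
import java.util.concurrent.CompletionException

// Hypothetical domain failure raised by our services.
final case class ServerError(message: String) extends RuntimeException(message)

// Two flows: CompletionExceptions coming from CompletableFutures are unwrapped
// and, if they carry our ServerError, its message is exposed; everything else
// falls back to the default message.
def messageFor(exception: Throwable): String = exception match {
  case ce: CompletionException =>
    ce.getCause match {
      case ServerError(msg) => msg
      case _                => "Internal server error"
    }
  case _ => "Internal server error"
}
```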

Above is my attempt at preparing a custom exception handler. In actor systems like this, most exceptions will probably be caused by failed futures (it is common to signal a failure by completing the future exceptionally).

In our implementation we wanted to have 2 flows: the first is a default flow, which returns the default message. The second processes CompletionExceptions, which come from CompletableFutures. If we can match a ServerError in the second flow, we return a custom message – otherwise the default one will be returned.

By doing this we expose only the information we really care about.

As you can see a RawExceptionMessage (the main entity of the exception serializer) allows you to specify:

  1. an error code for a transport layer,
  2. a protocol,
  3. a response message.

As you may have noticed, our implementation is really simple, but it illustrates the core idea of exception serializers very clearly.

Let’s use it:

  1. download our example, unzip it and navigate to the unzipped folder,
  2. run sbt runAll,
  3. access the REST URL with the PUT method, filling in proper values: PUT /api/exchangerates/:fromUnit/:toUnit, e.g. /api/exchangerates/EUR/PLN with body { "rate": 1.23 },
  4. call the calculation, once more filling in the proper values: /api/calculator/exchange?fromValue&fromUnit&toUnit, e.g. /api/calculator/exchange?fromValue=1&fromUnit=EUR&toUnit=PLN,
  5. you should receive a JSON response with the message { "value": 1.23, "currencyUnit": "PLN" }.

Congratulations, you have just made good use of our example!

Scala’s gluing code

As we’ve already mentioned, Lagom is pretty young (M2 artifact) and currently targets Java developers. This is why we needed some glue code to provide a more Scala-like syntax while coding our services. This code can be found in the utils project.

The snippet contains some implicit conversions from Scala functions to Java functional interfaces such as BiConsumer, BiFunction and a few more. In some cases it was also necessary to specify the return type or the parameters explicitly, because the type system had problems inferring them (e.g. in ExchangeStorage).
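A sketch of what such glue can look like (JavaApiConverters is an illustrative name; the real utils project may differ):

```scala
import java.util.function.{BiConsumer, BiFunction, Function => JFunction}
import scala.language.implicitConversions

// Implicit conversions that let Scala lambdas be passed wherever Lagom's
// Java API expects a Java functional interface.
object JavaApiConverters {
  implicit def toJavaFunction[A, B](f: A => B): JFunction[A, B] =
    new JFunction[A, B] { override def apply(a: A): B = f(a) }

  implicit def toBiFunction[A, B, C](f: (A, B) => C): BiFunction[A, B, C] =
    new BiFunction[A, B, C] { override def apply(a: A, b: B): C = f(a, b) }

  implicit def toBiConsumer[A, B](f: (A, B) => Unit): BiConsumer[A, B] =
    new BiConsumer[A, B] { override def accept(a: A, b: B): Unit = f(a, b) }
}
```

With these in scope, a Scala lambda like `(a: String, b: String) => a + b` can be handed directly to a Java method expecting a BiFunction.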

The need for the utils project should disappear once the Scala DSL is ready. At the same time, some places in the exchange rates and calculator projects should become much simpler.

Platform or separate libs

You may wonder what the benefits of using Lagom over Akka and Play are.

Lagom is totally focused on microservices. You won’t be able to build a web application using only Lagom (which is possible with Play); you will probably use Lagom with AngularJS or another modern front-end framework. When it comes to Akka, Lagom hides the complexity of the actors by exposing only a minimal set of functions such as ask.

Any developer who is new to the microservices concept should be able to use Lagom easily. If you don’t know Scala or your company isn’t ready to adopt it, you can still use Lagom with the official Java DSL. Add a little ConductR magic (a post on ConductR is coming soon) to handle the DevOps part for you, and you have a Play & Akka based platform for microservices, which can be used as the main tool for a smooth transition from a monolith to a service-based architecture.

Personally, I see another benefit of using Lagom. Imagine a team of Java developers, who want to split a monolith application quickly, but at the same time are intending to try the Scala ecosystem. I think in the future Lagom will provide a Scala Lagom plugin so developers will be able to be productive with Java’s DSL for the old parts, but start using Scala’s DSL for the newly-designed microservices. This will happen in exactly the same ecosystem or stack, using the same technology and similar documentation but with different languages, all of them designed and tested by a well-known company.

Doesn’t it sound great? From the long-term business perspective, it’s a solid argument to persuade any business management to give it a try, to explore new areas of knowledge and find the most valuable solutions.


I believe that the potential of Lagom will become apparent in the next couple of months. The most important milestones on that path will be the release of version 1.0 (completing the Java DSL) and the first artifact of the Scala DSL.

I also believe that after those milestones there will be another phase when the ecosystem will be extended even further to include other things such as additional database support. It is also worth remembering that Lagom is not a library or a framework. It is a full-blown platform delivering you a whole, ready-to-use toolset which can be a key player when it comes to making decisions about big changes in not-so-small companies.