A Quick Overview of Slick 3.0

At ScalaC we’ve recently started adopting Slick 3.0. If you haven’t tried it yet, hopefully these notes will make the process go more smoothly.


Let’s start with something simple. While not a revolutionary change, the streamlined approach to configuring the database connection shows effort put into making Slick more pleasant to work with. The configuration can be specified entirely using Typesafe Config. An example application.conf can look like this:
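
A minimal sketch of such a configuration, assuming PostgreSQL (the `mydb` key, the URL, and the credentials below are placeholders):

```hocon
# "mydb" and all values are example assumptions
mydb {
  # the Slick profile to use (note the trailing $ for the Scala object)
  driver = "slick.driver.PostgresDriver$"
  db {
    driver = "org.postgresql.Driver"
    url = "jdbc:postgresql://localhost:5432/mydb"
    user = "user"
    password = "secret"
    numThreads = 10
  }
}
```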

The whole list of configuration options is available in the API docs.

Now all we need is to choose the appropriate config entry:
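
A sketch, assuming a `mydb` entry shaped like the configuration above:

```scala
import slick.backend.DatabaseConfig
import slick.driver.JdbcProfile

// Loads both the Slick profile and the database settings
// from the "mydb" config entry (the key is an assumption):
val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("mydb")
val db = dbConfig.db

// Importing the api from the resolved profile keeps our code
// independent of the concrete database engine:
import dbConfig.driver.api._
```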

It could be even simpler if we decided a priori which database engine to use. Here we are making our application compatible with any database that matches the JdbcProfile. More elaborate multi-DB patterns can be found in this Activator template.

Another feature to note here is that Slick now by default uses HikariCP for connection pooling (it still needs to be provided as a build dependency). You can configure it to your needs, choose to disable it, or provide a third-party connection pool implementation, all via Typesafe Config.


Slick 3.0 has been dubbed “reactive” and, as you might expect, that means querying is now asynchronous. Thus all interactions with a database return futures instead of plain result types. However, there’s also an intermediate type, DBIOAction, which is a monad-like trait wrapping the result. In code these actions are usually referred to by the type alias DBIO and can be processed via the usual combinators like map, flatMap and andThen. Let’s look at an example:
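
A sketch of what this can look like, assuming a `Users` table definition and the `db` object from the configuration section (all names are illustrative):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val users = TableQuery[Users]

// Composing the query is still synchronous; .result turns it
// into a DBIO action describing the database round-trip:
val adultNames: DBIO[Seq[String]] =
  users.filter(_.age >= 18).map(_.name).result

// DBIO is monad-like, so the usual combinators apply:
val firstUpper: DBIO[Option[String]] =
  adultNames.map(_.headOption.map(_.toUpperCase))

// Nothing touches the database until the action is run:
val result: Future[Option[String]] = db.run(firstUpper)
```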

The query API remains mostly the same as in Slick 2.1 (there are some differences with regard to types and improved support for Option and outer joins, but in the usual way of working with Slick they might go unnoticed). When we’re done composing the query, we call the implicit .result on it, which transforms it into a DBIOAction.

The action represents the communication with the database, but that will not happen until it is scheduled for execution with db.run(), which will return a Future with the actual data (db is the object we created from configuration in the previous paragraph).

Queries that perform mutation (updates, inserts, deletes) are already actions – there’s no need to transform them. Actions can be composed together to be run in a single session. The individual actions forming the composite will be executed sequentially, like in the example below:
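
A sketch under the same assumptions (the `Users` table, a `User` case class and `db` are illustrative):

```scala
import slick.jdbc.meta.MTable
import scala.concurrent.ExecutionContext.Implicits.global

// Create the table only if it doesn't exist yet,
// composed via map/flatMap over the metadata query:
val createIfNotExists: DBIO[Unit] =
  MTable.getTables("USERS").flatMap { tables =>
    if (tables.isEmpty) users.schema.create else DBIO.successful(())
  }

// Link further actions with >> (an alias for andThen);
// they run sequentially in a single session:
val setup: DBIO[Int] =
  createIfNotExists >>
    (users += User("Alice", 30)) >>
    (users += User("Bob", 25)) >>
    users.length.result

db.run(setup)
```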

As you can see, several actions are executed in one go. createIfNotExists is composed via map and flatMap transformations and the rest are linked together by >> (which is a shortcut for DBIOAction#andThen).


With actions chained together like this, it should be easy to run them within a single DB transaction, and in fact it is. Consider this example:
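
For instance, a pair of dependent inserts that shouldn’t be persisted separately (the `users`/`addresses` tables and case classes are assumptions):

```scala
import scala.concurrent.ExecutionContext.Implicits.global

// Insert a user, then an address pointing at the generated id;
// neither row makes sense without the other:
val createUserWithAddress: DBIO[Unit] = for {
  userId <- (users returning users.map(_.id)) += User("Alice", 30)
  _      <- addresses += Address(userId, "Main St 1")
} yield ()
```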

All you need to do now is to run this action .transactionally:
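
A sketch, with `composedAction` standing in for any chained DBIO like the one discussed above:

```scala
import scala.concurrent.Future

// Wrapping the action makes Slick execute all of its steps in one
// transaction; any failure rolls the whole thing back:
val safe: Future[Unit] = db.run(composedAction.transactionally)
```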

Plain SQL

One last thing to note about actions is that SQL interpolation using sqlu"..." and sql"...".as[T] also returns actions, and running them is no different from the other examples:
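
A sketch (table and column names are made up for illustration):

```scala
// An update statement built with sqlu returns DBIO[Int]
// (the number of affected rows):
val bumpAges: DBIO[Int] =
  sqlu"update users set age = age + 1 where name = 'Alice'"

// A select mapped to a result type via .as[T]:
val names: DBIO[Seq[String]] =
  sql"select name from users".as[String]

// Both are plain actions, so they compose and run as before:
db.run(bumpAges >> names)
```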

Reactive Streams

Slick 3.0 also supports the Reactive Streams API. Any action that returns a collection can be converted to a DatabasePublisher (which implements org.reactivestreams.Publisher). Such a Publisher can, for example, be used to construct a Flow with Akka Streams:
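
A sketch, assuming the `users` query from earlier and an Akka Streams 2.x-style API:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import slick.backend.DatabasePublisher

implicit val system = ActorSystem("slick-streams")
implicit val materializer = ActorMaterializer()

// Stream rows as they arrive instead of materializing
// the whole result set in memory:
val publisher: DatabasePublisher[String] =
  db.stream(users.map(_.name).result)

// DatabasePublisher implements org.reactivestreams.Publisher,
// so it plugs directly into an Akka Streams Source:
Source.fromPublisher(publisher)
  .map(_.toUpperCase)
  .runForeach(println)
```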

As I’m not going to cover Reactive Streams or Akka Streams here, that’s pretty much all there is to it :)

Type-checked SQL

Another interesting feature introduced in Slick 3.0 is its ability to type check hand-written SQL statements. Here’s how:
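
A sketch (the config path and the `#tsql` fragment are assumptions; they must point at a valid DatabaseConfig entry reachable at compile time):

```scala
import slick.backend.StaticDatabaseConfig
// assumes the profile's api._ is in scope for the tsql interpolator

@StaticDatabaseConfig("file:src/main/resources/application.conf#tsql")
object TypeCheckedQueries {
  // The query is validated against the configured database
  // during compilation, and the result type is inferred:
  def userNames = tsql"select name from users"
}
```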

Things to note here are the use of the tsql interpolator and the StaticDatabaseConfig annotation, which points to the configuration file and the path within it that defines the database used during compilation. Thanks to macro magic, Slick connects to this database to check the queries for correctness with regard to both syntax and types.

As an additional bonus, we don’t need to write .as[String] here (as with the regular sql interpolator); the proper types are inferred by Slick (your IDE might not be so kind).

Now, don’t lose your head (as I nearly did) if it doesn’t work for you. There’s either a bug or an omission in the documentation: for the macro to work, an SLF4J implementation must be provided as a dependency, and if you’re using Logback, it has to be defined in runtime scope (I learned this from this blog post and there’s also an issue reported):
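
In sbt terms that means something like (the version is an example):

```scala
// build.sbt — Logback must be on the classpath in runtime scope
// for the tsql macro to work:
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.3" % "runtime"
```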

So let’s try to break some things to see how it performs:

Bad syntax:
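
For instance, a misspelled keyword (the exact compiler message depends on the database engine):

```scala
// "slect" instead of "select" — this no longer compiles;
// the macro surfaces the database's syntax error at compile time:
tsql"slect name from users"
```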

Incorrect type:
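
Or a type mismatch (the table and column are the running example’s assumptions):

```scala
// Ascribing the wrong element type also fails at compile time,
// since the macro knows `name` is a character column:
val ages: DBIO[Seq[Int]] = tsql"select name from users"
```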

Quite neat. However, I don’t see myself using this feature anytime soon. Making compilation dependent on a running DB instance, together with limited IDE support, is a blocker for me. It might also be hard to get this right if queries create or modify the schema. If you have a different opinion on this, I’d very much welcome it in a comment.


That’s all the major changes I wanted to present in this post. I think we can agree that Slick is heading in the right direction, not only in following the reactive trend, but also in making the API more expressive and consistent, which consequently makes our code look better (it even shows in the imports).

There are of course things to look forward to in subsequent releases. The DSL still has its limitations and the generated SQL is far from perfect (see my previous post, which is still valid for the latest release). According to Stefan Zeiger, query compilation will be improved in Slick 3.1 (see here).

Anyway, at ScalaC we have already been using Slick 3.0 in some of our projects, and so far there have been no major complaints. Since the query DSL hasn’t changed noticeably, the transition is rather easy (which doesn’t necessarily apply to migrating existing projects), which is another reason I definitely recommend it for your next project.

You can check out the code used in this post from our repo.




Radek Tkaczyk

I am a quality-oriented hacker, developing in Scala since 2013
