Improving your project with SBT

I believe that the work on keeping quality high should start from the very beginning of a project. When it comes to the actual implementation, setting up the build configuration is the very first thing one does, and the choice of tools has a huge impact on both the process and its results.

Additionally, the build itself is a program as well (and an important one!), so there is no excuse for avoiding good practices like readability, DRY, SOLID, etc.

That is why in this post I want to write down some good ideas about SBT usage that I’ve learned both in commercial projects and in my own small ones – ideas that help me write better code, keep the build maintainable and improve projects in general.

Basics

In the case of simpler projects, we will find that our project follows a Maven-like layout similar to this one (names are illustrative):
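
```
our-project/
├── build.sbt
├── project/
│   └── build.properties
└── src/
    ├── main/
    │   ├── java/
    │   ├── resources/
    │   └── scala/
    └── test/
        ├── java/
        ├── resources/
        └── scala/
```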

The layout of /src should be obvious to everyone who has ever worked on a project with a Maven-ish directory structure. We have two directories here, /src/main and /src/test, which in turn group source code by language (so Java files would be under a /java subdirectory, Scala files within /scala, etc.) and resources that live in the resources directory (there are exceptions like Android build configuration, but we’ll leave that for another day).

Right now build.sbt and /project are more interesting to us. The former is the most important file, looked up by SBT when we run the sbt command within the our-project directory. /project is kind of a second-class citizen here: we can use it to empower the build.sbt file and to make sure that the version of SBT used to build the project is consistent across all environments.

A simple build definition could look like this one (project name and versions are illustrative):
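
```scala
// build.sbt – a minimal single-module build definition
name := "our-project"

organization := "com.example"

version := "0.1.0-SNAPSHOT"

scalaVersion := "2.12.4"

libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.4" % "test"
```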

What we see here is a DSL created using SBT magic. As a matter of fact, it is a somewhat restricted Scala subset with several implicit imports already made for us. Those properties may look like something mutable, but underneath they are actually immutable values!

If we import the project into our favorite IDE with SBT support, we can check that all of those are actually sbt.SettingKey instances, and that operators like := and ++= are used to create modified copies of those keys.

Those keys are then used underneath as arguments for something similar to project.settings(settings1, settings2), which returns a modified instance of an immutable project. So despite the mutable-looking DSL, everything stays immutable at the core.
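
To illustrate, the definition above corresponds to something along these lines (a conceptual sketch, not the exact desugaring SBT performs):

```scala
// := does not mutate anything: it turns a SettingKey into a Setting value,
// and settings(...) returns a new, modified copy of the immutable Project
lazy val root = Project("our-project", file("."))
  .settings(
    name := "our-project",
    scalaVersion := "2.12.4"
  )
```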

How about modules?

Those pieces of information are quite useful when we consider a multi-project build. What are the reasons to use one? For me personally it’s about keeping things simple: it is easier to work on a project when responsibilities are clearly separated. Because the order of compilation and the direction of dependencies are clearly defined, we can use modules to enforce concepts like layered architecture, hexagonal architecture and (to a degree) bounded contexts.

Of course, it comes at a price: maintenance of such a build can be more complex and (as of now) SBT has trouble with caching dependency resolutions, meaning that checking libraries might take a while. However, I have seen more than once that keeping things tiny and clean is definitely worth it.

As for the issue: SBT developers are trying to address it with an experimental resolution-caching feature. When it comes to snapshots, one can also try to suppress resolution with the offline := true setting.

Simple setup

A basic setup of a multi-project build would look like this (module names and paths are illustrative):
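
```scala
// build.sbt
lazy val moduleA = project
  .in(file("modules/a"))
  .settings(scalaVersion := "2.12.4")

lazy val moduleB = project
  .in(file("modules/b"))
  .settings(scalaVersion := "2.12.4")
  .dependsOn(moduleA)
```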

Of course, we have to make sure that there are modules a and b within the modules directory, each following the same Maven conventions as the singular build described before.

That setup will load an aggregating project on sbt start, named after our directory.

If we want more control over it, we can create it explicitly:
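
```scala
// build.sbt – an explicit root project aggregating both modules
lazy val root = project
  .in(file("."))
  .aggregate(moduleA, moduleB)
```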

Then sbt loads the root project on start, as expected.

Magic names?

Let us stop here for a moment. When we list projects with sbt projects we’ll get:
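
The exact output depends on the SBT version and local paths, but it should look roughly like this:

```
> projects
[info] In file:/home/user/our-project/
[info]     moduleA
[info]     moduleB
[info]   * root
```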

How exactly did SBT determine the names for those? In earlier versions we had to define them explicitly with:
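
```scala
// explicit identifier and base directory for each module
lazy val moduleA = Project("moduleA", file("modules/a"))
lazy val moduleB = Project("moduleB", file("modules/b"))
```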

but currently we can use the project macro, which looks up the name of the val and uses it to populate the module identifier and location:
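
```scala
// the macro derives everything from the val's name...
lazy val moduleA = project                       // id "moduleA", base directory ./moduleA

// ...though the location can still be overridden explicitly
lazy val moduleB = project.in(file("modules/b")) // id "moduleB", custom base directory
```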

Notice that the macro requires a val here. We cannot just pass a reference into some utility function and hope for things to work. As such, project is useful only to initiate the Project definition, which we will customize from then on.

DRY in settings

It is difficult to overlook that something repeats in our configuration:
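
Recall our (illustrative) multi-module definition:

```scala
lazy val moduleA = project
  .in(file("modules/a"))
  .settings(scalaVersion := "2.12.4")   // the same setting repeated...

lazy val moduleB = project
  .in(file("modules/b"))
  .settings(scalaVersion := "2.12.4")   // ...here as well
  .dependsOn(moduleA)
```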

That doesn’t look good and can easily lead to errors. For instance, a moment ago I forgot to copy-paste the scalaVersion line into moduleB. What happened?

Modules A and B were built using different versions of Scala and, as a result, the dependency couldn’t be resolved. This would never happen if settings common to all projects could somehow be shared, right? Let us try to create our first file within the /project directory.
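
A minimal sketch of such a file – the Common name is just a convention:

```scala
// project/Common.scala – settings shared by all modules
import sbt._
import Keys._

object Common {
  val settings: Seq[Def.Setting[_]] = Seq(
    organization := "com.example",
    version      := "0.1.0-SNAPSHOT",
    scalaVersion := "2.12.4"
  )
}
```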

Then we can refer to common settings with:
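
```scala
// build.sbt – reusing the shared settings
lazy val moduleA = project
  .in(file("modules/a"))
  .settings(Common.settings: _*)

lazy val moduleB = project
  .in(file("modules/b"))
  .settings(Common.settings: _*)
  .dependsOn(moduleA)
```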

Settings have the signature def settings(ss: sbt.Project.SettingsDefinition*): sbt.Project, which is the reason we have to use the vararg type ascription : _* to adapt our Seq value.

build.sbt, project/ and modules

Another way of defining settings (better suited for things specific to a module) is… putting another build.sbt in the module’s directory. Personally, I try to keep all common settings and dependencies within project/* and use modules/*/build.sbt for libraries used only in one module. One also has to keep in mind that the project directory can be used only with the root project. In the case of modules it will be ignored.

One can also try to remove the top-level build.sbt completely and instead create a build object like this:
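
A sketch of that style:

```scala
// project/Build.scala – the legacy Build-trait style (see the caveat below)
import sbt._
import Keys._

object OurBuild extends Build {
  lazy val moduleA = project
    .in(file("modules/a"))
    .settings(Common.settings: _*)

  lazy val moduleB = project
    .in(file("modules/b"))
    .settings(Common.settings: _*)
    .dependsOn(moduleA)

  lazy val root = project
    .in(file("."))
    .aggregate(moduleA, moduleB)
}
```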

As a matter of fact, that way of defining modules was (and in older codebases still is) quite popular in a lot of open source projects. However, newer versions of SBT deprecated it, and build.sbt is the only option if we decide on the newest versions.

Refactoring build?

While I have seen some projects rely on the Common.scala approach, I have also seen some (more compelling) where this blob was split into something more self-explanatory, like Dependencies and Settings. For instance, something that I would use in my own project:
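
A sketch of such a split – the library choices and versions below are purely illustrative:

```scala
// project/Dependencies.scala – Scala version, resolvers and libraries in one place
import sbt._

object Dependencies {
  val scalaVersionUsed = "2.12.4"

  val resolvers = Seq(Resolver.sonatypeRepo("releases"))

  // main dependencies
  val catsCore  = "org.typelevel" %% "cats-core" % "1.0.1"
  // test dependencies
  val scalatest = "org.scalatest" %% "scalatest" % "3.0.4"

  val mainDeps = Seq(catsCore)
  val testDeps = Seq(scalatest)
}

// project/Settings.scala – settings kept separate from the dependencies
import sbt._
import Keys._

object Settings {
  val commonSettings: Seq[Def.Setting[_]] = Seq(
    scalaVersion := Dependencies.scalaVersionUsed,

    // scalac options enforcing better quality of the code
    scalacOptions ++= Seq(
      "-unchecked",
      "-deprecation",
      "-feature",
      "-Xfatal-warnings"
    ),

    resolvers ++= Dependencies.resolvers,

    libraryDependencies ++= Dependencies.mainDeps,
    // test frameworks available only in the test scope
    libraryDependencies ++= Dependencies.testDeps map (_ % "test")
  )
}
```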

This way the Scala version (and standard library), dependencies and resolvers are kept in one place, separated from the settings. I’ve separated test dependencies from the main ones to make sure that we won’t rely on unit test frameworks in production (libraryDependencies ++= testDeps map (_ % "test")). I’ve also added some scalac compiler options to enforce better quality of the code.

Testing

While we’re at testing, we can also think about some small improvements. By default we have access to the test task, which runs JUnit/ScalaTest/Specs2/whatever framework is in fashion at the time. But is it enough? Making CI run tests only informs us that no test got broken; it doesn’t say how much of the code is actually checked.

Code coverage tools are a great way to figure out which parts of the codebase should get special attention. When you see that some critical part of your application is severely untested, you might start to worry, and this should motivate you to throw some tests there. Mind that the number itself is meaningless. What would be the point of 100% coverage of a module made entirely of POJOs or plain case classes?

We should use coverage values reasonably – to decide which parts of the code need special attention and which require more testing – but requiring any particular level of coverage should be something that responsible programmers decide on themselves. We all know that any form of coverage forced on developers against their will would just lead to meaningless tests that touch everything and check nothing. ;)

Ad rem. Test coverage in SBT cannot be configured out of the box, but it can be provided via SBT plugins. For this article I’ll use SCoverage, but there are plenty of others to choose from.

First, let’s make sure that everyone running our project will use the same SBT version – similarly to how Scala libraries’ packages are bound to specific Scala versions, SBT plugins are bound to SBT releases. And we want other devs to just run the build, not fight against it. We can pin the SBT version by providing a project/build.properties file with content like:
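
```
# the exact number is an example – pick the version your team standardizes on
sbt.version=0.13.16
```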

Then we can provide plugins for SBT within project/plugins.sbt:
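
```scala
// project/plugins.sbt – the version is an example, check for the current release
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")
```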

From this moment we can access SCoverage settings within our build definitions. In single-module builds coverage would be enabled automatically. In a multi-module build, however, we have to enable it in each module individually:
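
One way to do it, reusing our illustrative modules and the ScoverageSbtPlugin object provided by the plugin:

```scala
lazy val moduleA = project
  .in(file("modules/a"))
  .settings(Settings.commonSettings: _*)
  .enablePlugins(ScoverageSbtPlugin)

lazy val moduleB = project
  .in(file("modules/b"))
  .settings(Settings.commonSettings: _*)
  .enablePlugins(ScoverageSbtPlugin)
  .dependsOn(moduleA)
```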

If we also want to enable coverage measurement by default (which I do NOT recommend, but let’s leave it for now) we can configure it with:
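
```scala
// e.g. in the shared settings – instruments every build with coverage measurement
coverageEnabled := true
```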

Now we can measure coverage by running the following command:
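
```
sbt clean coverage test coverageReport
```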

for a single build or
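
```
sbt clean coverage test coverageReport coverageAggregate
```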

for a multi-project build.

Why that way? Why not with just one command? Well, there are limitations to the tools used to measure coverage. First, they measure coverage within files modified/recompiled since the last rebuild (or at least they appear to). As a result you’ll often get coverage values that make no sense unless you clean the build prior to measurement.

Second, they have to be manually instructed to start measuring – this can be worked around by setting coverageEnabled := true as shown above, but as a side effect running the application with sbt run might cause it to fail, since it will still try to load some coverage dependency that is absent in the normal runtime (which is why I recommend against it, and so does the author of the plugin). The last limitation is the need to manually trigger the generation of the coverage report.

After that you can read the reports under the target/scoverage-report directory. You can also define a minimal coverage for the build to pass on CI, using options like:
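
```scala
// the threshold value is an example – agree on it with your team first
coverageMinimum := 80
coverageFailOnMinimum := true
```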

but as I said, first make sure that your team agrees. It is also worth knowing that coverage of some parts of the code can be disabled with:
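
```scala
// semicolon-separated regexes; the package name here is purely illustrative
coverageExcludedPackages := "<empty>;com\\.example\\.wiring\\..*"
```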

so that all kinds of safe code (case classes, etc.) or code that couldn’t be reasonably tested (once you finish extracting dependencies, you eventually end up with some place where you ultimately gather all the instances and inject them into the components) would not cause any disturbance.

Do it with style

I have seen a few big and successful projects. What they had in common were developers that wanted to keep quality high on every level. That means they all had style guidelines that everyone was obligated to follow – but hardly anyone would try to learn formatting rules by heart!

Instead, each of those projects relied on some automatic formatter that was not subject to opinion or mistake. Simply: you work here, your code will be formatted with X, EOT. That got rid of all discussions about indentation and about where spaces should and shouldn’t go, and with good tests covering a large part of the project, reviewers could actually focus on more important things: whether the code makes sense, whether it will be maintainable in the future, whether it leaves no place for misunderstandings, etc.

That’s why some people consider defining formatter for the project as a rule 0 of project configuration.

What I used with great success was a combination of Scalariform and Scalastyle. The former is a formatter that (by default) runs on each compilation (which means that as long as our developers commit code they actually ran, we have a consistent codebase with no additional effort). The latter is a style checker. As those two don’t cover exactly the same elements of a style guide, they complement each other.

For instance, by default Scalariform might merge some lines into one (it has no sense of a line length limit, which I admit is its greatest weakness); Scalastyle can then catch that and let us know that we need to handle this specific case ourselves.

To use them we start by adding plugins to SBT (again, in project/plugins.sbt):
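
```scala
// project/plugins.sbt – versions are examples, check for the current releases
addSbtPlugin("org.scalariform" % "sbt-scalariform" % "1.8.2")

addSbtPlugin("org.scalastyle" %% "scalastyle-sbt-plugin" % "1.0.0")
```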

(some versions of SBT would break if we hadn’t put those empty lines between plugins). Then Scalariform can be configured with:
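
A sketch using a few of the available preferences (the selection is just an example):

```scala
// build.sbt or a shared settings file
import scalariform.formatter.preferences._

scalariformPreferences := scalariformPreferences.value
  .setPreference(AlignSingleLineCaseStatements, true)
  .setPreference(DanglingCloseParenthesis, Preserve)
  .setPreference(SpacesAroundMultiImports, false)
```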

Scalastyle has a slightly different approach to configuration – it uses a scalastyle-config.xml file. We can generate it with the sbt scalastyleGenerateConfig command and then edit it to our heart’s content. Once we’re done we can check the style with sbt scalastyle.

If you’re as crazy about quality as I am, you would like the build to fail if the style is not up to standards. You can achieve that by configuring:
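
```scala
scalastyleFailOnError := true
```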

and marking all offending warnings as errors within scalastyle-config.xml.

In case something breaks here and you don’t want it to be fixed (because you e.g. don’t agree with the tools on this particular case), you can suppress the tools with:
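
```scala
// scalastyle:off
val intentionallyOffending = ???   // hypothetical code we want Scalastyle to skip
// scalastyle:on
```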

or
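
```scala
// format: OFF
val carefullyAligned = ???   // hypothetical code we don't want Scalariform to reformat
// format: ON
```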

Summary

Here we showed how we can start up (or improve) an SBT project with modules that clearly define the direction of dependencies between its parts, highlight the architecture, and add several tasks that help us keep code quality high. We would just run something like:
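
```
sbt clean coverage test coverageReport coverageAggregate scalastyle
```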

and make sure that tests pass, coverage is high enough and style guidelines are followed, so that code reviewers are able to focus on the important stuff – the things that no automation could check for us.


Authors

Mateusz Kubuszok
