Why you should test your application under specific circumstances

Nowadays, the internet is a global phenomenon. Most people use smartphone applications for social media, email, games, trip advice, buying tickets for events, city transport and so on. Even elderly people use the Internet. But as user numbers go up, application performance goes down. And sometimes app owners and business analysts fail to predict the correct number of users before releasing the product. This can create problems, for example when an app is designed to serve 100 users at the same time but more users want to use it.

Black Friday curse

There are actually a lot of examples of this. A website selling electronics works well on a daily basis because it was designed for that level of traffic. However, on a shopping holiday such as Black Friday (great discounts), the website goes down: too many users want to buy things and the server simply can’t support them.

A few years ago we saw a similar thing with online voting in Poland, when the website went down. This is why we can now only vote in person, no longer online.

A few words about performance testing

Performance tests are carried out to assess how well a system or module fulfills its performance requirements. Performance testing is a kind of non-functional testing.

There are 8 kinds of tests:

  1. Volume testing
  2. Load testing
  3. Stress testing
  4. Usability testing
  5. Maintainability testing
  6. Reliability testing
  7. Transferability testing
  8. Spike testing

How performance testing works in practice

When a user wants to do something in an application, e.g. open another page on a website, a request is sent from the user’s computer; the server processes this request and sends a response back to the client. Performance testing checks how the system behaves under a given load. When there are more users, our system receives more requests and needs more time to respond, so performance goes down and users have to wait longer. If the server is overloaded, it shuts down or switches into safe mode and the website stops working. Users can no longer access it, which could well mean losing money and clients.
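To make this concrete, here is a small self-contained Python sketch (a simulation, not a real load test: the “server” is just a thread pool with a fixed capacity, and all the timings are made up) showing how the average response time grows once the number of concurrent users exceeds the capacity the system was designed for:

```python
import time
from concurrent.futures import ThreadPoolExecutor

CAPACITY = 10          # the "server" can process 10 requests in parallel
SERVICE_TIME = 0.01    # each request takes ~10 ms of server-side work

def handle_request(_):
    time.sleep(SERVICE_TIME)       # simulated server-side work
    return time.perf_counter()     # when this request finished

def average_response_time(users):
    # All users "arrive" at once; requests beyond capacity queue up and wait.
    arrival = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CAPACITY) as pool:
        finish_times = list(pool.map(handle_request, range(users)))
    return sum(t - arrival for t in finish_times) / users

print(f"10 users:  {average_response_time(10) * 1000:.0f} ms on average")
print(f"100 users: {average_response_time(100) * 1000:.0f} ms on average")
```

With 10 users everything finishes in one “batch”, but with 100 users most requests spend their time waiting in the queue, so the average response time climbs even though the server itself is no slower.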

What performance looks like in statistics (data from 2015):
Amazon: 100 milliseconds better performance – 1% more profit (around 1 million dollars more per year)
Yahoo: 400 milliseconds better performance – 9% more profit (around 15 million dollars more per year)
Mozilla: page loading time reduced by 2.2 seconds – 60 million more browser downloads

Example apps:

  • Jmeter
  • Gatling
  • Grinder
  • LoadRunner
  • LoadUI
  • Loadster

JMeter app:

JMeter is an open-source application for performance testing. It is the best-known application on the above list. It has a nice UI and is very intuitive. It is used successfully to simulate increases in traffic, and it can also generate test results, e.g. in graphic form or as a report. Creating tests with this tool is a pure pleasure.

Advantages of the JMeter tool:

  • Economical – it allows testing automation
  • Transferable – works on platforms supporting Java
  • Versatile – supports various protocols
  • Easy to learn – has extensive documentation
  • Current – is supported and developed
  • Low cost  – it’s free

Rights reserved

With JMeter we can test any application on the Internet, but remember – it’s illegal to test someone’s app on production without permission. So in my tests I will use the Scalac official website. I’ve created a copy of this website in a different environment, and I can restore the server.

How to install JMeter

JMeter runs on Java, so first we have to install Java version 12+. At the moment Java 13.0.1 is the latest and you can download it from this link: https://www.oracle.com/technetwork/java/javase/downloads/index.html
After installation you can confirm it in the console. Type: java -version
The screen is shown below:


Next, download the JMeter app from this link: https://jmeter.apache.org/download_jmeter.cgi
Unzip the rar. file, and open the “bin” folder, then JMeter.exe (for Windows) or JMeter.exec (for Mac)

Let’s do some performance testing

Remember: I’m only giving you an example, you should use your own website

The data we will be using in our tests:

  1. 30 users, 60 seconds, 1 iteration
  2. 150 users, 120 seconds, 1 iteration

First up, we should add a “thread group” to our test plan. Right click on the test plan, then choose “add” -> “threads (users)” -> “thread group”


Fill the fields in the thread:
Name: “Scalac”
Number of threads (users): “30”
Ramp-up period (seconds): “60”
Loop count: “1”

It should look like this:

Next, we have to add “http request defaults” with the GET method in the “Scalac” thread. This request will always open the main page, so in the next steps we don’t have to add the main page, just the next path from the website. Right click on the test plan, then choose “add” -> “config element” -> “http request defaults”

Then fill in the fields:
Name “opening website”

Protocol “https”
Server name or IP “staging8.scalac.io”

This is how it should look in the application


Next add a new http request. Right click on the test plan, then choose “add” -> “sampler” -> “http request”


Now we will add the “what we do” tab from the website to the next request. We can leave the “Server name or IP” field empty, because it is already set in “http request defaults”.
Fill in these fields:
Name: “What we do tab”

Method: “Get”

Path: “/what-we-do/”

Like on this screen:

You can do the same with the other tabs: “about us”, “Careers”, “Blog”. Please don’t add the “contact” tab, because we will be using it later with the POST method. Next, add a summary report to show the results of the test. Right click on the test plan, then choose “add” -> “listener” -> ”summary report”

Now we can start the test. Just click the “start” button:


When the test is finished, we can check the summary report. The average response time across all requests is 559 milliseconds, the minimum is 230 ms, the maximum is 2197 ms, and there are 0% errors :) We also have statistics for the individual tabs.
The first tests are behind us :) Now we will use the POST method to send data to the server (e.g. logging in, registering in the app etc.).
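Incidentally, the numbers in the summary report can be recomputed from a JMeter results file saved as CSV. A short Python sketch (assuming the default CSV result columns, where “elapsed” is the response time in milliseconds; the file path is a placeholder):

```python
import csv
from statistics import mean

def summarize(path):
    """Aggregate response times from a JMeter CSV results file."""
    times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(int(row["elapsed"]))  # response time in ms
    return {
        "samples": len(times),
        "average_ms": mean(times),
        "min_ms": min(times),
        "max_ms": max(times),
    }
```

Calling `summarize("results.csv")` on a saved results file should give the same average/min/max values as JMeter’s summary report.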

Our test case will look like this:
1) Go to the “contact” page
2) Fill in two fields “email”, “message”
3) Send the data from the fields using the POST method

Firstly, we have to check what the developers have called these fields in the structure of the website. To check this, we have to use the console from the browser and send some data.
In the “email” field type “this is email”
In the “message” field type “this is message”

With the console open, click the “estimate my project” button on the website, go to the “network” tab, then go to the “headers” tab in the network section and scroll down. You can see what the developers have called these two fields:
1) “your-email”
2) “your-message”


Go back to the JMeter app and create a new http request. Type:
Name: “Contact tab – post data”
Method: “post”
Path: “/contact/”

Now you have to add the data to send the post request:

1) Email:
In the section “Send parameters with the request” click “add”
In the table “name”, type the name from the website structure which you found earlier in the console:
In the table “value”, type the email you want to send. I have entered “aaa@wp.pl”

2) Message:
In the section “Send parameters with the request”, click “add”
In the table “name”, type the name from the website structure which you found earlier in the console:
In the table “value”, type the message you want to send. I have entered “performance testing”

Look at the screen:

Now you can add “View results tree” to check individual requests and track errors. Right click on the test plan, then choose “add” -> “listener” ->”View results tree”. Now we can start the test. Finally, the report looks like this:

Let’s check it in “view results tree”. We had no errors, but we can look at the details for all the requests:

As you know, automated testing without assertions is pointless. In JMeter we can add many different kinds of assertions. I will show you the response code and duration assertions.

  1. Response code 200:

Right click on the test plan, and choose “add” -> “assertions” -> “response assertion”:

Steps to add the assertion code:

  1. In the “field to test” field, check “response code”
  2. In the “pattern matching rules” field, check “contains”
  3. In the “patterns to test” click “add” and type “200”


Now you can start the test, and you can check that there are no errors: the website is working correctly, so it returns a status code of 200. If it didn’t, there would be assertion errors.

2. Duration assertion

You can set how many milliseconds will be accepted in the test. For example, if you set a 1000 ms duration assertion and a request takes longer, the assertion will fail. Let’s add this assertion to our test plan. Right click on the test plan, and choose “add” -> “assertions” -> “duration assertion”:

“Duration in milliseconds” set to “1000”
Now you can start the test, and open the “view results tree” from the test plan. As you can see, all the requests which took more than 1000 milliseconds have failed, because the added assertion works correctly.
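The logic of these two assertions is simple enough to sketch in a few lines of Python (a toy illustration of what JMeter checks for each sample; the function and argument names here are made up, not JMeter’s):

```python
def check_sample(response_code, elapsed_ms,
                 expected_code="200", max_duration_ms=1000):
    """Return the list of assertion failures for one sample."""
    failures = []
    # Response assertion with the "contains" pattern matching rule
    if expected_code not in response_code:
        failures.append(f"response code {response_code!r} does not contain {expected_code!r}")
    # Duration assertion
    if elapsed_ms > max_duration_ms:
        failures.append(f"took {elapsed_ms} ms, limit is {max_duration_ms} ms")
    return failures

print(check_sample("200", 559))    # passes: []
print(check_sample("200", 2197))   # fails the duration assertion
```

A sample passes only when both lists of checks come back empty, which mirrors how a single request can pass the response assertion yet still fail on duration.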


How to generate a graphical report in JMeter

Graphical reports are more transparent than other reports, because we can see the statistics in a diagram or chart and compare all the tests. It’s not easy in JMeter: to generate a graphical report we have to change the configuration in the user.properties file. You need to copy this file from JMeter’s “bin” folder to a folder of your choice (for example: documents/scalac)

Open this file, go to the “reporting configuration” section, delete some of the “#” characters to uncomment the code, and change the time to 600000 milliseconds. It should look like this:

Next, open the “tools” tab in JMeter, click on the “export transactions result” option and copy the generated code.


Then paste it into the user.properties file.


Now save the user.properties file, then go to the console (terminal) and run this command:

./jmeter -n -t [address to your test plan file] -l [address where jmeter will create the csv file] -q [address to user.properties file] -e -o [folder where you want to create the report]

For example, my code in the console looks like this:

./jmeter -n -t /Users/bartoszkuczera/Documents/scalac/scalac.jmx -l /Users/bartoszkuczera/Documents/scala/scalac.csv -q /Users/bartoszkuczera/Documents/scalac/user.properties -e -o /Users/bartoszkuczera/Documents/scalac/report

Open the report folder, then the index.html file. You will see a lot of options in the graphical report

Go to “charts”, then “over time”. I think this graphical report is the best, because it shows all the tests at the same time


Now you know why performance testing is so important: bad performance increases product and design risk. You also now know how to do performance testing in the JMeter app and how to generate reports from the tests. To finish, here’s a funny meme on the subject of performance :)

Black Friday performance testing

Check out also:

Why do we need orchestration?

As Federico Garcia Lorca once said, besides black art, there is only automation and mechanization. And though for some of you, the title of this article might sound like black magic indeed, I will do my best to give you a sense of what these concepts mean.

Human nature is not as complicated as it may seem. In general, we are not keen on repeating things over and over again. Our laziness naturally brings us closer to the automation of everything that might be automated.

Read more

Introduction to Machine Learning Robustness

Machine Learning models are great and powerful. However, the usual characteristics of regular training can lead to serious consequences in terms of security and safety. In this blog post, we will take a step back to revisit a regular optimization problem using an example of a binary classification. We will show a way to create more robust and stable models that use features that are more meaningful to humans. In our experiments, we will do a simple binary classification to recognize the digits zero and one from the MNIST dataset. 

Firstly, we will introduce why regular training is not perfect. Next, we will briefly sum up what regular training looks like, and then we will outline more robust training. Finally, we will show the implementation of our experiments and our final results (the GitHub repository is here).


Machine Learning models have achieved extraordinary results across a variety of domains such as computer vision, speech recognition, and natural language modeling. Using a training dataset, a model looks for any correlations between features that are useful in predictions. Every deep neural network has millions of weak patterns, which interact and, on average, give the best results. Nowadays, models in use are huge regardless of the domain, e.g. Inception-V4 (computer vision) contains around 55 million parameters, DeepSpeech-2 (speech recognition) over 100 million parameters, and GPT-2 (NLP language model) over 1.5 billion parameters. To feed such big models, we are forced to use unsupervised (or semi-supervised) learning. As a result, we often end up with (nearly) black-box models, which make decisions using tons of well-generalized weak features that are not interpretable to humans. This fundamental property can lead to severe and dangerous consequences for the security and safety of deep neural networks in particular.

Why should we care about weak features?

The key is that they are (rather) imperceptible to humans. From the perspective of security, if you know how to fake weak features in input data, you can invisibly take full control of model predictions. This method is called an adversarial attack. This is based on finding a close perturbation of the input (commonly using a gradient), which crosses the decision boundary, and changes the prediction (sometimes to a chosen target, called targeted attacks). Unfortunately, most of the state-of-the-art models, regardless of the domain (image classification, speech recognition, object detection, malware detection), are vulnerable to this kind of attack. In fact, you do not even need to have access to the model itself. The models are so unstable that a rough model approximation is enough to fool them (transferability in black-box attacks).

Safety is another perspective.

Our incorrect assumption that training datasets reflect the true distribution sometimes comes back to haunt us (intensified by data poisoning). In deep neural networks, changes in distribution can unpredictably trigger weak features. This usually gives a slight decline in performance on average, which is fine. However, this decrease often comes as a result of rare events, in which the model will without a doubt offer wrong predictions (think about incidents regarding self-driving cars).

Regular Binary Classification

Let’s summarize our regular training. The model, based on the input x, makes a hypothesis h(x) = w·x + b to predict the correct target y, where y ∈ {−1, 1} in binary classification. The binary loss function can be simplified to a one-argument function ℓ(z) with z = y·h(x), and we can use the elegant hinge loss ℓ(z) = max(0, 1 − z), which is known as the soft margin in the SVM. To fully satisfy the loss, the model has to not only ideally separate the classes but also preserve a sufficient margin between them (left and right figures, code here).

For our experiment, we are using a simple linear classifier, so the model has only a single weight vector w and a bias b. The landscape of the loss in terms of w for non-linear models is highly irregular (left figure, code here), however in our case it is just a straight line (right figure, code here).

Using a dataset and the gradient descent optimization method, we follow the gradient and look for the model parameters w, b which minimize the loss function:

min_{w,b} E[ max(0, 1 − y·(w·x + b)) ]

Super straightforward, right? This is our regular training. We minimize the expected value of the loss. Therefore, the model is looking for any (even weak) correlations, which improve the performance on average (no matter how disastrous its predictions sometimes are). 
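The post’s experiments use Keras/TensorFlow, but this regular training loop is simple enough to sketch with plain NumPy on toy 2-D data (everything below, data included, is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data: class +1 clustered around (2, 2), class -1 around (-2, -2)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    z = y * (X @ w + b)          # margins y * h(x)
    active = z < 1               # samples still inside the hinge
    # (sub)gradient step on mean(max(0, 1 - y * (w·x + b)))
    w -= lr * (-(y[active, None] * X[active]).sum(axis=0) / len(X))
    b -= lr * (-y[active].sum() / len(X))

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"train accuracy: {accuracy:.2f}")
```

The loop follows the (sub)gradient of the expected hinge loss, exactly the objective above, just without the Keras machinery.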

Towards Robust Binary Classification

As we mentioned in the introduction, machine learning models (deep neural networks in particular) are sensitive to small changes. Therefore, we now allow the input to be perturbed a little bit. We are no longer interested in patterns which concern exclusively the input x, but in the delta neighborhood around x. In consequence, we face the min-max problem and two related challenges.

Firstly, how can we construct valid perturbations δ? We want to formulate a space (an epsilon-neighborhood) around x (figure below, code here) which sustains human understanding of this space. In our case, if a point describes the digit one, then we have to guarantee that each perturbation looks like the digit one. We do not know how to do this formally. However, we can (sure enough) assume that small-norm perturbations ‖δ‖ ≤ ε are correct. In our experiments, we are using the infinity norm ‖δ‖∞ ≤ ε (others are common too). These tiny boxes are neat because valid perturbations are in the range x − ε to x + ε, independent of the dimension.

The second challenge is how to solve the inner maximization problem. Most advanced machine learning models are highly non-linear, so this is tough in general. There are several methods to approximate the solution (a lower or upper bound), which we are going to cover in upcoming blog posts. Happily, in the linear case we can solve it exactly, because the loss directly depends on y·(w·x + b), our simplified loss landscape: max_{‖δ‖∞ ≤ ε} ℓ(y·(w·(x + δ) + b)) = ℓ(y·(w·x + b) − ε·‖w‖₁) (formal details here).

The basic intuition is that we do not penalize high weights which are far from the decision boundary (in contrast to regularization). However, this is far from a complete explanation.

Firstly, the loss does not penalize a classifier for a mistake that is close to the decision boundary (left figure, code here). The error tolerance changes dynamically with regard to the model weights, shifting the loss curve. As opposed to regular training, we do not force a strictly defined margin to be preserved, which sometimes cannot be achieved.

Secondly, the back propagation is different (right figure, code here). The gradient is not only diminished, but also if it is smaller than epsilon, it can even change the sign. As a result, the weights that are smaller than epsilon are gently wiped off. 

Finally, our goal is to minimize the expected value of the loss not only at the input x, but over the entire subspace around x:

min_{w,b} E[ max_{‖δ‖∞ ≤ ε} max(0, 1 − y·(w·(x + δ) + b)) ]
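For a linear model, the worst perturbation under ‖δ‖∞ ≤ ε simply lowers every margin by ε·‖w‖₁ (a standard result), so robust training only changes the loss function. A NumPy sketch with made-up weights and samples:

```python
import numpy as np

def regular_hinge(w, b, X, y):
    return np.maximum(0.0, 1.0 - y * (X @ w + b)).mean()

def robust_hinge(w, b, X, y, eps):
    # Worst case over ||delta||_inf <= eps: every margin drops by eps * ||w||_1
    return np.maximum(0.0, 1.0 - y * (X @ w + b) + eps * np.abs(w).sum()).mean()

w, b = np.array([1.0, -2.0]), 0.5
X = np.array([[0.5, 1.0], [-1.0, 0.3], [2.0, -0.7]])
y = np.array([1.0, -1.0, 1.0])

print("regular:", regular_hinge(w, b, X, y))
print("robust: ", robust_hinge(w, b, X, y, eps=0.1))
```

With eps = 0 the two losses coincide; for eps > 0 the robust loss is never smaller, which is exactly the pressure that wipes out weights smaller than epsilon.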


As we have already mentioned, today we are doing a simple binary classification. Let’s briefly present the regular training (experiment_regular.py). We reduce the MNIST dataset to include only the digits zero and one (transforming the original dataset here). We build a regular linear classifier, an SGD optimizer, and the hinge loss. We work with high-level Keras under TensorFlow 2.0 with eager execution (PyTorch-alike).

In contrast to just invoking the built-in fit method, we build the custom routine to have full access to any variable or gradient. We abstract the train_step, which processes a single batch. We build several callbacks to collect partial results for further analysis.

The robust training is similar. The crucial change is the customized loss, which additionally contains the ε·‖w‖₁ term. More details are in experiment_robust.py.


We do a binary classification to recognize the digits zero and one from the MNIST dataset. We train regular and robust models using the presented scripts. Our regular model achieves super results (the robust models are slightly worse).

We make only a few isolated mistakes (figure below, code here). A dot around a digit causes general confusion. Nevertheless, we have incredibly precise classifiers. We have achieved human performance in recognizing the handwritten digits zero and one, haven’t we?

Not really. We can predict precisely (without overfitting; take a look at the tiny gap between the train and the test results), that’s all. We are absolutely far from human reasoning when it comes to recognizing the digits zero and one. To demonstrate this, we can check the model weights (figure below, code here). Our classifiers are linear, therefore the reasoning is straightforward. You can imagine a kind of single stamp. The decision moves toward the digit one if black pixels are activated (and toward the digit zero if white). The regular model contains a huge number of weak features, which do not make sense to us but generalize well. In contrast, robust models wipe out the weak features (those smaller than epsilon) and stick with more robust and human-aligned features.

Now, we will make several white-box adversarial attacks and try to fool our models. We evaluate the models on perturbed test datasets, in which each sample is moved directly towards the decision boundary. In our linear case, the perturbations can be easily defined as:

x_adv = x − ε·y·sign(w)

where we check out several epsilons (figure below, code here).
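In code, this linear white-box attack and its evaluation can be sketched as follows (toy weights and samples, not the MNIST models from the post):

```python
import numpy as np

def attack(X, y, w, eps):
    # Worst-case perturbation under ||delta||_inf <= eps for a linear model:
    # shift every feature by eps straight against the sample's own class.
    return X - eps * y[:, None] * np.sign(w)[None, :]

def accuracy(w, b, X, y):
    return float(np.mean(np.sign(X @ w + b) == y))

w, b = np.array([1.0, 1.0]), 0.0
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])

clean = accuracy(w, b, X, y)                      # 1.0 on this toy data
fooled = accuracy(w, b, attack(X, y, w, 2.0), y)  # 0.0: both samples flipped
print(clean, fooled)
```

A large enough epsilon pushes every sample across the boundary, which is exactly what the perturbed-test-set evaluation above measures for a range of epsilons.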

As we expected, the regular model is brittle due to its huge number of weak features. Below, we present the misclassified samples which are closest to the decision boundary (their predicted logits are around zero, figure code here). Now we can understand how the perturbed images remain so readable to humans even when the regular classifier has an accuracy of around zero. The regular classifier absolutely does not know what the digit zero or one looks like. In contrast, the robust models are generally confused by the different digit structures; nonetheless, their patterns are more robust and understandable to us.

In the end, we present a slightly different perspective. Take a look at the logit distributions of misclassified samples. We see that the regular model is extremely confident about wrong predictions. In contrast, the robust model (even if it is fooled) is uncertain, because logits tend to be close to zero. Robust models seem to be more reliable (figure code here).


Machine Learning models are great and powerful. However, the characteristics of regular training can lead to serious consequences for the security and safety of deep neural networks in particular. In this blog post, we have shown what simple robust training can look like. This is only a simple binary case, which (we hope) gives more intuition about the drawbacks of regular training, and shows why these problems are so vital. Of course, things are more complex if we want to force deep neural networks to be more robust, because performance rapidly declines and models become useless. Nonetheless, the machine learning community is working hard to popularize and develop the idea of robust machine learning, as this blog post has tried to do.

In the next blog posts, we will present how to achieve more robust deep neural networks, and how they can be super meaningful.





You might also like

You have probably encountered this problem while working with SBT and bigger projects. I’m talking about compilation times and test execution times, in other words, having to wait instead of working. Imagine working with a build tool that rebuilds only what is necessary, using a distributed cache, so if module A is built by one of your team members you won’t have to do it again. Or imagine being able to run builds of different parts of your project in parallel, or run tests only for the affected code that has been changed. Sounds promising right? That’s why, in this tutorial, I will be showing you what Bazel build is and how to set your project up in Scala.

Introduction to Bazel build

Bazel is a build tool from Google which allows you to easily manage builds and tests in huge projects. This tool gives huge flexibility when it comes to the configuration and granularity of the basic build unit: it can be a set of packages, one package, or even just one file. The basic build unit is called a target, and a target is an instance of a rule. A rule is a function that has a set of inputs and outputs; if the inputs do not change, then the outputs stay the same. By having more targets (the disadvantage of this solution is having more build files to maintain), where not all of them depend on each other, more builds can run in parallel. Bazel uses incremental builds, so it rebuilds only the part of the dependency graph that has changed, and runs tests only for the affected parts.

It can distribute build and test actions across multiple machines, and reuse previously cached work, which makes your builds even more scalable.

Bazel can also print out a dependency graph, the results of which can be visualized on this page webgraphviz.com

So if your project takes a lot of time to build, and you don’t want to waste any more time, this tool is what you need. Speed up your compile times, speed up your tests, speed up your whole team’s work!

In this tutorial, we will be using Bazel version 1.0.0.

Project structure

We will be working on a project with this structure:
├── bazeltest
│   ├── BUILD
│   └── src
│       ├── main
│       │   └── scala
│       │       └── bazeltest
│       │           └── Main.scala
│       └── test
│           └── scala
│               └── bazeltest
│                   └── MainSpec.scala
├── dependencies.yaml
└── othermodule
    ├── BUILD
    └── src
        ├── main
        │   └── scala
        │       └── othermodule
        │           └── Worker.scala
        └── test
            └── scala
                └── othermodule
                    └── WorkerSpec.scala

So we have two modules called: bazeltest and othermodule.
Bazeltest will depend on othermodule.

Workspace file setup

Each project has one WORKSPACE file, where we define things like the Scala version and dependencies. If the project directory contains a subdirectory with its own WORKSPACE file, that subdirectory will be omitted from our builds.
To make it work with Scala, let’s take an already prepared boilerplate WORKSPACE file from:

Be aware of the change in rules_scala_version. rules_scala_version is a commit’s SHA, so if you want to use the newest version of the rules, check the GitHub repository and copy-paste the commit’s SHA.
We also have to add at the end of the file:
load("//3rdparty:workspace.bzl", "maven_dependencies")

This will be used by a third-party tool called bazel-deps, but we will come back to this at the next step.

So after the changes:
rules_scala_version = "0f89c210ade8f4320017daf718a61de3c1ac4773" # update this as needed

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "io_bazel_rules_scala",
    strip_prefix = "rules_scala-%s" % rules_scala_version,
    type = "zip",
    url = "https://github.com/bazelbuild/rules_scala/archive/%s.zip" % rules_scala_version,
)

load("@io_bazel_rules_scala//scala:toolchains.bzl", "scala_register_toolchains")
scala_register_toolchains()

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")
scala_repositories((
    "2.12.8",
    {
        "scala_compiler": "f34e9119f45abd41e85b9e121ba19dd9288b3b4af7f7047e86dc70236708d170",
        "scala_library": "321fb55685635c931eba4bc0d7668349da3f2c09aee2de93a70566066ff25c28",
        "scala_reflect": "4d6405395c4599ce04cea08ba082339e3e42135de9aae2923c9f5367e957315a",
    },
))

# bazel-skylib 0.8.0 released 2019.03.20 (https://github.com/bazelbuild/bazel-skylib/releases/tag/0.8.0)
skylib_version = "0.8.0"

http_archive(
    name = "bazel_skylib",
    type = "tar.gz",
    url = "https://github.com/bazelbuild/bazel-skylib/releases/download/{}/bazel-skylib.{}.tar.gz".format(skylib_version, skylib_version),
    sha256 = "2ef429f5d7ce7111263289644d233707dba35e39696377ebab8b0bc701f7818e",
)

load("//3rdparty:workspace.bzl", "maven_dependencies")
maven_dependencies()

If you wish to set a specific Scala version, add code from: https://github.com/bazelbuild/rules_scala#getting-started






In this file, we will setup the Scala rules and everything else that is needed to compile the Scala project.

BUILD files setup

To write BUILD files we will use the following methods:
  1. load – which loads the Bazel Scala rules, and extensions
  2. scala_binary – generates a Scala executable
  3. scala_library – generates a .jar file from Scala source files.
  4. scala_test – generates a Scala executable that runs unit test suites written using the scalatest library.

Start with the BUILD file in the project folder.

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_binary")

scala_binary(
    name = "App",
    deps = ["//bazeltest"],
    main_class = "bazeltest.Main",
)

We have named it App, and it has just one dependency, the bazeltest package. In deps we list our dependencies, which can be our own modules or third-party ones. main_class is our entry point.

In the bazeltest package BUILD file:

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library", "scala_test")

scala_library(
    name = "bazeltest",
    srcs = ["src/main/scala/bazeltest/Main.scala"],
    deps = [
        "//othermodule",
        # target generated by bazel-deps for joda-time (see the dependencies step below)
        "//3rdparty/jvm/joda_time:joda_time",
    ],
    visibility = ["//:__pkg__"],
)

scala_test(
    name = "test-main",
    srcs = ["src/test/scala/bazeltest/MainSpec.scala"],
    deps = [":bazeltest"],
)

Our Main.scala file will use an external third-party dependency, joda date time, and Worker from the othermodule package. In srcs we set our Main.scala file, but it could also be a list of files listed one by one, a matching path pattern (then we use glob), or even a whole package with all its subpackages. In deps we list all the necessary dependencies, so for this example our own othermodule package plus the third-party joda date time. For now it points to the 3rdparty folder, which does not exist yet; this will be created in one of the next steps, so don’t worry. Visibility is used to define which other targets can use this target as a dependency; in this example, we allow the project folder containing the main BUILD file.
Now the BUILD file for othermodule:
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library", "scala_test")

scala_library(
    name = "othermodule",
    srcs = glob(["src/main/scala/othermodule/*.scala"]),
    deps = [],
    visibility = ["//bazeltest:__pkg__"],
)

scala_test(
    name = "test-othermodule",
    srcs = ["src/test/scala/othermodule/WorkerSpec.scala"],
    deps = [":othermodule"],
)
Here we have set up a visibility param to the bazeltest package. So only this package can read from this one. If other packages try to reach this, we will see an error.  


We will use a third-party tool for this: https://github.com/johnynek/bazel-deps
Open the dependencies.yaml file and put this there:
options:
  buildHeader: [
    "load(\"@io_bazel_rules_scala//scala:scala_import.bzl\", \"scala_import\")",
    "load(\"@io_bazel_rules_scala//scala:scala.bzl\", \"scala_library\", \"scala_binary\", \"scala_test\")"
  ]
  languages: [ "java", "scala:2.12.8" ]
  resolverType: "coursier"
  resolvers:
    - id: "mavencentral"
      type: "default"
      url: https://repo.maven.apache.org/maven2/
    - id: "hmrc"
      type: "default"
      url: https://hmrc.bintray.com/releases
  strictVisibility: true
  transitivity: runtime_deps
  versionConflictPolicy: highest

dependencies:
  joda-time:
    joda-time:
      lang: java
      version: "2.10.4"
  # (the remaining Scala dependencies of the original post, with versions
  #  "3.9.0", "10.1.7" and "3.0.8", go here in the same group/artifact form)

replacements:
  org.scala-lang:
    scala-library:
      lang: scala/unmangled
      target: "@io_bazel_rules_scala_scala_library//:io_bazel_rules_scala_scala_library"
    scala-reflect:
      lang: scala/unmangled
      target: "@io_bazel_rules_scala_scala_reflect//:io_bazel_rules_scala_scala_reflect"
    scala-compiler:
      lang: scala/unmangled
      target: "@io_bazel_rules_scala_scala_compiler//:io_bazel_rules_scala_scala_compiler"
(The lang field is always required and may be one of java, scala or scala/unmangled. This is important: if you define an invalid language, errors will occur. The replacements section points given Maven coordinates at internal targets instead of fetching them from Maven.)

Save the project path in an environment variable, for example (working on a Mac): export MY_PROJ_DIR=`pwd`
We will need this in a minute.

  Clone https://github.com/johnynek/bazel-deps and enter the bazel-deps folder. Ensure that this tool uses the same rules_scala commit sha.
Open the WORKSPACE file inside the bazel-deps and look for this:

git_repository(
    name = "io_bazel_rules_scala",
    remote = "https://github.com/bazelbuild/rules_scala",
    commit = "0f89c210ade8f4320017daf718a61de3c1ac4773", # HEAD as of 2019-10-17, update this as needed
)

  The commit is of course what we need to change (if it is different from the rules_scala_version in our WORKSPACE file).

  Then, from inside the bazel-deps folder, run:

  bazel run //:parse generate -- --repo-root "$MY_PROJ_DIR" --sha-file 3rdparty/workspace.bzl --deps dependencies.yaml

  This will resolve the dependencies and generate a 3rdparty folder in your project directory:

INFO: Analyzed target //:parse (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //src/scala/com/github/johnynek/bazel_deps:parseproject up-to-date:
INFO: Elapsed time: 0.168s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
wrote 26 targets in 8 BUILD files

The first run

Before doing the first run, let’s implement our Main and Worker classes.
package bazeltest

import othermodule.Worker
import org.joda.time.DateTime

object Main extends App {
  println("IN MAIN now: " + DateTime.now().plusYears(11))

  val worker = new Worker
  worker.doSomething()

  def status(): String = "OKi"
}

And the Worker class in othermodule:

package othermodule

class Worker {
  def doSomething(): Int = {
    println("Doing something")
    12345
  }

  def pureFunc(): String = "ABC"
}
bazel run //:App

INFO: Analyzed target //:App (1 packages loaded, 2 targets configured).
INFO: Found 1 target...
INFO: From Linking external/com_google_protobuf/libprotobuf_lite.a [for host]:
/Library/Developer/CommandLineTools/usr/bin/libtool: file: bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/io_win32.o has no symbols
INFO: From Linking external/com_google_protobuf/libprotobuf.a [for host]:
/Library/Developer/CommandLineTools/usr/bin/libtool: file: bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf/error_listener.o has no symbols
INFO: From Building external/com_google_protobuf/libprotobuf_java.jar (122 source files, 1 source jar):
warning: -parameters is not supported for target value 1.7. Use 1.8 or later.
Target //:App up-to-date:
INFO: Elapsed time: 52.246s, Critical Path: 23.22s
INFO: 194 processes: 189 darwin-sandbox, 5 worker.
INFO: Build completed successfully, 198 total actions
IN MAIN now: 2030-10-11T11:26:07.533+01:00
Doing something
The first run takes some time because it has to download the dependencies, so don’t worry.

Unit tests

Now let’s write some simple unit tests:
package bazeltest

import org.scalatest._

class MainSpec extends FlatSpec with Matchers {

  "status" should "return OKi" in {
    Main.status() shouldBe "OKi"
  }
}

And the spec for Worker:

package othermodule

import org.scalatest._

class WorkerSpec extends FlatSpec with Matchers {
  val worker = new Worker()

  "do something" should "return 12345" in {
    worker.doSomething() shouldBe 12345
  }

  "pureFunc" should "return ABC" in {
    worker.pureFunc() shouldBe "ABC"
  }
}

And run them: bazel test //bazeltest:test-main
INFO: Analyzed target //bazeltest:test-main (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
Target //bazeltest:test-main up-to-date:
INFO: Elapsed time: 1.047s, Critical Path: 0.89s
INFO: 3 processes: 2 darwin-sandbox, 1 worker.
INFO: Build completed successfully, 4 total actions
//bazeltest:test-main                                                    PASSED in 0.5s

Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 4 total actions

bazel test //othermodule:test-othermodule

INFO: Analyzed target //othermodule:test-othermodule (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
Target //othermodule:test-othermodule up-to-date:
INFO: Elapsed time: 1.438s, Critical Path: 1.29s
INFO: 2 processes: 1 darwin-sandbox, 1 worker.
INFO: Build completed successfully, 3 total actions
//othermodule:test-othermodule                                           PASSED in 0.6s

Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 3 total actions
Now try changing the status method in Main to return "OK" instead of "OKi". Run the tests again: bazel test //bazeltest:test-main
INFO: Analyzed target //bazeltest:test-main (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
FAIL: //bazeltest:test-main (see /private/var/tmp/_bazel_maciejbak/16727409c9f0575889b09993f53ce424/execroot/__main__/bazel-out/darwin-fastbuild/testlogs/bazeltest/test-main/test.log)
Target //bazeltest:test-main up-to-date:
INFO: Elapsed time: 1.114s, Critical Path: 0.96s
INFO: 3 processes: 2 darwin-sandbox, 1 worker.
INFO: Build completed, 1 test FAILED, 4 total actions
//bazeltest:test-main                                                    FAILED in 0.6s

INFO: Build completed, 1 test FAILED, 4 total actions
bazel test //othermodule:test-othermodule

INFO: Analyzed target //othermodule:test-othermodule (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
Target //othermodule:test-othermodule up-to-date:
INFO: Elapsed time: 0.150s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
//othermodule:test-othermodule                                  (cached) PASSED in 0.6s

Executed 0 out of 1 test: 1 test passes.
INFO: Build completed successfully, 1 total action
Bazel sees what has changed and runs tests only for the affected targets. So the test results for othermodule are taken from the cache, and only the bazeltest tests actually run. The test failed because we didn't change the expectation in the Spec file; change the expected result in the test to Main.status() shouldBe "OK". Run the tests again: bazel test //bazeltest:test-main
INFO: Analyzed target //bazeltest:test-main (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
Target //bazeltest:test-main up-to-date:
INFO: Elapsed time: 1.389s, Critical Path: 1.22s
INFO: 2 processes: 1 darwin-sandbox, 1 worker.
INFO: Build completed successfully, 3 total actions
//bazeltest:test-main                                                    PASSED in 0.6s

Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 3 total actions

Dependency graph

We can easily visualize our dependency graph. In the command line, run: bazel query --noimplicit_deps "deps(//:App)" --output graph
digraph mygraph {
  node [shape=box];
  "//:App" -> "//bazeltest:bazeltest"
  "//bazeltest:bazeltest" -> "//bazeltest:src/main/scala/bazeltest/Main.scala"
  "//bazeltest:bazeltest" -> "//3rdparty/jvm/joda_time:joda_time"
  "//bazeltest:bazeltest" -> "//othermodule:othermodule"
  "//othermodule:othermodule" -> "//othermodule:src/main/scala/othermodule/Worker.scala"
  "//3rdparty/jvm/joda_time:joda_time" -> "//external:jar/joda_time/joda_time"
  "//external:jar/joda_time/joda_time" -> "@joda_time_joda_time//jar:jar"
  "@joda_time_joda_time//jar:jar" -> "@joda_time_joda_time//jar:joda_time_joda_time.jar\n@joda_time_joda_time//jar:joda_time_joda_time-sources.jar"
}
Loading: 12 packages loaded

Paste the results into webgraphviz.com to render the graph of our Scala project's Bazel build.

Generate jar

bazel build //:App

INFO: Analyzed target //:App (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:App up-to-date:
INFO: Elapsed time: 0.085s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action

Bazel build: Summary

In this post, we showed what Bazel is, when to use it, and how to create a basic configuration. It can take some time to properly set up complex projects with Bazel, but I guarantee that, in the end, it will speed up the whole team's work.

Useful links

  1. Official Bazel build documentation https://docs.bazel.build/versions/1.0.0/bazel-overview.html
  2. Building Scala with Bazel - Natan Silnitsky https://www.youtube.com/watch?v=K2Ytk0S4PF0
  3. Building Java Applications with Bazel https://www.baeldung.com/bazel-build-tool

There are plenty of frameworks you can base your application on in Scala, and every one offers a different flavor of the language with its own set of patterns and solutions. Whatever your preference, we all ultimately want the same thing: simple and powerful tools enabling us to write easily testable and reliable applications. A new library has recently joined the competition. ZIO, with its first stable release coming soon, gives you a high-performance functional programming toolbox and lowers the entry barrier for beginners by dropping unnecessary jargon. In this blog post, you will learn how to structure a modular application using ZIO.

Designing a Tic-Tac-Toe game

Most command-line programs are stateless and rightfully so, as they can be easily integrated into scripts and chained via shell pipes. However, for this article, we need a slightly more complicated domain. So let’s write a Tic-Tac-Toe game. It will make the example more entertaining while still keeping it relatively simple to follow. Firstly, a few assumptions about our game. It will be a command-line application, so the game will be rendered into the console and the user will interact with it via text commands. The application will be divided into several modes, where a mode is defined by its state and a list of commands available to the user. Our program will read from the console, modify the state accordingly and write to the console in a loop. We’d also like to clear the console before each frame. For each of these concerns we will create a separate module with dependencies on other modules as depicted below:

[Diagram: TicTacToe game module dependencies]

Basic program

The basic building block of ZIO applications is the ZIO[R, E, A] type, which describes an effectful computation, where:

  •  R is the type of environment required to run the effect
  •  E is the type of error that may be produced by the effect
  •  A is the type of value that may be produced by the effect
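For the common combinations of these type parameters, ZIO ships a set of aliases. This sketch lists them as they appear in the ZIO 1.x sources (shown out of context; in the library they live in ZIO's package object):

```scala
// Type aliases provided by ZIO for common cases of ZIO[R, E, A].
type UIO[+A]      = ZIO[Any, Nothing, A]   // no environment, cannot fail
type URIO[-R, +A] = ZIO[R, Nothing, A]     // needs environment R, cannot fail
type Task[+A]     = ZIO[Any, Throwable, A] // no environment, may fail with Throwable
type RIO[-R, +A]  = ZIO[R, Throwable, A]   // needs environment R, may fail with Throwable
type IO[+E, +A]   = ZIO[Any, E, A]         // no environment, typed error E
```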

ZIO was designed around the idea of programming to an interface. Our application can be divided into smaller modules and any dependencies are expressed as constraints for the environment type R. First of all, we have to add the dependency on ZIO to SBT build:

libraryDependencies += "dev.zio" %% "zio" % "1.0.0-RC16"

We will start with a simple program printing the “TicTacToe game!” and gradually expand it.

package ioleo.tictactoe

import zio.{console, App, ZEnv, ZIO}
import zio.console.Console

object TicTacToe extends App {

  val program: ZIO[Console, Nothing, Unit] =
    console.putStrLn("TicTacToe game!")

  def run(args: List[String]): ZIO[ZEnv, Nothing, Int] =
    program.foldM(
        error => console.putStrLn(s"Execution failed with: $error") *> ZIO.succeed(1)
      , _ => ZIO.succeed(0)
    )
}
To make our lives easier, ZIO provides the App trait. All we need to do is implement the run method. In our case, we can ignore the arguments the program is run with and return a simple program printing to the console. The program will be run in DefaultRuntime, which provides the default environment with the Blocking, Clock, Console, Random and System services. We can run this program using SBT: sbt tictactoe/runMain ioleo.tictactoe.TicTacToe.

Testing effects

ZIO also provides its own testing framework with features such as composable assertions, precise failure reporting, out-of-the-box support for effects and a lightweight mocking framework (without reflection). First of all, we have to add the required dependencies and configuration to our SBT build:

libraryDependencies ++= Seq(
  "dev.zio" %% "zio-test" % "1.0.0-RC16" % "test",
  "dev.zio" %% "zio-test-sbt" % "1.0.0-RC16" % "test"
)

testFrameworks := Seq(new TestFramework("zio.test.sbt.ZTestFramework"))

Now, we can define our first specification.

package ioleo.tictactoe

import zio.test.{assert, suite, testM, DefaultRunnableSpec}
import zio.test.environment.TestConsole
import zio.test.Assertion.equalTo

object TicTacToeSpec extends DefaultRunnableSpec(
  suite("TicTacToe")(
    testM("prints to console") {
      for {
        test <- TestConsole.makeTest(TestConsole.DefaultData)
        _    <- TicTacToe.program.provide(new TestConsole {
                  val console = test
                })
        out  <- test.output
      } yield assert(out, equalTo(Vector("TicTacToe game!\n")))
    }
  )
)

In this example, we’re using the TestConsole implementation, which instead of interacting with the real console, stores the output in a vector, which we can access later and make assertions on. Available assertions can be found in the Assertion companion object. For more information on how to use test implementations, see the Testing effects doc.

Building the program bottom-up

One of the core design goals of ZIO is composability. It allows us to build simple programs solving smaller problems and combine them into larger programs. The so-called “bottom-up” approach is nothing new – it has been the backbone of many successful implementations in the aviation industry. It is simply cheaper, faster and easier to test and study small components in isolation and then, based on their well-known properties, assemble them into more complicated devices. The same applies to software engineering. When we start our application, we will land in MenuMode. Let’s define some possible commands for this mode:

package ioleo.tictactoe.domain

sealed trait MenuCommand

object MenuCommand {
  case object NewGame extends MenuCommand
  case object Resume  extends MenuCommand
  case object Quit    extends MenuCommand
  case object Invalid extends MenuCommand
}

Next up, we will define our first module, MenuCommandParser which will be responsible for translating the user input into our domain model.

package ioleo.tictactoe.parser

import ioleo.tictactoe.domain.MenuCommand
import zio.ZIO

import zio.macros.annotation.{accessible, mockable}

@accessible
@mockable
trait MenuCommandParser {
  val menuCommandParser: MenuCommandParser.Service[Any]
}

object MenuCommandParser {
  trait Service[R] {
    def parse(input: String): ZIO[R, Nothing, MenuCommand]
  }
}

This follows the Module pattern, which I explain in more detail on the Use module pattern page in the ZIO docs. The MenuCommandParser is the module, which is just a container for the MenuCommandParser.Service.

Note: By convention, we name the value holding the reference to the service the same as the module, only with the first letter lowercased. This is to avoid name collisions when mixing multiple modules to create the environment.

The service is just an ordinary interface, defining the capabilities it provides.

Note: By convention we place the service inside the companion object of the module and name it  Service . This is to have a consistent naming scheme  <Module>.Service[R] across the entire application. It is also the structure required by some macros in the zio-macros project.

A capability is a ZIO effect defined by the service. While these could be ordinary functions, if you want all the benefits ZIO provides, they should all be ZIO effects. You may also have noticed that I annotated the module with @accessible and @mockable. I will expand on that later; for now, all you need to know is that they generate some boilerplate code which will be useful for testing. Note that to use them we need to add the dependency in the SBT build:

libraryDependencies ++= Seq(
  "dev.zio" %% "zio-macros-core" % "0.5.0",
  "dev.zio" %% "zio-macros-test" % "0.5.0"
)

Next, we can define our  Live implementation as follows:

package ioleo.tictactoe.parser

import ioleo.tictactoe.domain.MenuCommand
import zio.UIO

trait MenuCommandParserLive extends MenuCommandParser {
  val menuCommandParser = new MenuCommandParser.Service[Any] {
    def parse(input: String): UIO[MenuCommand] = ???
  }
}

Though the implementation seems trivial, we will follow Test Driven Development and first, declare the desired behavior in terms of a runnable specification.

package ioleo.tictactoe.parser

import ioleo.tictactoe.domain.MenuCommand
import zio.test.{assertM, checkM, suite, testM, DefaultRunnableSpec, Gen}
import zio.test.Assertion.equalTo
import MenuCommandParserSpecUtils._

object MenuCommandParserSpec extends DefaultRunnableSpec(
  suite("MenuCommandParser")(
      testM("`new game` returns NewGame command") {
        checkParse("new game", MenuCommand.NewGame)
      }
    , testM("`resume` returns Resume command") {
        checkParse("resume", MenuCommand.Resume)
      }
    , testM("`quit` returns Quit command") {
        checkParse("quit", MenuCommand.Quit)
      }
    , testM("any other input returns Invalid command") {
        checkM(invalidCommandsGen) { input =>
          checkParse(input, MenuCommand.Invalid)
        }
      }
  )
)

object MenuCommandParserSpecUtils {

  val validCommands =
    List("new game", "resume", "quit")

  val invalidCommandsGen =
    Gen.anyString.filter(str => !validCommands.contains(str))

  def checkParse(input: String, command: MenuCommand) = {
    val app = MenuCommandParser.>.parse(input)
    val env = new MenuCommandParserLive {}
    val result = app.provide(env)

    assertM(result, equalTo(command))
  }
}

The suite is just a named container for one or more tests. Each test must end with a single assertion, though assertions may be combined with the && and || operators (boolean logic). The first three tests are straightforward input/output checks. The last test is more interesting. We've derived a custom invalid command generator from the predefined Gen.anyString, and we're using it to generate random inputs to prove that all other inputs will yield MenuCommand.Invalid. This style is called property-based testing, and it boils down to generating and testing enough random samples from the domain to be confident that our implementation has the property of always yielding the desired result. This is useful when we can't possibly cover the whole space of inputs with tests, as it is too big (possibly infinite) or too expensive computationally.
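The essence of property-based testing can be sketched without any framework. The sampling below uses plain scala.util.Random and a simplified stand-in for the parser (the names are illustrative, not the project's code):

```scala
import scala.util.Random

object PropertySketch extends App {
  val validCommands = List("new game", "resume", "quit")

  // simplified stand-in for MenuCommandParser.parse
  def parse(input: String): String =
    input match {
      case "new game" => "NewGame"
      case "resume"   => "Resume"
      case "quit"     => "Quit"
      case _          => "Invalid"
    }

  // property: any string that is not a valid command parses to Invalid
  val samples = Iterator
    .continually(Random.alphanumeric.take(Random.nextInt(10)).mkString)
    .filterNot(validCommands.contains)
    .take(100)

  assert(samples.forall(parse(_) == "Invalid"))
  println("property held for 100 random samples")
}
```

A real framework adds shrinking of failing samples and reproducible seeds on top of this basic idea.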

Access helper

In the test suite, we are referring directly to the parse capability via MenuCommandParser.>.parse. This is possible thanks to the @accessible macro we mentioned before. What it does underneath is generate a helper object named >, placed within the module's companion object, with an implementation delegating calls on the capabilities to the environment.

object > extends MenuCommandParser.Service[MenuCommandParser] {

  def parse(input: String) =
    ZIO.accessM(_.menuCommandParser.parse(input))
}

With our tests in place, we can go back and finish our implementation.

def parse(input: String): UIO[MenuCommand] =
  UIO.succeed(input) map {
    case "new game" => MenuCommand.NewGame
    case "resume"   => MenuCommand.Resume
    case "quit"     => MenuCommand.Quit
    case _          => MenuCommand.Invalid
  }

Lifting pure functions into the effect system

You will have noticed that parse is an effect that merely wraps a pure function. Some functional programmers would not lift this function into the effect system, to keep a clear distinction between pure functions and effects in the codebase. However, this requires a very disciplined and highly skilled team, and the benefits are debatable. While this function by itself does not need to be declared as effectful, by making it so we make it dead simple to mock out when testing other modules that collaborate with this one. It is also much easier to design applications incrementally, by building up smaller effects and combining them into larger ones as necessary, without the burden of isolating side effects. This will be particularly appealing to programmers used to an imperative programming style.
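To illustrate the trade-off, here is a sketch contrasting the two signatures (illustrative code, not from the project; it assumes ZIO on the classpath):

```scala
import zio.UIO

object LiftingSketch {
  // pure version: fine on its own, but it cannot be swapped for a mock
  // behind a Service[R] interface without changing its signature
  def normalizePure(input: String): String =
    input.trim.toLowerCase

  // lifted version: the same logic wrapped in an effect, so it slots
  // directly into a ZIO service and composes with other effects
  def normalize(input: String): UIO[String] =
    UIO.succeed(input.trim.toLowerCase)
}
```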

Combining modules into a larger application

In this same fashion, we can implement parsers and renderers for all modes. At this point, all of the basic stuff is handled and properly tested. We can use these as building blocks for higher-level modules. We will explore this by implementing the  Terminal module. This module handles all of the input/output operations. ZIO already provides the  Console module for this, but we’ve now got additional requirements. Firstly, we assume getting input from the console never fails, because, well if it does, we’re simply going to crash the application, and we don’t really want to have to deal with that. Secondly, we want to clear the console before outputting the next frame.

package ioleo.tictactoe.cli

import zio.ZIO
import zio.macros.annotation.{accessible, mockable}

@accessible
@mockable
trait Terminal {
  val terminal: Terminal.Service[Any]
}

object Terminal {
  trait Service[R] {
    val getUserInput: ZIO[R, Nothing, String]
    def display(frame: String): ZIO[R, Nothing, Unit]
  }
}

However, we don’t want to reinvent the wheel. So we are going to reuse the built-in  Console service in our  TerminalLive implementation.

package ioleo.tictactoe.cli

import zio.console.Console

trait TerminalLive extends Terminal {

  val console: Console.Service[Any]

  final val terminal = new Terminal.Service[Any] {
    val getUserInput =
      console.getStrLn.orDie

    def display(frame: String) =
      for {
        _ <- console.putStr(TerminalLive.ANSI_CLEARSCREEN)
        _ <- console.putStrLn(frame)
      } yield ()
  }
}

object TerminalLive {
  val ANSI_CLEARSCREEN: String = "\u001b[H\u001b[2J"
}

We've defined the dependency by adding an abstract value of type Console.Service[Any], which the compiler will require us to provide when we construct the environment that uses the TerminalLive implementation. Note that here again we rely on convention: we expect the service to be held in a variable named after the module. The implementation is dead simple, but the question is... how do we test this? We could use the TestConsole and indirectly test the behavior, but this is brittle and does not express our intent very well in the specification. This is where the ZIO Mock framework comes in. The basic idea is to express our expectations for the collaborating service and finally build a mock implementation of this service, which will check at runtime that our assumptions hold true.

package ioleo.tictactoe.cli

import zio.Managed
import zio.test.{assertM, checkM, suite, testM, DefaultRunnableSpec, Gen}
import zio.test.Assertion.equalTo
import zio.test.mock.Expectation.value
import zio.test.mock.MockConsole
import TerminalSpecUtils._

object TerminalSpec extends DefaultRunnableSpec(
  suite("Terminal")(
    testM("delegates to Console") {
      checkM(Gen.anyString) { input =>
        val app    = Terminal.>.getUserInput
        val mock   = MockConsole.getStrLn returns value(input)
        val env    = makeEnv(mock)
        val result = app.provideManaged(env)
        assertM(result, equalTo(input))
      }
    }
  )
)

object TerminalSpecUtils {
  def makeEnv(consoleEnv: Managed[Nothing, MockConsole]): Managed[Nothing, TerminalLive] =
    consoleEnv.map(c => new TerminalLive {
      val console = c.console
    })
}

There is a lot going on behind the scenes here, so let's break it down bit by bit. The basic specification structure remains the same. We're using the helper generated by the @accessible macro to reference the getUserInput capability. Next, we're constructing an environment that we'll use to run it. Since we're testing the TerminalLive implementation, we need to provide the val console: Console.Service[Any]. To construct the mock implementation, we express our expectations using the MockConsole capability tags. In this case, we have a single expectation: that MockConsole.getStrLn returns the predefined string. If we had multiple expectations, we could combine them using flatMap:

import zio.test.mock.Expectation.{unit, value}

val mock: Managed[Nothing, MockConsole] = (
  (MockConsole.getStrLn returns value("first")) *>
  (MockConsole.getStrLn returns value("second")) *>
  (MockConsole.putStrLn(equalTo("first & second")) returns unit)
)

To refer to a specific method, we use capability tags: simple objects extending zio.test.mock.Method[M, A, B], where M is the module the method belongs to, A is the type of input arguments and B the type of the output value. If the method takes arguments, we have to pass an assertion. Next, we use the returns method and one of the helpers defined in zio.test.mock.Expectation to provide the mocked result. The monadic nature of Expectation allows you to sequence expectations and combine them into one, but the actual construction of the mock implementation is handled by a conditional implicit conversion Expectation[M, E, A] => Managed[Nothing, M], for which you need a Mockable[M] in scope. This is where the @mockable macro comes in handy. Without it, you would have to write all of this boilerplate machinery:

import zio.test.mock.{Method, Mock, Mockable}

object MockConsole {

  // ...
  object putStr   extends Method[MockConsole, String, Unit]
  object putStrLn extends Method[MockConsole, String, Unit]
  object getStrLn extends Method[MockConsole, Unit, String]

  implicit val mockable: Mockable[MockConsole] = (mock: Mock) =>
    new MockConsole {
      val console = new Service[Any] {
        def putStr(line: String): UIO[Unit]   = mock(MockConsole.putStr, line)
        def putStrLn(line: String): UIO[Unit] = mock(MockConsole.putStrLn, line)
        val getStrLn: IO[IOException, String] = mock(MockConsole.getStrLn)
      }
    }
}

The final program

You’ve learned how to create and test programs using ZIO and then compose them into larger programs. You’ve got all of your parts in place and it’s time to run the game. We’ve started with a simple program printing to the console. Now let’s modify it to run our program in a loop.

package ioleo.tictactoe

import ioleo.tictactoe.app.RunLoop
import ioleo.tictactoe.domain.{ConfirmAction, ConfirmMessage, MenuMessage, State}
import zio.{Managed, ZIO}
import zio.clock.Clock
import zio.duration._
import zio.test.{assertM, suite, testM, DefaultRunnableSpec}
import zio.test.Assertion.{equalTo, isRight, isSome, isUnit}
import zio.test.mock.Expectation.{failure, value}
import TicTacToeSpecUtils._

object TicTacToeSpec extends DefaultRunnableSpec(
  suite("TicTacToe")(
    testM("repeats RunLoop.step until interrupted by Unit error") {
      val app  = TicTacToe.program
      val mock = (
        (RunLoop.step(equalTo(state0)) returns value(state1)) *>
        (RunLoop.step(equalTo(state1)) returns value(state2)) *>
        (RunLoop.step(equalTo(state2)) returns value(state3)) *>
        (RunLoop.step(equalTo(state3)) returns failure(()))
      )
      val result = app.either.provideManaged(mock).timeout(500.millis).provide(Clock.Live)
      assertM(result, isSome(isRight(isUnit)))
    }
  )
)

object TicTacToeSpecUtils {
  val state0 = State.default
  val state1 = State.Menu(None, MenuMessage.InvalidCommand)
  val state2 = State.Confirm(ConfirmAction.Quit, state0, state1, ConfirmMessage.Empty)
  val state3 = State.Confirm(ConfirmAction.Quit, state0, state1, ConfirmMessage.InvalidCommand)
}

And change the implementation to call our RunLoop service:

package ioleo.tictactoe

import ioleo.tictactoe.domain.State
import zio.{console, App, UIO, ZIO}

object TicTacToe extends App {

  val program = {
    def loop(state: State): ZIO[app.RunLoop, Nothing, Unit] =
      app.RunLoop.>.step(state).foldM(
          _         => UIO.unit
        , nextState => loop(nextState)
      )

    loop(State.default)
  }

  def run(args: List[String]): ZIO[Environment, Nothing, Int] =
    for {
      env <- prepareEnvironment
      out <- program.provide(env).foldM(
          error => console.putStrLn(s"Execution failed with: $error") *> UIO.succeed(1)
        , _     => UIO.succeed(0)
        )
    } yield out

  private val prepareEnvironment =
    UIO.succeed(
      new app.ControllerLive
        with app.RunLoopLive
        with cli.TerminalLive
        with logic.GameLogicLive
        with logic.OpponentAiLive
        with mode.ConfirmModeLive
        with mode.GameModeLive
        with mode.MenuModeLive
        with parser.ConfirmCommandParserLive
        with parser.GameCommandParserLive
        with parser.MenuCommandParserLive
        with view.ConfirmViewLive
        with view.GameViewLive
        with view.MenuViewLive
        with zio.console.Console.Live
        with zio.random.Random.Live {}
    )
}

I've skipped the details of many services; you can look up the finished code in the ioleo/zio-by-example repository. We don't have to explicitly state the full environment type for our program. It only requires RunLoop, but as soon as we provide RunLoopLive, the compiler will require that we provide the Terminal and Controller services. When we provide the Live implementations of those, they in turn add further dependencies of their own. This way, we build our final environment incrementally with the generous help of the Scala compiler, which will output readable and accurate errors if we forget to provide any required service.


In this blog entry, we’ve looked at how to build a modular command-line application using ZIO. We’ve also covered basic testing using the ZIO Test framework and mocking framework. However, this is just the tip of the iceberg. ZIO is much more powerful and we have not yet touched the powerful utilities for the asynchronous and concurrent programming it provides. To run the TicTacToe game, clone the ioleo/zio-by-example repository and run  sbt tictactoe/run . Have fun!


How to approach problematic areas using simulators and more

It’s no secret that mobile devices are taking over the world. Today, it’s possible to complete basically any operation, anywhere, within a few seconds – all thanks to smartphones. This market is constantly growing, and slowly outrunning laptops, not to mention other stationary devices.

With the growing number of mobile devices on the market, the problem of how to keep up with the needs of users and provide them with high-quality software is also increasing. To meet these demands, we need a specific approach. That’s why testing for mobile apps is a completely different topic than web application testing.

To automate or not to automate? That is the question!

I think testing mobile applications is a good candidate for automation: it often allows you to provide high test coverage. However, it can also be time-consuming and not very profitable, depending on the specifics of a project.

For over a year, my QA team and I tested specific, short advertisements, each of them unique. There were at least 5 of them a day. The space for automation was also limited, covering only basic elements such as detecting html5_interaction, playback of subsequent video elements, and detecting whether the game 'install' button was clicked.

Test coverage

Our tests covered a wide range of devices: iPhones, the entire range of Android phones, and Amazon devices.

The test coverage included:

Division by system:

  • Android from 5.1.1 to the newest (at the time of writing, 10.0)
  • iOS from 9.3.5 to the newest (at the time of writing, 13)

Division by type of devices:

  • Amazon (in this case one device was enough, e.g. Fire HD 8 Tablet)
  • Low-spec devices – e.g. iPhone 5s or Samsung J3
  • High-spec devices – e.g. iPhone 8+ or Samsung Galaxy J7
  • Wide aspect ratios – e.g. Samsung Galaxy S8+ or Google Pixel 2
  • Old iPad and New iPad – e.g. iPad 2 Air old and the new generation
  • Android Tablet – e.g. Samsung Tab S3
  • iPhone X family – these devices generate a lot of visual issues, so they had a separate test case as a device type

What tests were carried out and what were the most common problem areas?

Testing for mobile devices is not the same as testing desktop applications, not only regarding the number of devices but also the methods of testing them and the focus areas.

Testing for mobile apps problem 1: Scaling

We focused on scaling and loading the ad. When the company logo or inscriptions were covered, the issue had high priority. The phone’s notch, for example on the iPhone X or on wide aspect ratio devices such as the Samsung Galaxy S8+, was a big problem (e.g. the notch covering half of the name of the advertised place).

Testing for mobile apps problem 2: iPads

Tests on iPads generated a lot of errors, because iPads can rotate the screen a full 360 degrees. For this reason, there were often problems with images not being fully displayed or the screen not adjusting; sometimes this even resulted in the video stopping or the entire advertisement jamming. The problems were so frequent that iPad fixes on the dev side “ruined” the functionality of other devices, or were simply not feasible. After taking all of the conditions into consideration, especially the challenging time frames for our tests, we decided to lower the priority of iPad fixes.

Testing for mobile apps problem 3: Functional side

Functional tests were performed in various combinations. The most problematic area, it turned out, was “the background issue”. Going back to the app after putting it in the background made some of the mechanisms in the ad fail. Another thing was that functions were failing to shut down after being switched to the background – for example, the music from the video kept playing. This was the most common issue with the videos.

Testing for mobile apps problem 4: Open store

Going to the store or opening dedicated links was also very important. It was quite a challenge to check a specific ad’s availability in a given country: when an item is not available in your country, the Android (Google Play) store will simply display this information. It’s not as easy with the App Store, however. In Apple’s case, you will receive a blank page and no information about what has happened – which is obviously not what we want our users to experience.

Testing for mobile apps problem 5: Performance

Performance tests were carried out using one of the tools for testing mobile apps that I recommend – Charles Proxy – which I will elaborate on later in the article. It can simulate slowing the internet connection down to 512 kbps, but we most often used a 3G profile, which was enough to induce the performance problems we were analyzing.

Tools for testing mobile apps: Charles Proxy

So what is Charles Proxy? According to their website

“Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).”

For me, Charles Proxy helps to monitor requests or to modify the body of a request. For example, Xcode only has a simulator for iOS 9.3, but our tests had to be performed on iOS 9.3.5. So I had to rewrite the rule – and to do that, all I needed was to configure the app’s file and simply change the value in the body of the request.

Note: To use simulators for iOS 9 you must have an Xcode version below 11.1 installed, because from Xcode 11.1 onwards the oldest simulator runtime supported is iOS 10.3.1.

Charles Proxy: How to set it up?

To set up Charles Proxy so it can read traffic between machines, all you need are a few short steps:

  1. Download Charles Proxy from https://www.charlesproxy.com/ If you want to try it out first, there’s a free 30-day trial.
  2. After installing and opening the app, click on the tab Help -> SSL Proxying -> Install Charles Certificate on a Mobile Device or Remote Browser
Testing for mobile apps - Charles Proxy - Certificate

You will then see the name and port of the proxy server, along with some info about installing the certificate from the site chls.pro/ssl on your phone.

Where should you fill this data? It depends on the OS:

Android: Settings -> WiFi -> Manage network settings -> Show advanced options -> Proxy -> Manual -> Enter the Server Name and Port and click Save (example on Samsung Galaxy J3)

iOS: Settings -> Wifi (hold the given network) -> Configure Proxy -> Manually -> Enter Server and Port -> Click “Save” (iPhone 5s example) After entering and saving, go to chls.pro/ssl to download the certificate.

3. The last thing you need to do is enable SSL Proxying with a wildcard location (Proxy -> SSL Proxying Settings -> add * as the location)

Now you can see your traffic. You can also do the thing I mentioned earlier, namely slow down the internet to check the performance of your ad or app – you can find this feature under Throttle Settings in the Proxy tab.


Note: For Android 7+ you need to add the XML file (or ask the developer for it) to your application along with the configuration file that allows you to monitor the connection. You can also find more information on how to do this in the documentation: https://www.charlesproxy.com/documentation/using-charles/ssl-certificates/
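The file the documentation describes is Android’s network security configuration. A minimal sketch (the file name and values follow the standard Android convention and are not taken from this article) that trusts user-installed certificates, such as the Charles one, in debug builds:

```xml
<!-- res/xml/network_security_config.xml (debug builds only) -->
<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <!-- Trust user-installed CAs, e.g. the Charles certificate -->
            <certificates src="user" />
            <certificates src="system" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>
```

The file is then referenced from AndroidManifest.xml via android:networkSecurityConfig="@xml/network_security_config" on the application element.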

The basic test coverage for mobile devices

Buying every possible device to test the hell out of every application is quite expensive and requires you to always be up to date with new devices. If you’re on a tight budget and short on time, you have to consider which devices and systems are the most important for the tests. To decide the priorities, think about which systems are the most popular and then simply – test them. In the case of iOS, most users update to the latest version, and over time Apple drops support for old OS versions. Interesting fact: at this point, on iOS 10.0.2 (e.g. iPhone 6s) there is no application that would allow us to record the screen.

In the table below you can see the usage of all iOS versions:

Testing for mobile apps – iOS adoption trends
Source https://david-smith.org/iosversionstats/

As far as Android is concerned, the picture is not as clear-cut as with iOS versions. There are still devices with Android 3 or 4 that people use on a daily basis. On the plus side, versions of Android usually aren’t too different from each other: when there’s a bug, it’s rarely found in only one version – it usually occurs on most of the other systems too.

When we can’t use physical devices for our tests, we can use tools for testing mobile apps such as the iOS Simulator or the Android Emulator. From my experience, Apple’s Xcode Simulator is a very useful tool. In contrast, testing Android apps this way is much harder, and I would opt for physical devices whenever possible. Why? I’ll explain in a second.


Now, let me tell you about the simulators and emulators that are available with Xcode for iOS and Android Studio for Android.

Simulator for iOS

iOS simulators have many options and really can reflect the real devices. Often, the bugs you find on the simulator match the ones on the physical device. The iOS simulator, just like a physical device, includes a silent switch, volume buttons, a lock button and a home button. We have many types of devices to choose from, and almost every OS version that has ever been released is available.

Tip: To use Charles Proxy for reading traffic from the iOS simulator, we need to enable ‘macOS Proxy’ and install the Charles certificate on the iOS simulator.

So if you don’t want to invest a lot of money in buying all types of devices, Xcode simulators will cover most of the basic tests.

Below there is an example of options in a simulator and other types of devices supported in Xcode:

Android emulator

Unfortunately, emulators in Android Studio generate a lot more issues than the iOS Simulator. In my experience, a lot of the issues found on emulators do not occur on a physical device, so in the case of Android it is better to buy a device or use device farms.

If you want to try it for yourself – to set up a new emulator you need to choose a new hardware type and OS version.

In the Android emulator, there are a lot of options to use, e.g. the health of the battery, location, and type of network. It never hurts to try all these out and see how they work for you. And if you don’t fall in love with it, just like I didn’t, check out Best device farms for iOS and Android


When it comes to testing whole systems, or even a few applications, which for financial reasons and time frames are not profitable to automate, you have to consider what physical phone resources you have, what your base test coverage should be, and which tests are the crucial ones for the application. When you figure these things out, testing will automatically become much more effective – and cost-effective.

Thanks for reading the whole article! I hope it provided a helpful dose of knowledge and helps you to find your feet in an era of the growing popularity of mobile devices and their testing.

Check out some other articles on testing on our Blog:

Introduction to OSI model and TCP/IP for Testers

Most applications out there run on the HTTP protocol, so having a solid understanding of this protocol will make your testing work much more manageable. We explored this in a previous post: What is HTTP protocol – introduction to HTTP for Testers. But there’s more to networks than just HTTP. In this post, we are going to dive deeper into networks by exploring the OSI model.

My main goal in this article is to show you the OSI model and explain how data flows in a network. Then I will go through the differences between the OSI model and TCP/IP. At the end of the article, I will also mention a few protocols used in networks.

But before we get into the details, I should explain some basic terminology.


LAN (Local Area Network) and WLAN (Wireless Local Area Network) 

Networking basics LAN WLAN

LAN is a local network that consists of a group of computers and devices connected via a single physical network (cables). It is limited to a specific geographic area/location.

An excellent example of this kind of network would be a library, office, or home. I don’t think most of us use a LAN in our homes these days, because a LAN connects devices via cables.  Nowadays, our devices are connected wirelessly via WIFI, so we’re talking about WLAN.

WAN (Wide Area Network)

WAN combines numerous sites and covers large geographic regions (connecting physically distant locations). The best example of this is the internet itself – that is, thousands of local networks (LAN / WLAN) connected. 

Another example would be connecting three company offices in different cities. Each office has its LAN. By combining them, we could create the company’s own internal network – WAN.

Networking basics WAN

Differences between IP and MAC address

You have probably already heard of and know something about what an IP is. However, you may not have met the concept of a MAC address. So, let me explain in a few words what an IP is, and then a MAC address, to illustrate the key differences between them.

IP (internet protocol) 

We use IP for communication between different networks (to address and transport data from one network to another). It performs the role of routing, i.e., searches for the fastest route to pass a data packet. An IP address is a logical address – this means that it is allocated depending on which network the device has been connected to. If a device is in two networks, it will have two IP addresses.

MAC address (Media Access Control)

MAC is a physical address, a unique identifier burned into the network card. It identifies a specific device and is assigned by the manufacturer. MAC addresses are used for communication within one network – e.g., in a home network, if you want to connect a computer to a printer or another device, it will use MAC addresses to do that.

Key differences to remember



IP address:

  • Logical address
  • Identifies a connection with a device in the network
  • Assigned by the network administrator or ISP (internet service provider)
  • Used in WAN communication

MAC address:

  • Physical address
  • Identifies the device in the network
  • Assigned by the manufacturer
  • Used in LAN/WLAN communication

OSI model

The OSI model has never been directly implemented – it’s mostly a reference architecture for how data should flow from one application to another through a network. In practice TCP/IP is used, and these days it’s the most popular model. After the OSI model, I will say more about TCP/IP. But it’s good to start with OSI, because it makes some of the concepts easier to understand.

Networking basics OSI model

The OSI model consists of 7 layers divided into two groups:

  • Host layers (implemented on the host side; responsible for accurate data delivery between applications)
  • Media layers (implemented on the network side; responsible for physically moving the data to its destination)

7. Application layer

In this layer, the user directly interacts with applications. It decides which interfaces are used to interact with the network, through the corresponding protocols in this layer.

Examples of such applications are Chrome or Gmail:

  • Chrome uses the HTTP / HTTPS protocol
  • Gmail uses email protocols like SMTP, IMAP.

The applications themselves are not in the application layer – in this layer, there are only the protocols or services that the applications use.

6. Presentation layer

The task of this layer is proper data representation, compression/decompression, and encryption/decryption. This ensures that data sent from the application layer of system X can be read by the application layer of system Y.

5. Session layer

This layer is responsible for creating, managing, and then closing sessions between two applications that want to communicate with each other. 

4. Transport layer

The task of this layer is to make sure that data arrives safely from the sender to the recipient. When it sends data, it breaks it into segments; when it receives data, it reassembles them into a stream of data.

Networking basics Transport Layer

In this layer, two protocols are used: TCP and UDP (later on in the article I’ll say more about these).

3. Network layer

Provides addressing and routing services. It defines which routes connect individual computers and decides how much information to send using one connection or another. Data transferred through this layer are called packets.

The network layer places two addresses in each packet sent:

  • Source address
  • Destination address

This layer is based on IP (internet protocol).

2. Data-link layer

This layer deals with packing data into frames and sending them to the physical layer. It also oversees the quality of the information provided by the physical layer: it recognizes errors related to lost packets and damaged frames and deals with their repair.

1. Physical layer

This is the physical aspect of the network. This applies to cables, network cards, WIFI, etc. It is only used to send logical zeros and ones (bits). It determines how fast the data flows. When this layer receives frames from the data link layer, it changes them to a bitstream.

Encapsulation and decapsulation of data


Encapsulation adds pieces of information to data sent over the network. It occurs when we send data: at each layer, some information is added to our data – the addresses of the sender and recipient, the encryption method, the data format, how the data will be divided and sent, and so on.

Decapsulation occurs when we receive information. It consists of removing the pieces of information collected from the network: at each layer, some info disappears. In the end, the user gets only what he should receive, without the IP, MAC address, etc.
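The two processes can be sketched in a few lines of Python – a toy illustration with made-up header fields, not real packet formats:

```python
# Toy illustration of encapsulation/decapsulation -- not real packet formats.
HEADERS = [
    ("TCP", "src_port=5000;dst_port=80"),           # transport layer: segment
    ("IP", "src=192.168.0.2;dst=93.184.216.34"),    # network layer: packet
    ("ETH", "src_mac=aa:bb;dst_mac=cc:dd"),         # data-link layer: frame
]

def encapsulate(data):
    # Each layer adds its header in front of what it received from above.
    for name, header in HEADERS:
        data = f"[{name} {header}]{data}"
    return data

def decapsulate(data):
    # The receiver strips one header per layer, in reverse order.
    for _ in HEADERS:
        data = data.split("]", 1)[1]
    return data

frame = encapsulate("GET / HTTP/1.1")
print(frame)                # the payload wrapped in TCP, IP and Ethernet headers
print(decapsulate(frame))   # only the original payload remains
```

On the way down the stack the payload is wrapped header by header; on the way up, each layer removes exactly the header its counterpart added.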

Differences between the OSI model and TCP/IP

The TCP/IP model has a similar organization of layers to the OSI model. However, TCP/IP is not as rigorously divided and better reflects the actual structure of the Internet.

Networking basics Differences between OSI model and TCP:IP

In TCP/IP, there are only four layers:

  • Application layer
  • Transport layer
  • Internet layer
  • Network interface layer

The OSI model makes a clear distinction between layers and some concepts. In TCP/IP, it is harder to make this clear distinction and explain these concepts – now you can see why I introduced the OSI model before TCP/IP.

The TCP/IP application layer contains three layers from the OSI model:

  • Application layer
  • Presentation layer
  • Session layer 

The application layer in TCP/IP combines the work of these three layers from the OSI model. In this layer, we have various protocols such as HTTP, DNS, SMTP, and FTP.

The transport and internet layers in TCP/IP work, as I described in the OSI model. But in the next section, I will be revealing more details on how the transport layer protocols (TCP and UDP) work.

The network interface layer in TCP/IP is a combination of two layers from the OSI model (the data link and physical layers). I’m not going to go into the details of this layer: in the OSI model section I described the critical functions of these two layers, and here in TCP/IP these functions are realized in one layer.

Protocols in the TCP/IP model

Internet layer protocols

ARP (Address Resolution Protocol)

Used to identify the MAC address. If the device knows the IP address of the target device, then ARP sends a request to all of the devices in the LAN to search for the MAC address of the device with the given IP. Then the device with that IP sends an ARP response with its MAC address. 

This information is saved in the ARP table. On Windows or macOS, open a terminal and type arp -a; you should then see the ARP table.

In the image below, you can see how this process works when an ARP request matches the IP of the device.

Networking basics

The RARP protocol performs the reverse operation.
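Conceptually, the exchange looks like this toy Python sketch (all addresses are made up – real ARP works with broadcast Ethernet frames, not dictionaries):

```python
# Toy ARP sketch with made-up addresses.
lan_devices = {
    # IP address      -> MAC address (burned into the network card)
    "192.168.0.10": "aa:bb:cc:00:00:01",
    "192.168.0.11": "aa:bb:cc:00:00:02",
}
arp_table = {}  # local cache, like the output of `arp -a`

def arp_request(target_ip):
    # Already cached? No need to broadcast again.
    if target_ip in arp_table:
        return arp_table[target_ip]
    # "Broadcast": ask every device on the LAN whether the IP is theirs.
    for ip, mac in lan_devices.items():
        if ip == target_ip:
            arp_table[target_ip] = mac  # save the reply in the ARP table
            return mac
    return None  # nobody answered

print(arp_request("192.168.0.11"))  # aa:bb:cc:00:00:02, now cached
```

The second lookup for the same IP is answered from the cache, which is exactly why the ARP table exists.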

IP (Internet protocol)

I explained at the beginning of this article what IP is. But I want to make clear that in the TCP/IP model, IP sits in the internet layer. It is also good to add that IP has two versions:

  • IPv4
  • IPv6

The second one was introduced because IPv4 addresses are running out. IPv6 is more efficient, has better routing, and is safer.

ICMP (Internet Control Message Protocol)

This acts as a tool for solving problems. The ICMP reports any communication errors between hosts. ICMP messages can help to diagnose a problem. For example, if the router or host is overloaded, ICMP can send a message to slow down the transfer rate.

ICMP is used in the ping program, which allows the diagnosis of network connections. Ping lets you check if there is a connection between the hosts. It also allows you to measure the number of packets lost and delays in their transmission.

In the terminal, type ping www.scalac.io. After ping, you need to provide the host – you can choose any website. I’m going to check my connection with the Scalac site. To exit ping, use CTRL + C.

Ping sends ICMP packets to the host provided. In my case, I sent 17 packets and received 17 packets back – in this short connection, I didn’t lose any packets. The program also measures the time gap between sending and receiving each packet. At the end, it summarizes the connection and shows us the minimum / average / maximum time gap between sending and receiving packets.
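The closing summary is just arithmetic over the measured round-trip times. A sketch of the same calculation, with made-up sample times:

```python
# Made-up sample round-trip times in milliseconds, as ping would measure them.
rtts_ms = [31.2, 29.8, 30.5, 45.1, 30.0]
sent = 5                 # packets transmitted
received = len(rtts_ms)  # packets that came back

loss_pct = 100.0 * (sent - received) / sent
avg = sum(rtts_ms) / len(rtts_ms)

# The same two summary lines ping prints at the end of a run.
print(f"{sent} packets transmitted, {received} packets received, {loss_pct:.1f}% packet loss")
print(f"round-trip min/avg/max = {min(rtts_ms)}/{avg:.2f}/{max(rtts_ms)} ms")
```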

Transport layer protocols

TCP (Transmission Control Protocol)

TCP is a highly reliable and connection-oriented protocol. It applies the 3-way handshake principle. Before it sends any data, it will first establish a connection.

Networking basics - Transmission Control Protocol

The handshake consists of three steps, made to establish a connection:

  1. SYN – The device sends a message to the server, “I want to connect with you.”
  2. SYN / ACK – When the server receives the message, it will reply that it is ready for communication.
  3. ACK – The device sends confirmation of receiving the response from the server and that it is ready for communication.

The high reliability of TCP comes from the device making sure that the data it sent has been received by the server, and the server in turn making sure that the data sent to you has been received by you. If the server sends 10 data packets and, for some reason, you do not receive one of them and do not confirm its receipt, the server will try to send the lost packet again.

TCP also provides data delivery in order. Each sent packet is numbered. Although packets may still arrive out of order, TCP will arrange them in order before sending them to the application.

To summarize the advantages of TCP:

  • Sets up a connection before sending any data
  • Acknowledges data delivery
  • Retransmits lost data
  • Delivers data in order
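These guarantees are visible from any standard socket API. In this Python sketch (loopback only, so it runs without a network), the operating system performs the SYN / SYN-ACK / ACK exchange the moment connect() is called, and TCP delivers the bytes reliably and in order:

```python
import socket
import threading

# A tiny echo server on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()   # the 3-way handshake completes here
    with conn:
        conn.sendall(conn.recv(1024))  # echo the bytes back, in order

threading.Thread(target=serve_once).start()

# connect() triggers SYN / SYN-ACK / ACK under the hood.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello over TCP")  # TCP numbers and acknowledges the segments
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello over TCP'
```

None of the handshake or acknowledgment machinery appears in the code – that is exactly TCP’s job.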

UDP (User Datagram Protocol)

UDP sends data and doesn’t care whether the device has received it or not. It also doesn’t care if some packets are lost. But the significant advantage of the User Datagram Protocol is that its packets are smaller than TCP’s (about 60% lighter).

Networking basics connection

UDP is an economical version of TCP. 

  • Connectionless and unreliable.
  • No data retransmission
  • No data delivery acknowledgment
  • Data can arrive out of order

You may ask the question, then why use UDP? It’s such an unreliable protocol!

In some cases, UDP is better because TCP has significant overheads (data retransmission, delivery acknowledgment, etc.) UDP is often used to transmit data in real-time: video streaming or audio such as Skype calls.
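For contrast, here is a UDP exchange in the same style – no connection is established and the sender receives no acknowledgment (a loopback sketch; real streaming setups are of course more involved):

```python
import socket

# Receiver: bind a UDP socket to a free loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect(), no handshake -- just fire the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", ("127.0.0.1", port))

# The sender never learns whether this arrived; on loopback it does.
data, addr = receiver.recvfrom(1024)
print(data)  # b'frame-1'
sender.close()
receiver.close()
```

If the datagram had been dropped along the way, nothing in UDP would notice or retry – which is precisely the trade-off that makes it cheap.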

Application layer protocols

Network management protocols

DNS (Domain Name Services) – Changes a domain name to an IP address. Domain names are used because they are human-friendly: it’s easier to remember a domain name (www.google.com) than an IP address. When you type any website address into a browser, the browser sends a request to the DNS for the IP address of that domain.

Networking basics management protocols

If you type that IP address into a browser, you should see the Google page, because it is Google’s IP address. I can get it directly by querying the DNS in the terminal: type nslookup www.google.com.
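You can make the same kind of name-to-address query from Python’s standard library. This sketch resolves localhost so it works even offline – substitute any real domain (e.g. www.google.com) to go through actual DNS:

```python
import socket

def resolve(name):
    # Ask the system resolver for the IPv4 addresses of a host name.
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # ['127.0.0.1']
```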

NTP (Network Time Protocol) – This is an uncomplicated and straightforward protocol used for automatic time synchronization on devices connected to a network. Imagine manually synchronizing the time on 10 or 50 devices – this would be very inefficient.

Some devices, procedures, or safety mechanisms require accurate time synchronization for proper operation. Thanks to NTP, finding the causes of network or device errors is also easier, because from the logs we can work out the order of events that caused the failures.

SNMP (Simple Network Management Protocol) – This is used for monitoring, management of updates, and diagnostics of networks and network devices.

Remote authentication protocols

SSH (Secure Shell) – This allows you to remotely log in to the terminal of network devices and administer them (e.g. a router, firewall, or remote server). SSH is secure because communication is encrypted. SSH uses the TCP protocol.

File transfer protocols

FTP (File Transfer Protocol) – The purpose of this protocol is to list files/folders and to add, delete, or download them from a server. A good example is sending website files to a server. To do this, you need an FTP client with which you can authenticate yourself and get access to the FTP server. A popular FTP client is FileZilla. FTP uses TCP.

A significant flaw of FTP is its lack of data encryption. Therefore, to ensure secure authentication and file transfer, it is worth using FTPS (FTP Secure, also known as FTP-SSL) or SFTP (SSH File Transfer Protocol). They work in the same way as FTP but extend its functionality by encrypting the transmitted data.

Email protocols

SMTP (Simple Mail Transfer Protocol) and IMAP (Internet Message Access Protocol) are two protocols used for sending and receiving emails. SMTP’s task is to send email messages from a client to an email server, or between email servers. IMAP is used to manage and retrieve email messages from an email server.


This image shows an example when a sender (hubert@gmail.com) and a recipient (jacek@wp.pl) have different email service providers.

  1. In the beginning, the email message is sent to the sender’s email server (Gmail)
  2. Then the Gmail email server sends an email message to the recipient’s email server  (WP)
  3. Finally, IMAP retrieves the email message from the WP email server to our client.

When the sender and recipient have the same email service provider (Gmail), step 2 will be skipped.
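Step 1 can be sketched with Python’s standard library. The addresses come from the example above, the SMTP server name is made up, and the sending part is commented out so nothing is actually transmitted:

```python
import smtplib  # would carry the message to the sender's email server
from email.message import EmailMessage

# Compose the message that SMTP would hand to the Gmail server.
msg = EmailMessage()
msg["From"] = "hubert@gmail.com"
msg["To"] = "jacek@wp.pl"
msg["Subject"] = "Hello over SMTP"
msg.set_content("Sent from the client to the Gmail server, then on to WP.")

# Hypothetical server and credentials -- uncomment to actually send:
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     server.starttls()               # encrypt the connection
#     server.login("user", "password")
#     server.send_message(msg)

print(msg["To"])  # jacek@wp.pl
```

Retrieving the message on the other side would be the job of an IMAP client (Python’s stdlib imaplib plays that role).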

Browser protocols

HTTP/HTTPS – I have written a separate article on HTTP. You can read it here: What is HTTP protocol – introduction to HTTP for Testers. I explain there exactly how HTTP works. HTTPS extends HTTP functionality with data encryption protocols.

VoIP protocols (Voice over IP)

SIP (Session Initiation Protocol) – This performs an administrative function (using TCP). It is used only to set up and close an audio or video connection.

RTP (Real-Time Transport Protocol) – This is used to transfer data during audio or video calls (using UDP).

For example, let’s say you want to call someone on Skype. SIP will be used to establish the connection. When the connection is established, the RTP springs into action and transmits the data. When you end the conversation, SIP will close the connection.


You have come to know many new concepts today, and you now know how data flows in networks – it goes through a rather complicated process. All of the topics I have touched on are so extensive that each could easily have a separate article to itself. However, I have tried to present them at a fairly general, easy-to-understand level, without going too deeply into the more technical aspects.

If you think I have managed to explain things understandably and interestingly, please share this article on social media. And if you have any questions, also feel free to ask them in the comments below.

How to upgrade Angular (JS) to Angular 7?

You must have been asleep (for a few years at least) because Angular 7.x is already here and you’re still stuck on AngularJS. Or maybe it’s just that your codebase is so damn large you can’t face wasting years of your life rewriting it all at once? 

Fear not! It’s not that bad. You can (and should!) upgrade to Angular 7 step by step – or you can do it while leaving your old code in place, running the latest Angular and shipping new features at the same time.

But how? Simple. By choosing the Hybrid Application path. This means we will be able to expand our old app with the new Angular, running both frameworks at the same time.

In my opinion, the best way to understand any tool is to dig into the code itself. So I have prepared a small (really, the bare minimum!) repository to use as an Angular upgrade example to show off the process of upgrading.


Well, actually there aren’t that many prerequisites that your project must meet to be able to upgrade. All the in-depth details are, as always, in the Angular Docs.

Also, there are things you can do if you like but aren’t really obliged to; some just help you get through the upgrading process (like sticking to the style guide).

To name a few:

  1. Following the AngularJS Style Guide 
  2. Using a Module Bundler: but who doesn’t these days?
  3. TypeScript: this is a must, but you probably already know that
  4. Component Directives: just a good step forward you should already be using if you’re on AngularJS >= 1.5

UpgradeModule to the rescue

Have I mentioned the hybrid app already? Because this seems like a good moment to put in a few more words on it.

An Angular hybrid app is just like any other, but it comes packed with two versions of Angular: the good ol’ AngularJS and one of the latest Angular versions (you’re probably reading this with somewhere around Angular 7 / 8 out there, or maybe even 9? 10…?).

Angular comes with a handy UpgradeModule which will help us bootstrap the hybrid application.

The next part of this post will cover the tools inside this module to help you bootstrap, upgrade/downgrade the components.


While we usually bootstrap an AngularJS app automagically from HTML, like this:

<!doctype html>
<html ng-app="app">
  <meta charset="UTF-8">
  <title>ng Hybrid App</title>

working with a hybrid application requires manually bootstrapping two modules, one for each ng version (more on this below). So it’s necessary to replace the automatic bootstrap in your ng1.x app with a manual one:

import * as angular from 'angular';

import { AppComponent } from './components/app/app.component';
import { WhatAmIComponent } from './components/whatAmI/whatAmI.component';

angular
  .module('app', [])
  .component(AppComponent.selector, new AppComponent())
  .component(WhatAmIComponent.selector, new WhatAmIComponent());

angular.bootstrap(document, ['app']);

Who doesn’t like more npm dependencies?

First of all, let’s introduce some Angular dependencies into our project, namely:

  • @angular/core
  • @angular/common
  • @angular/compiler
  • @angular/platform-browser
  • @angular/platform-browser-dynamic
  • @angular/upgrade
  • rxjs

These aren’t all Angular comes with, but they’re enough to do an upgrade. You’ll probably install some more while working with the code e.g. @angular/router, @angular/forms and so on.

Besides Angular itself, we’ll need some polyfills:

  • core-js
  • zone.js

You’ll find the current state of the package.json here: https://github.com/kamil-maslowski/ng-hybrid-app/blob/upgrade-packages/package.json.

{
  "name": "hybrid-app",
  "version": "0.0.1",
  "description": "",
  "scripts": {
    "build": "webpack --config webpack.config.js"
  },
  "author": "Kamil Maslowski",
  "license": "ISC",
  "dependencies": {
    "@angular/common": "^7.2.12",
    "@angular/compiler": "^7.2.12",
    "@angular/core": "^7.2.12",
    "@angular/platform-browser": "^7.2.12",
    "@angular/platform-browser-dynamic": "^7.2.12",
    "@angular/upgrade": "^7.2.12",
    "angular": "^1.7.8",
    "core-js": "^3.0.0",
    "rxjs": "^6.4.0",
    "zone.js": "^0.8.29"
  },
  "devDependencies": {
    "@types/angular": "^1.6.54",
    "html-webpack-plugin": "^3.2.0",
    "ts-loader": "^5.3.3",
    "typescript": "^3.4.1",
    "webpack": "^4.29.6",
    "webpack-cli": "^3.3.0"
  }
}

The order of things to upgrade Angular

AJS module

Now we need to make some changes to the project structure. Our old place-where-it-all-started module file needs renaming to ajs.module.ts.

We also need to remove the line:

  .bootstrap(document, ['app']);

Why do we do this? As I’ve already mentioned, we need two modules, AJS will act as our AngularJS module, and the bootstrapping will be handled by the Angular UpgradeModule tools.

App module

Our app.module:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';

@NgModule({
  imports: [
    BrowserModule,
    UpgradeModule,
  ],
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) { }
  ngDoBootstrap() {
    this.upgrade.bootstrap(document.documentElement, ['app']);
  }
}
The app module, namely the place where we bootstrap the Angular app (now along with AngularJS), is really simple. We just need to import the bare minimum and export a module overriding ngDoBootstrap.

Index.ts reinvented

So the main entry point of our app, index.ts, now looks like this:

import 'core-js/proposals/reflect-metadata';
import 'zone.js';

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import './ajs.module';
import { AppModule } from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);

First, we’ll import some polyfills that Angular needs to support all the latest mojos not yet available in some browsers. For more on that, check out the Angular browser support docs.

Then platform-browser-dynamic, already covered above, our AngularJS and Angular modules, and well – that’s all we need here.

The last step now is just to bootstrap the app!


Hybrid app ready to run!

Now we almost have a working hybrid Angular application. But no need to take my word for it. Just run:

{ "build": "webpack --config webpack.config.js" }

and check out the dist folder for yourself. So why almost? Because I’d also like to show you how to take your first baby-steps in your fancy new hybrid app. We’re going to write an Angular component and downgrade it to make it usable inside the AngularJS code. Let’s do it!

First steps in your hybrid app

Let’s create a new component under src/components. We’ll call it whatIdLikeToBe.component.ts. We’ll create this component in Angular style, exporting the component class annotated with Component from the @angular/core package. As for the template (remember we can also use templateUrl instead of an inline template!), we just have to put in a simple string.

You’ll find the source below

import { Component } from '@angular/core';

@Component({
  selector: 'what-id-like-to-be',
  template: `
    <h2>yay! I'm an ng6 component!</h2>
  `
})
export default class WhatIdLikeToBeComponent { }

What next? Next, we want to be able to use our brand new Angular component in our Angular Hybrid App. To be able to do that we have to tell Angular(JS) that we have some component that it needs to downgrade to 1.x format, so…

import * as angular from 'angular';

import { AppComponent } from './components/app/app.component';
import { WhatAmIComponent } from './components/whatAmI/whatAmI.component';
import { downgradeComponent } from '@angular/upgrade/static';
import WhatIdLikeToBeComponent from './components/whatIdLikeToBe/whatIdLikeToBe.component';

angular
  .module('app', [])
  .component(AppComponent.selector, new AppComponent())
  .component(WhatAmIComponent.selector, new WhatAmIComponent())
  .directive(
    'whatIdLikeToBe',
    downgradeComponent({ component: WhatIdLikeToBeComponent })
  );

See what we did there? We downgraded our component to the AngularJS directive specifying a directive name (spot the camelCase here, it’s easy to mix up where to use camelCase vs kebab-case). And that’s all we need! Our Angular component can now be used anywhere in the AngularJS codebase, like this:

export class AppComponent implements ng.IComponentController, ng.IComponentOptions {
  static selector = 'app';

  controller: ng.Injectable<ng.IControllerConstructor> = AppComponent;
  template: string = `
    <h1>Upgrade me!</h1>
    <what-id-like-to-be></what-id-like-to-be>
  `;
}

Once more, spot how we specified the camelCase directive name vs how we use it with kebab-case here!

Conclusions on Angular upgrade

Reading the intro you were probably thinking: “Wait a moment! How to upgrade to Angular 7? But we’re already on v. 8/9/2802!” And yes, you’d be completely right, which shows how often Angular is updated (in fact, there’s a major new version every six months). So there’s no time to lose – start upgrading!

You may, of course, be happy with the current source code that backs your projects, but as time passes, and literally everyday tech stacks develop more and more, it’s worth checking out what’s out there. Not only for the sake of developers, but also for the sake of your end-users, whose user experience will be improved for numerous reasons, not least because of the performance boosts you can achieve and all the mojos made available to you by the Angular team.

Useful links

Queueing and messaging platforms have been gaining in popularity in recent years. They solve numerous problems based on asynchronous message passing or consumer and producer patterns. In this blog post, we’re going to build a basic message broker functionality with ZIO for our internal clinic messaging system, specifically with ZIO Queues and ZIO Fibers.

In our clinic, we have x-ray rooms which produce x-ray photographs of hips and knees, which are sent via a messaging system. For any given body part, some physicians can perform a photographic analysis. Additionally, we want to be able to perform message logging for selected body parts.

This example accurately describes a message broker with topics: sending messages to defined topics, subscribing to them in two ways – the ‘one message one consumer’ type pattern and the multicast type pattern. We will be performing this subscribing via consumer groups to which consumers subscribe within any particular topic.

Each topic’s message is delivered to every consumer group (like multicast), but within each group, only one consumer can digest the message (like producers and consumers). Here’s an image showing this:

[Image: a message broker with topics, built with ZIO Fibers and ZIO Queues]

Of course, there are plenty of distributed platforms that can achieve this. For example, RabbitMQ provides us with a so-called exchange – a broker between a producer and queues that decides which queues to send the message to. Broadcast is supplied via a fanout exchange, as opposed to the direct and topic exchange types, which require a match on the message’s topic.

So let’s try to implement this concept one more time, but this time with ZIO Queues and ZIO Fibers in an effectful way.

ZIO Queues & ZIO Fibers

But first things first – let’s briefly introduce Fibers and Queues in ZIO.

So Fibers are data types for expressing concurrent computations. Fibers are loosely related to threads – a single Fiber can be executed on multiple threads by shifting between them – all with full resource safety!

What makes Fibers stronger is their seamless setting in ZIO. Having some effect, e.g. UIO("work"), we only need to call .fork on it to make it run on a Fiber. Then it’s up to us what to do next: interrupt – stop the Fiber by force, join – block the current Fiber until the other returns its result, or race with another Fiber – run two ZIO Fibers and return the result of the first to succeed.

I should mention that the underlying implementation of race is done via raceWith – a powerful method that allows you to provide any logic for managing two separate Fibers. raceWith is used not only in race but also in zipPar – for running two Fibers in parallel and returning both results as a tuple.
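To get a feel for the racing semantics without spinning up a ZIO project, here is a rough standard-library sketch of what race does. All names here are ours, not ZIO’s API, and the analogy is imperfect: ZIO’s race also safely interrupts the losing Fiber, which Futures cannot do.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Plain-Scala analogy of racing two computations: start both,
// return whichever finishes first (the loser keeps running, unlike in ZIO).
def race[A](fa: => A, fb: => A): A =
  Await.result(Future.firstCompletedOf(Seq(Future(fa), Future(fb))), 5.seconds)
```

With this sketch, `race({ Thread.sleep(500); "slow" }, "fast")` returns `"fast"`, just as `fiberA.race(fiberB)` would yield the faster fiber’s result in ZIO.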

On the other hand, Queues in ZIO address issues that we can encounter while using a BlockingQueue. The effectful, back-pressured ZIO Queue makes it easy to avoid blocked threads on a Queue’s core operations, such as offer and take.

Apart from a bounded, back-pressured queue, ZIO Queues deliver other overflow behaviors, such as sliding – removing the oldest element to make room for new ones – or dropping – discarding the newly received elements. All this in a non-blocking manner.

So the moment we use queue.offer(sth).fork on a filled back-pressured queue, we are sure that running a separate fiber will make it non-blocking for the main one. Other ZIO Queue assets are interruption (as fibers are) and safe shutdown.
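The three overflow behaviors can be sketched, ZIO aside, as a pure function over an immutable queue. This is our own illustrative model, not ZIO’s API – in particular, real back-pressure suspends the offering fiber rather than rejecting the element:

```scala
// Our model of ZIO Queue overflow strategies (names are ours, not ZIO's):
sealed trait Overflow
case object BackPressure extends Overflow // in ZIO the offering fiber suspends
case object Sliding      extends Overflow // drop the oldest element to make room
case object Dropping     extends Overflow // discard the newly offered element

def offer[A](queue: Vector[A], capacity: Int, elem: A, strategy: Overflow): Vector[A] =
  if (queue.length < capacity) queue :+ elem
  else strategy match {
    case BackPressure => queue               // element waits; modeled here as no-op
    case Sliding      => queue.tail :+ elem  // oldest element removed, new one kept
    case Dropping     => queue               // new element discarded
  }
```

For a full queue Vector(1, 2, 3) of capacity 3, offering 4 under Sliding yields Vector(2, 3, 4), while Dropping leaves Vector(1, 2, 3) untouched.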


We’ll start with defining our domain and request class with a topic field.

Additionally, we will implement RequestGenerator for generating Requests:

sealed trait Diagnostic

case object HipDiagnostic extends Diagnostic

case object KneeDiagnostic extends Diagnostic

case class Request[A](topic: Diagnostic, XRayImage: A)

trait RequestGenerator[R, A] {
  def generate(topic: Diagnostic): URIO[R, Request[A]]
}

Imports required by our project:

import zio._
import zio.random._
import zio.console._
import zio.duration._

For the sake of simplicity let’s assume our x-ray images are simply Ints:

case class IntRequestGenerator() extends RequestGenerator[Random, Int] {
  override def generate(topic: Diagnostic): URIO[Random, Request[Int]] =
    nextIntBounded(1000) >>= (n => UIO(Request(topic, n)))
}

Before getting started with the first part, let’s take a look at the architecture diagram. It might look strange at first so let’s leave it this way for now:

[Image: ZIO Fibers & ZIO Queues architecture diagram]


The first component of our system is a Consumer[A]. Here we provide two API methods – create, for constructing a consumer wrapped in UIO, and run, which starts a new fiber that continuously waits for elements in its queue to process. The processing is rather dull, but the console logs it produces are definitely not!

It’s worth stressing that run returns (Queue, Fiber) in an effect, so apart from connecting the consumer to the system, we can also interrupt or join the consumer:

case class Consumer[A](title: String) {
  def run = for {
    queue <- Queue.bounded[A](10)
    loop = for {
      img  <- queue.take
      _    <- putStrLn(s"[$title] worker: Starting analyzing task $img")
      rand <- nextIntBounded(4)
      _    <- ZIO.sleep(rand.seconds)
      _    <- putStrLn(s"[$title] worker: Finished task $img")
    } yield ()
    fiber <- loop.forever.fork
  } yield (queue, fiber)
}

object Consumer {
  def create[A](title: String) = UIO(Consumer[A](title))
}

As we are more used to an imperative approach, let's focus for a moment on the advantages of using ZIO effects here.

Any potentially dangerous side effects are kept inside the ZIO monad. This makes even a unit-returning println effect substantial and referentially transparent. Also, having a firm grip on everything is really beneficial when it comes to parallelism.

Here, we were able to build an arbitrary chain of computations and make it run forever on a separate ZIO Fiber with a pleasing .forever.fork.

Topic Queue

TopicQueue is probably the most complicated part. It's in charge of the proper distribution of messages among subscribers. The subscribe method receives a subscriber's queue and the consumerGroup number. As you will no doubt recall, each message is passed to each consumerGroup and then to a random subscriber within each group. The run method follows the pattern from the previous components - a continuous loop of acquiring messages and distributing them within the described scheme:

case class TopicQueue[A](queue: Queue[A], subscribers: Ref[Map[Int, List[Queue[A]]]]) {
  def subscribe(sub: Queue[A], consumerGroup: Int): UIO[Unit] =
    subscribers.update { map =>
      map.get(consumerGroup) match {
        case Some(value) =>
          map + (consumerGroup -> (value :+ sub))
        case None =>
          map + (consumerGroup -> List(sub))
      }
    }

  private val loop =
    for {
      elem <- queue.take
      subs <- subscribers.get
      _    <- ZIO.foreach(subs.values) { group =>
        for {
          idx <- nextIntBounded(group.length)
          _   <- group(idx).offer(elem)
        } yield ()
      }
    } yield ()

  def run = loop.forever.fork
}

object TopicQueue {
  def create[A](queue: Queue[A]): UIO[TopicQueue[A]] =
    Ref.make(Map.empty[Int, List[Queue[A]]]) >>= (map => UIO(TopicQueue(queue, map)))
}

In this part, immutability is what strikes us first. No explicit, side-effect modifications of a subscribers map can occur without our knowledge. Here we're using Ref from ZIO to store the map and perform updates.

It's worth mentioning that wrapping the constructor method in UIO is essential for consistency, as instantiating a new ZIO Queue should always be a part of our effect chain.


Our Exchange is pretty similar to the RabbitMQ exchange. Its run method simply creates three queues - the input queue for incoming jobs (jobQueue) and two output queues for routing (queueHip and queueKnee). Our exchange also unwraps the XRayImage from each Request:

case class Exchange[A]() {
  def run = for {
    jobQueue       <- Queue.bounded[Request[A]](10)
    queueHip       <- Queue.bounded[A](10)
    queueKnee      <- Queue.bounded[A](10)
    hipTopicQueue  <- TopicQueue.create(queueHip)
    kneeTopicQueue <- TopicQueue.create(queueKnee)
    loop = for {
      job <- jobQueue.take
      _   <- job.topic match {
        case HipDiagnostic =>
          queueHip.offer(job.XRayImage)
        case KneeDiagnostic =>
          queueKnee.offer(job.XRayImage)
      }
    } yield ()
    fiber <- loop.forever.fork
  } yield (jobQueue, hipTopicQueue, kneeTopicQueue, fiber)
}

object Exchange {
  def create[A] = UIO(Exchange[A]())
}


Producing is simply done by supplying the provided queue with Requests. You might have noticed that the run method follows a familiar pattern. Building asynchronous computations with self-explanatory schedules and lazy execution is easy:

case class Producer[R, A](queue: Queue[Request[A]], generator: RequestGenerator[R, A]) {
  def run = {
    val loop = for {
      _    <- putStrLn("[XRayRoom] generating hip and knee request")
      hip  <- generator.generate(HipDiagnostic)
      _    <- queue.offer(hip)
      knee <- generator.generate(KneeDiagnostic)
      _    <- queue.offer(knee)
      _    <- ZIO.sleep(2.seconds)
    } yield ()
    loop.forever.fork
  }
}

object Producer {
  def create[R, A](queue: Queue[Request[A]], generator: RequestGenerator[R, A]) = UIO(Producer(queue, generator))
}


Finally, the Program. Now we will combine all the previous components to assemble a fully operational clinic messaging system. First, we instantiate Consumers and launch them (reminder: ZIO Fibers are lazy, unlike Futures). Then it’s time for Exchange and Producer. Notice that returning tuples gives us the possibility to ignore the fibers we don't need. Finally, we subscribe the Consumers to the output queues and, importantly, define each ConsumerGroup with the launch:

val program = for {

  physicianHip             <- Consumer.create[Int]("Hip")
  ctxPhHip                 <- physicianHip.run
  (phHipQueue, phHipFiber) = ctxPhHip

  loggerHip           <- Consumer.create[Int]("HIP_LOGGER")
  ctxLoggerHip        <- loggerHip.run
  (loggerHipQueue, _) = ctxLoggerHip

  physicianKnee    <- Consumer.create[Int]("Knee1")
  ctxPhKnee        <- physicianKnee.run
  (phKneeQueue, _) = ctxPhKnee

  physicianKnee2    <- Consumer.create[Int]("Knee2")
  ctxPhKnee2        <- physicianKnee2.run
  (phKneeQueue2, _) = ctxPhKnee2

  exchange                                         <- Exchange.create[Int]
  ctxExchange                                      <- exchange.run
  (inputQueue, outputQueueHip, outputQueueKnee, _) = ctxExchange

  generator = IntRequestGenerator()
  xRayRoom  <- Producer.create(inputQueue, generator)
  _         <- xRayRoom.run

  _ <- outputQueueHip.subscribe(phHipQueue, consumerGroup = 1)
  _ <- outputQueueHip.subscribe(loggerHipQueue, consumerGroup = 2)

  _ <- outputQueueKnee.subscribe(phKneeQueue, consumerGroup = 1)
  _ <- outputQueueKnee.subscribe(phKneeQueue2, consumerGroup = 1)

  _ <- outputQueueHip.run
  _ <- outputQueueKnee.run

  _ <- phHipFiber.join

} yield ()

Also after launching TopicQueues with run, we can still subscribe to them.

Running the program

Phew... that was a lot, let's put it into the ZIO application and run it:

object Main extends App {
  override def run(args: List[String]) = program.as(0)
}

Looking into the logs we see that:

1. Multicast for all the ConsumerGroups within the hip topic works as expected - the hip physician and HIP_LOGGER receive the same messages.

2. Within a single ConsumerGroup, the messages are routed in a random manner (definitely a field for improvement!):

[XRayRoom] generating hip and knee request
[Knee1] worker: Starting analyzing task 474
[Hip] worker: Starting analyzing task 345
[Hip] worker: Finished task 345
[HIP_LOGGER] worker: Starting analyzing task 345
[HIP_LOGGER] worker: Finished task 345
[XRayRoom] generating hip and knee request
[Hip] worker: Starting analyzing task 179
[HIP_LOGGER] worker: Starting analyzing task 179
[Hip] worker: Finished task 179
[Knee1] worker: Finished task 474
[Knee1] worker: Starting analyzing task 154
[Knee1] worker: Finished task 154
[XRayRoom] generating hip and knee request
[Hip] worker: Starting analyzing task 763
[Knee1] worker: Starting analyzing task 562
[HIP_LOGGER] worker: Finished task 179
[HIP_LOGGER] worker: Starting analyzing task 763
[Hip] worker: Finished task 763
[Knee1] worker: Finished task 562
[HIP_LOGGER] worker: Finished task 763
[XRayRoom] generating hip and knee request
[Hip] worker: Starting analyzing task 474
[Knee2] worker: Starting analyzing task 997
[HIP_LOGGER] worker: Starting analyzing task 474
[Hip] worker: Finished task 474
[XRayRoom] generating hip and knee request
[Hip] worker: Starting analyzing task 184
[Knee1] worker: Starting analyzing task 578
[Knee2] worker: Finished task 997
[HIP_LOGGER] worker: Finished task 474
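As for that field for improvement: the random in-group routing could be made fairer with simple round-robin. Here is a hypothetical, ZIO-free sketch of the idea (it uses a mutable counter for brevity; in the real TopicQueue the counter would live in a Ref next to the subscribers map):

```scala
// Hypothetical round-robin chooser: each call picks the next subscriber
// in turn, so work is spread evenly instead of randomly.
final class RoundRobin[A](subscribers: Vector[A]) {
  require(subscribers.nonEmpty, "need at least one subscriber")
  private var next = 0
  def pick(): A = {
    val chosen = subscribers(next % subscribers.length)
    next += 1
    chosen
  }
}
```

With two knee physicians, picks would alternate Knee1, Knee2, Knee1, ... rather than occasionally starving one of them.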


Our simple, yet operational, program shows how to implement a message broker with direct and multicast behaviors.

Having chosen ZIO, we have managed to unearth only a fraction of its potential - by using ZIO Queues and ZIO Fibers within effects. Out-of-the-box parallelism, immutability, referential transparency, and wrapped side-effect management are what made this example painless and really very enjoyable to write.

To see the complete example, check out the gist link below.







Cryptonomic NYC Hackathon part 2

The idea

It was the first time I’d ever taken part in a hackathon. I hadn’t been to any of these events before because I was very skeptical about them. I thought: how can we make anything useful in only two days? Well, it turns out that small, but handy tools can be created, even without sacrificing code quality. The key is to choose the right project; not too big or too complicated so you can complete it within a weekend. In our case, it was a Micheline Michelson translator that I’m going to tell you more about in this article.

The hackathon

Our hackathon took place on the first weekend of August (03-04.08). Cryptonomic is a startup which provides tools and smart contracts for decentralized and consortium applications. During the hackathon, we had to use the Cryptonomic technology stack - tools such as ConseilJS (https://github.com/Cryptonomic/ConseilJS). We decided to create a Google-like translator between Michelson and Micheline, two formats of source files used in Tezos software development.

What are Tezos and Michelson?

According to the Tezos website:

“Tezos is a new decentralized blockchain that governs itself by establishing a true digital commonwealth.”

“Tezos addresses key barriers facing blockchain adoption to date: smart contract safety, long-term upgradability, and open participation”

Michelson is a domain-specific language that we use to write smart contracts on the Tezos blockchain. Unlike Solidity or Viper, which must be compiled to EVM (Ethereum Virtual Machine) byte code to be executed on the EVM, Michelson code itself runs in the Tezos VM.

Micheline vs. Michelson

First of all, Michelson is the specification and Micheline is the concrete language syntax of Michelson encoded in JSON. Before deployment to Tezos VM, Michelson is transformed into Micheline. 

For example, here is the same program in Michelson and Micheline representation:


parameter int;
storage int;
code {CAR;                      # Get the parameter
      PUSH int 1;               # We're adding 1, so we need to put 1 on the stack
      ADD;                      # Add the two numbers
      NIL operation;            # We put an empty list of operations on the stack
      PAIR}                     # Pair the empty list with the new storage


      "prim": "parameter",
      "args": [
          "prim": "int"
      "prim": "storage",
      "args": [
          "prim": "int"
      "prim": "code",
      "args": [
            "prim": "CAR"
            "prim": "PUSH",
            "args": [
                "prim": "int"
                "int": "1"
            "prim": "ADD"
            "prim": "NIL",
            "args": [
                "prim": "operation"
            "prim": "PAIR"

As you can see, there is a clear correspondence between Michelson and Micheline representation. Despite this, many people still find it challenging to understand the difference between Michelson and Micheline. That’s why our team decided to create this translator. Above all, we hope it is going to help other developers to learn smart contracts development in Tezos.
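That correspondence is mechanical enough that a toy renderer for single Micheline nodes fits in a few lines of Scala. This is our own illustrative sketch - a tiny hypothetical AST and a pattern match - not Cryptonomic's actual conversion code:

```scala
// Toy Micheline AST (illustrative only, not the real Cryptonomic model)
sealed trait Node
final case class Prim(prim: String, args: List[Node] = Nil) extends Node
final case class IntLit(value: String) extends Node

// Render a single node back to Michelson-style text
def render(node: Node): String = node match {
  case Prim(p, Nil)  => p
  case Prim(p, args) => s"$p ${args.map(render).mkString(" ")}"
  case IntLit(v)     => v
}
```

For example, the Micheline fragment {"prim": "PUSH", "args": [{"prim": "int"}, {"int": "1"}]} would render as PUSH int 1, matching the Michelson listing above.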

The Technology stack

To create our Micheline Michelson translator, we agreed to use Scala and Akka http for the backend side. At the frontend side, we used React, Redux, and Typescript.

The Coding

A conversion between the two formats is already a part of Cryptonomic tools. So, we needed to extract it from the base source code. After that, we decided to create a separate module for conversion which, we could then import into other projects, so as not to duplicate code.

A Micheline to Michelson translation has already been implemented in Scala using Circe, so it was quite easy to integrate it into our Scala-based project. However, the Michelson to Micheline conversion code is JavaScript. We tried to come up with our own Scala parser for Michelson. Unfortunately, it was too time-consuming, and we finally decided to use a parser from ConceilJS. We also chose Node.js for running the JavaScript code.

[Image: the Micheline Michelson translator]

The Solution

In short, our solution consists of one frontend and two backend modules. 

Frontend module:


Translation module:


Console backend:


(There’s more detailed information about the modules in the Readme files, so there’s no point in duplicating the text)

Also, our team selected Heroku as the deployment platform.

Final application

You can try out our solution here: https://smart-contracts-micheline-michelson-translator-for-tezos.scalac.io/

On the left side, you paste the Micheline code and click translate to see the result. It's that simple!

The experience

In conclusion, it turns out that over only two days, it’s possible to create a small yet beneficial application. It was also an excellent opportunity to learn about some Tezos development tools. 

The Micheline Michelson translator was one of two projects created by Scalac at the hackathon. Check out the Frontend data visualization app that the other team made.