There are plenty of frameworks you can base your application on in Scala, and every one offers a different flavor of the language with its own set of patterns and solutions. Whatever your preference is, we all ultimately want the same thing: simple and powerful tools that enable us to write easily testable and reliable applications. A new library has recently joined the competition. ZIO, with its first stable release coming soon, gives you a high-performance functional programming toolbox and lowers the entry barrier for beginners by dropping unnecessary jargon. In this blog post, you will learn how to structure a modular application using ZIO.

Designing a Tic-Tac-Toe game

Most command-line programs are stateless and rightfully so, as they can be easily integrated into scripts and chained via shell pipes. However, for this article, we need a slightly more complicated domain. So let’s write a Tic-Tac-Toe game. It will make the example more entertaining while still keeping it relatively simple to follow. Firstly, a few assumptions about our game. It will be a command-line application, so the game will be rendered into the console and the user will interact with it via text commands. The application will be divided into several modes, where a mode is defined by its state and a list of commands available to the user. Our program will read from the console, modify the state accordingly and write to the console in a loop. We’d also like to clear the console before each frame. For each of these concerns we will create a separate module with dependencies on other modules as depicted below:

(Diagram: module dependencies of the TicTacToe game)

Basic program

The basic building block of ZIO applications is the ZIO[R, E, A] type, which describes an effectful computation, where:

  •  R is the type of environment required to run the effect
  •  E is the type of error that may be produced by the effect
  •  A is the type of value that may be produced by the effect

ZIO was designed around the idea of programming to an interface. Our application can be divided into smaller modules, and any dependencies are expressed as constraints on the environment type R. First of all, we have to add the dependency on ZIO to our SBT build:

libraryDependencies += "dev.zio" %% "zio" % "1.0.0-RC16"

We will start with a simple program that prints “TicTacToe game!” and gradually expand it.

package ioleo.tictactoe

import zio.{console, App, ZEnv, ZIO}
import zio.console.Console

object TicTacToe extends App {

  val program: ZIO[Console, Nothing, Unit] =
    console.putStrLn("TicTacToe game!")

  def run(args: List[String]): ZIO[ZEnv, Nothing, Int] =
    program.foldM(
        error => console.putStrLn(s"Execution failed with: $error") *> ZIO.succeed(1)
      , _     => ZIO.succeed(0)
    )
}

To make our lives easier, ZIO provides the App trait. All we need to do is implement the run method. In our case, we can ignore the arguments the program is run with and return a simple program printing to the console. The program will be run in the DefaultRuntime, which provides the default environment with the Blocking, Clock, Console, Random and System services. We can run this program using SBT: sbt "tictactoe/runMain ioleo.tictactoe.TicTacToe".

Testing effects

ZIO also provides its own testing framework, with features such as composable assertions, precise failure reporting, out-of-the-box support for effects and a lightweight mocking framework (without reflection). First of all, we have to add the required dependencies and configuration to our SBT build:

libraryDependencies ++= Seq(
  "dev.zio" %% "zio-test" % "1.0.0-RC16" % "test",
  "dev.zio" %% "zio-test-sbt" % "1.0.0-RC16" % "test"
)

testFrameworks := Seq(new TestFramework("zio.test.sbt.ZTestFramework"))

Now, we can define our first specification.

package ioleo.tictactoe

import zio.test.{assert, suite, testM, DefaultRunnableSpec}
import zio.test.environment.TestConsole
import zio.test.Assertion.equalTo

object TicTacToeSpec extends DefaultRunnableSpec(
  suite("TicTacToe")(
    testM("prints to console") {
      for {
        test <- TestConsole.makeTest(TestConsole.DefaultData)
        _    <- TicTacToe.program.provide(new TestConsole {
                  val console = test
                })
        out  <- test.output
      } yield assert(out, equalTo(Vector("TicTacToe game!\n")))
    }
  )
)

In this example, we’re using the TestConsole implementation, which instead of interacting with the real console, stores the output in a vector, which we can access later and make assertions on. Available assertions can be found in the Assertion companion object. For more information on how to use test implementations, see the Testing effects doc.

Building the program bottom-up

One of the core design goals of ZIO is composability. It allows us to build simple programs solving smaller problems and combine them into larger programs. The so-called “bottom-up” approach is nothing new – it has been the backbone of many successful implementations in the aviation industry. It is simply cheaper, faster and easier to test and study small components in isolation and then, based on their well-known properties, assemble them into more complicated devices. The same applies to software engineering. When we start our application, we will land in MenuMode. Let’s define some possible commands for this mode:

package ioleo.tictactoe.domain

sealed trait MenuCommand

object MenuCommand {
  case object NewGame extends MenuCommand
  case object Resume  extends MenuCommand
  case object Quit    extends MenuCommand
  case object Invalid extends MenuCommand
}

Next up, we will define our first module, MenuCommandParser which will be responsible for translating the user input into our domain model.

package ioleo.tictactoe.parser

import ioleo.tictactoe.domain.MenuCommand
import zio.ZIO

import zio.macros.annotation.{accessible, mockable}

@accessible
@mockable
trait MenuCommandParser {
  val menuCommandParser: MenuCommandParser.Service[Any]
}

object MenuCommandParser {
  trait Service[R] {
    def parse(input: String): ZIO[R, Nothing, MenuCommand]
  }
}

This follows the module pattern, which I explain in more detail on the Use module pattern page in the ZIO docs. MenuCommandParser is the module, which is just a container for the MenuCommandParser.Service.

Note: By convention, the value holding the reference to the service is named the same as the module, only with the first letter lowercased. This avoids name collisions when mixing multiple modules to create the environment.

The service is just an ordinary interface, defining the capabilities it provides.

Note: By convention, we place the service inside the companion object of the module and name it Service. This gives a consistent naming scheme <Module>.Service[R] across the entire application. It is also the structure required by some macros in the zio-macros project.
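To make the convention concrete, here is a minimal, ZIO-free sketch of the module pattern using plain Scala traits (the Logging and Config modules are made up for illustration):

```scala
// A minimal sketch of the module pattern with plain Scala traits.
// The value holding the service is named after the module (lowercased),
// so mixing several modules into one environment causes no name clashes.
trait Logging {
  val logging: Logging.Service
}

object Logging {
  trait Service {
    def log(line: String): Unit
  }
}

trait Config {
  val config: Config.Service
}

object Config {
  trait Service {
    def get(key: String): String
  }
}

// One environment value can mix in both modules, because the inner
// field names (`logging`, `config`) are distinct by convention.
object Env extends Logging with Config {
  val logging = new Logging.Service {
    def log(line: String): Unit = println(line)
  }
  val config = new Config.Service {
    def get(key: String): String = s"value-of-$key"
  }
}
```

Any code that needs both capabilities can simply require an environment of type Logging with Config.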

A capability is a ZIO effect defined by the service. While these could be ordinary functions, if you want all the benefits ZIO provides, they should all be ZIO effects. You may also have noticed I annotated the module with @accessible and @mockable. I will expand on that later. For now, all you need to know is that they generate some boilerplate code which will be useful for testing. Note that to use them we need to add the dependency in our SBT build:

libraryDependencies ++= Seq(
  "dev.zio" %% "zio-macros-core" % "0.5.0",
  "dev.zio" %% "zio-macros-test" % "0.5.0"
)

Next, we can define our Live implementation as follows:

package ioleo.tictactoe.parser

import ioleo.tictactoe.domain.MenuCommand
import zio.UIO

trait MenuCommandParserLive extends MenuCommandParser {
  val menuCommandParser = new MenuCommandParser.Service[Any] {
    def parse(input: String): UIO[MenuCommand] = ???
  }
}

Though the implementation seems trivial, we will follow Test Driven Development and first, declare the desired behavior in terms of a runnable specification.

package ioleo.tictactoe.parser

import ioleo.tictactoe.domain.MenuCommand
import zio.test.{assertM, checkM, suite, testM, DefaultRunnableSpec, Gen}
import zio.test.Assertion.equalTo
import MenuCommandParserSpecUtils._

object MenuCommandParserSpec extends DefaultRunnableSpec(
  suite("MenuCommandParser")(
    suite("parse")(
        testM("`new game` returns NewGame command") {
          checkParse("new game", MenuCommand.NewGame)
        }
      , testM("`resume` returns Resume command") {
          checkParse("resume", MenuCommand.Resume)
        }
      , testM("`quit` returns Quit command") {
          checkParse("quit", MenuCommand.Quit)
        }
      , testM("any other input returns Invalid command") {
          checkM(invalidCommandsGen) { input =>
            checkParse(input, MenuCommand.Invalid)
          }
        }
    )
  )
)
object MenuCommandParserSpecUtils {

  val validCommands =
    List("new game", "resume", "quit")

  val invalidCommandsGen =
    Gen.anyString.filter(str => !validCommands.contains(str))

  def checkParse(input: String, command: MenuCommand) = {
    val app    = MenuCommandParser.>.parse(input)
    val env    = new MenuCommandParserLive {}
    val result = app.provide(env)

    assertM(result, equalTo(command))
  }
}

The suite is just a named container for one or more tests. Each test must end with a single assertion, though assertions may be combined with the && and || operators (boolean logic). The first three tests are straightforward input/output checks. The last test is more interesting: we've derived a custom invalid-command generator from the predefined Gen.anyString, and we're using it to generate random inputs to prove that any other input will yield MenuCommand.Invalid. This style is called property-based testing, and it boils down to generating and testing enough random samples from the domain to be confident that our implementation has the property of always yielding the desired result. This is useful when we can't possibly cover the whole space of inputs with tests, because it is too big (possibly infinite) or too computationally expensive.
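The mechanics of such a property test can be illustrated without any framework: generate random samples, filter out the valid commands, and assert the property for every remaining input. Below is a hand-rolled sketch (not the zio-test API; the names are illustrative):

```scala
import scala.util.Random

// Hand-rolled property check mirroring the spec above: any input
// outside the known command set must parse to Invalid.
object PropertyCheckDemo {

  val validCommands = List("new game", "resume", "quit")

  // A pure stand-in for MenuCommandParser, returning plain strings.
  def parse(input: String): String = input match {
    case "new game" => "NewGame"
    case "resume"   => "Resume"
    case "quit"     => "Quit"
    case _          => "Invalid"
  }

  // Random alphanumeric strings with the valid commands filtered out,
  // mirroring Gen.anyString.filter in the spec.
  def invalidSamples(n: Int): List[String] =
    List.fill(n)(Random.alphanumeric.take(Random.nextInt(10)).mkString)
      .filterNot(validCommands.contains)

  // The property holds if every sampled input yields Invalid.
  def propertyHolds: Boolean =
    invalidSamples(100).forall(parse(_) == "Invalid")
}
```

A real generator library adds shrinking and reproducible seeds on top of this basic idea, but the core loop is the same.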

Access helper

In the test suite, we refer directly to the parse capability via MenuCommandParser.>.parse. This is possible thanks to the @accessible macro mentioned before. What it does underneath is generate a helper object named >, placed within the module's companion object, with an implementation that delegates calls on the capabilities to the environment.

object > extends MenuCommandParser.Service[MenuCommandParser] {

  def parse(input: String) =
    ZIO.accessM(_.menuCommandParser.parse(input))
}

With our tests in place, we can go back and finish our implementation.

def parse(input: String): UIO[MenuCommand] =
  UIO.succeed(input) map {
    case "new game" => MenuCommand.NewGame
    case "resume"   => MenuCommand.Resume
    case "quit"     => MenuCommand.Quit
    case _          => MenuCommand.Invalid
  }

Lifting pure functions into the effect system

You will have noticed that parse is an effect wrapping a pure function. Some functional programmers would not lift this function into the effect system, to keep a clear distinction between pure functions and effects in the codebase. However, that requires a very disciplined and highly skilled team, and the benefits are debatable. While this function by itself does not need to be declared effectful, making it so makes it dead simple to mock out when testing other modules that collaborate with this one. It is also much easier to design applications incrementally, by building up smaller effects and combining them into larger ones as necessary, without the burden of isolating side effects. This will be particularly appealing to programmers used to an imperative programming style.
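The trade-off can be sketched without ZIO at all. In the sketch below, a toy suspended-effect type stands in for ZIO's effect type (all names are illustrative): once parse sits behind an effect-returning interface, swapping in a canned implementation for tests requires no change at the call site.

```scala
// A toy suspended-effect type standing in for ZIO (illustration only).
final case class Effect[A](run: () => A) {
  def map[B](f: A => B): Effect[B] = Effect(() => f(run()))
}

object Effect {
  def succeed[A](a: A): Effect[A] = Effect(() => a)
}

object LiftingDemo {

  trait Parser {
    def parse(input: String): Effect[String]
  }

  // The live implementation lifts a pure match into the effect.
  object LiveParser extends Parser {
    def parse(input: String): Effect[String] =
      Effect.succeed(input).map {
        case "quit" => "Quit"
        case _      => "Invalid"
      }
  }

  // Because the interface returns an effect, a stub is trivial to plug
  // in when testing a module that collaborates with the parser.
  object StubParser extends Parser {
    def parse(input: String): Effect[String] = Effect.succeed("Stubbed")
  }

  // A collaborator that depends only on the Parser interface.
  def handle(p: Parser, input: String): String = p.parse(input).run()
}
```

The collaborator never learns which implementation it received, which is exactly what makes mocking cheap.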

Combining modules into a larger application

In this same fashion, we can implement parsers and renderers for all modes. At this point, all of the basic parts are handled and properly tested, and we can use them as building blocks for higher-level modules. We will explore this by implementing the Terminal module, which handles all of the input/output operations. ZIO already provides the Console module for this, but we have additional requirements. Firstly, we assume getting input from the console never fails – if it does, we're simply going to crash the application rather than deal with the error. Secondly, we want to clear the console before outputting the next frame.

package ioleo.tictactoe.cli

import zio.ZIO
import zio.macros.annotation.{accessible, mockable}

@accessible
@mockable
trait Terminal {
  val terminal: Terminal.Service[Any]
}

object Terminal {
  trait Service[R] {
    val getUserInput: ZIO[R, Nothing, String]
    def display(frame: String): ZIO[R, Nothing, Unit]
  }
}

However, we don’t want to reinvent the wheel. So we are going to reuse the built-in  Console service in our  TerminalLive implementation.

package ioleo.tictactoe.cli

import zio.console.Console

trait TerminalLive extends Terminal {

  val console: Console.Service[Any]

  final val terminal = new Terminal.Service[Any] {

    val getUserInput =
      console.getStrLn.orDie

    def display(frame: String) =
      for {
        _ <- console.putStr(TerminalLive.ANSI_CLEARSCREEN)
        _ <- console.putStrLn(frame)
      } yield ()
  }
}

object TerminalLive {
  val ANSI_CLEARSCREEN: String = "\u001b[H\u001b[2J"
}

We’ve defined the dependency by adding an abstract value of type  Console.Service[Any] , which the compiler will require us to provide when we construct the environment that uses the  TerminalLive implementation. Note that here again, we rely on convention, we’re expecting the service to be held in a variable named after the module. The implementation is dead simple, but the question is… how do we test this? We could use the  TestConsole and indirectly test the behavior, but this is brittle and does not express our intent very well in the specification. This is where the ZIO Mock framework comes in. The basic idea is to express our expectations for the collaborating service and finally build a mock implementation of this service, which will check at runtime that our assumptions hold true.

package ioleo.tictactoe.cli

import zio.Managed
import zio.test.{assertM, checkM, suite, testM, DefaultRunnableSpec, Gen}
import zio.test.Assertion.equalTo
import zio.test.mock.Expectation.value
import zio.test.mock.MockConsole
import TerminalSpecUtils._

object TerminalSpec extends DefaultRunnableSpec(
  suite("Terminal")(
    testM("delegates to Console") {
      checkM(Gen.anyString) { input =>
        val app    = Terminal.>.getUserInput
        val mock   = MockConsole.getStrLn returns value(input)
        val env    = makeEnv(mock)
        val result = app.provideManaged(env)

        assertM(result, equalTo(input))
      }
    }
  )
)

object TerminalSpecUtils {
  def makeEnv(consoleEnv: Managed[Nothing, MockConsole]): Managed[Nothing, TerminalLive] =
    consoleEnv.map(c => new TerminalLive {
      val console = c.console
    })
}

There is a lot going on behind the scenes here, so let's break it down bit by bit. The basic specification structure remains the same. We're using the helper generated by the @accessible macro to reference the getUserInput capability. Next, we construct an environment to run it in. Since we're testing the TerminalLive implementation, we need to provide the val console: Console.Service[Any]. To construct the mock implementation, we express our expectations using the MockConsole capability tags. In this case, we have a single expectation: that MockConsole.getStrLn returns the predefined string. If we had multiple expectations, we could combine them using flatMap:

import zio.test.mock.Expectation.{unit, value}

val mock: Managed[Nothing, MockConsole] = (
  (MockConsole.getStrLn returns value("first")) *>
  (MockConsole.getStrLn returns value("second")) *>
  (MockConsole.putStrLn(equalTo("first & second")) returns unit)
)

To refer to a specific method, we use capability tags: simple objects extending zio.test.mock.Method[M, A, B], where M is the module the method belongs to, A the type of the input arguments and B the type of the output value. If the method takes arguments, we have to pass an assertion. Next, we use the returns method and one of the helpers defined in zio.test.mock.Expectation to provide the mocked result. The monadic nature of Expectation allows you to sequence expectations and combine them into one, but the actual construction of the mock implementation is handled by a conditional implicit conversion Expectation[M, E, A] => Managed[Nothing, M], for which you need a Mockable[M] in scope. This is where the @mockable macro comes in handy. Without it, you would have to write all of this boilerplate machinery:

import java.io.IOException

import zio.{IO, UIO}
import zio.test.mock.{Method, Mock, Mockable}

object MockConsole {

  // ...
  object putStr   extends Method[MockConsole, String, Unit]
  object putStrLn extends Method[MockConsole, String, Unit]
  object getStrLn extends Method[MockConsole, Unit, String]

  implicit val mockable: Mockable[MockConsole] = (mock: Mock) =>
    new MockConsole {
      val console = new Service[Any] {
        def putStr(line: String): UIO[Unit]   = mock(MockConsole.putStr, line)
        def putStrLn(line: String): UIO[Unit] = mock(MockConsole.putStrLn, line)
        val getStrLn: IO[IOException, String] = mock(MockConsole.getStrLn)
      }
    }
}

The final program

You’ve learned how to create and test programs using ZIO and then compose them into larger programs. You’ve got all of your parts in place and it’s time to run the game. We’ve started with a simple program printing to the console. Now let’s modify it to run our program in a loop.

package ioleo.tictactoe

import ioleo.tictactoe.domain.{ConfirmAction, ConfirmMessage, MenuMessage, State}
import zio.{Managed, ZIO}
import zio.clock.Clock
import zio.duration._
import zio.test.{assertM, suite, testM, DefaultRunnableSpec}
import zio.test.Assertion.{equalTo, isRight, isSome, isUnit}
import zio.test.mock.Expectation.{failure, value}
import TicTacToeSpecUtils._

object TicTacToeSpec extends DefaultRunnableSpec(
  suite("TicTacToe")(
    testM("repeats RunLoop.step until interrupted by Unit error") {
      val app  = TicTacToe.program
      val mock = (
        (RunLoop.step(equalTo(state0)) returns value(state1)) *>
        (RunLoop.step(equalTo(state1)) returns value(state2)) *>
        (RunLoop.step(equalTo(state2)) returns value(state3)) *>
        (RunLoop.step(equalTo(state3)) returns failure(()))
      )
      val result = app.either.provideManaged(mock).timeout(500.millis).provide(Clock.Live)

      assertM(result, isSome(isRight(isUnit)))
    }
  )
)

object TicTacToeSpecUtils {
  val state0 = State.default
  val state1 = State.Menu(None, MenuMessage.InvalidCommand)
  val state2 = State.Confirm(ConfirmAction.Quit, state0, state1, ConfirmMessage.Empty)
  val state3 = State.Confirm(ConfirmAction.Quit, state0, state1, ConfirmMessage.InvalidCommand)
}

And change the implementation to call our RunLoop service:

package ioleo.tictactoe

import ioleo.tictactoe.domain.State
import zio.{console, App, UIO, ZIO}

object TicTacToe extends App {

  val program = {
    def loop(state: State): ZIO[app.RunLoop, Nothing, Unit] =
      app.RunLoop.>.step(state).foldM(
          _         => UIO.unit
        , nextState => loop(nextState)
      )

    loop(State.default)
  }

  def run(args: List[String]): ZIO[Environment, Nothing, Int] =
    for {
      env <- prepareEnvironment
      out <- program.provide(env).foldM(
          error => console.putStrLn(s"Execution failed with: $error") *> UIO.succeed(1)
        , _     => UIO.succeed(0)
        )
    } yield out

  private val prepareEnvironment =
    UIO.succeed(
      new app.ControllerLive
        with app.RunLoopLive
        with cli.TerminalLive
        with logic.GameLogicLive
        with logic.OpponentAiLive
        with mode.ConfirmModeLive
        with mode.GameModeLive
        with mode.MenuModeLive
        with parser.ConfirmCommandParserLive
        with parser.GameCommandParserLive
        with parser.MenuCommandParserLive
        with view.ConfirmViewLive
        with view.GameViewLive
        with view.MenuViewLive
        with zio.console.Console.Live
        with zio.random.Random.Live {}
    )
}

I've skipped the details of many services; you can look up the finished code in the ioleo/zio-by-example repository. We don't have to explicitly state the full environment type for our program. It only requires RunLoop, but as soon as we provide RunLoopLive, the compiler will require that we provide the Terminal and Controller services. When we provide the Live implementations of those, they in turn add further dependencies of their own. This way, we build our final environment incrementally, with the generous help of the Scala compiler, which will output readable and accurate errors if we forget to provide any required service.


In this blog entry, we've looked at how to build a modular command-line application using ZIO. We've also covered basic testing using the ZIO Test framework and its mocking facilities. However, this is just the tip of the iceberg. ZIO is much more powerful, and we have not yet touched the powerful utilities it provides for asynchronous and concurrent programming. To run the TicTacToe game, clone the ioleo/zio-by-example repository and run sbt tictactoe/run. Have fun!


How to approach problematic areas using simulators and more

It's no secret that mobile devices are taking over the world. Today, it's possible to complete basically any operation anywhere, within a few seconds – all thanks to smartphones. This market is constantly growing, slowly outrunning laptops, not to mention other stationary devices.

With the growing number of mobile devices on the market, the problem of how to keep up with the needs of users and provide them with high-quality software is also increasing. To meet these demands, we need a specific approach. That’s why testing for mobile apps is a completely different topic than web application testing.

To automate or not to automate? That is the question!

I think testing mobile applications is a good candidate for automation, as it often allows you to achieve high test coverage. However, it can be time-consuming and, depending on the specifics of a project, not very profitable.

For over a year, my QA team and I tested specific, short advertisements – each of them unique, with at least five of them a day. The space for automation was also limited and covered only basic elements, such as detecting html5_interaction, playback of subsequent video elements, and detecting whether the game 'install' button was clicked.

Test coverage

Our tests covered a wide range of devices: iPhones, the entire range of Android phones, and Amazon devices.

The test coverage included:

Division by system:

  •  Android from 5.1.1 to the newest (currently 10.0)
  •  iOS from 9.3.5 to the newest (currently 13)

Division by type of devices:

  • Amazon (in this case one device was enough, e.g. Fire HD 8 Tablet)
  • Low-spec devices – e.g. iPhone 5s or Samsung J3
  • High-spec devices – e.g. iPhone 8+ or Samsung Galaxy J7
  • Wide aspect ratios – e.g. Samsung Galaxy S8+ or Google Pixel 2
  • Old and new iPads – e.g. iPad Air 2 and the new generation
  • Android Tablet – e.g. Samsung Tab S3
  • iPhone X family – these devices generate a lot of visual issues, so they had a separate test case as a device type

What tests were carried out and what were the most common problem areas?

Testing for mobile devices is not the same as testing desktop applications, not only regarding the number of devices but also the methods of testing them and the focus areas.

Testing for mobile apps problem 1: Scaling

We focused on scaling and loading the ad. When the company logo or inscriptions were covered, the issue had high priority. The phone's notch – for example on the iPhone X, or on wide-aspect-ratio devices such as the Samsung Galaxy S8+ – was a big problem (e.g. the notch covering half of the name of the advertised place).

Testing for mobile apps problem 2: iPads

Tests on iPads generated a lot of errors, due to the fact that they can rotate a full 360 degrees. For this reason, there were often problems with images not being fully displayed or the screen not adjusting; this sometimes even resulted in the video stopping or the entire advertisement jamming. The problems were so frequent that iPad fixes on the dev side "ruined" the functionality of other devices or were simply not feasible. After taking all of the conditions into consideration, especially the challenging time frames for our tests, we decided to lower the priority of iPad fixes.

Testing for mobile apps problem 3: Functional side

Functional tests were performed in various combinations. The most problematic area, it turned out, was "the background issue". Going back to the app after putting it in the background made some of the mechanisms in the ad fail. Another thing was that functions failed to shut down after switching to the background – for example, the music from the video kept playing. This was the most common problem with the videos.

Testing for mobile apps problem 4: Open store

Going to the store or opening dedicated links was also very important. It was quite a challenge to check an ad's availability in a given country: when an item is not available in your country, the Android (Google Play) store will simply display this information. However, it's not as easy with the App Store. In Apple's case, you will receive a blank page and no information about what has happened – which is obviously not what we want our users to experience.

Testing for mobile apps problem 5: Performance

Performance tests were carried out using one of the tools for testing mobile apps that I recommend – Charles Proxy – which I will elaborate on later in the article. It helped to simulate an internet slowdown to as low as 512 kbps, but we most often used 3G, which was enough to induce the performance problems we were analyzing.

Tools for testing mobile apps: Charles Proxy

So what is Charles Proxy? According to their website:

“Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).”

For me, Charles Proxy helps to monitor requests or to exchange the body of a request. For example, Xcode only has a simulator for iOS 9.3, but our tests had to be performed on iOS 9.3.5. So I had to rewrite the rule – and to do that, all I needed was to configure the file of the app and simply change the value in the body of the request.

Note: To use simulators for iOS 9, you must have an Xcode version below 11.1, because from that version on, the oldest system version supported by the simulators is 10.3.1.

Charles Proxy: How to set it up?

To set up Charles Proxy so that it can read traffic between machines, all you need are a few short steps:

  1. Download Charles Proxy from the official website. If you want to try it out first, there's a 30-day free trial.
  2. After installing and opening the app, click Help -> SSL Proxying -> Install Charles Certificate on a Mobile Device or Remote Browser.
(Screenshot: installing the Charles certificate on a mobile device)

You will then see the name and port of the proxy server, along with information about installing the certificate from the site on your phone.

Where should you enter this data? It depends on the OS:

Android: Settings -> WiFi -> Manage network settings -> Show advanced options -> Proxy -> Manual -> Enter the Server Name and Port and click Save (example on Samsung Galaxy J3)

iOS: Settings -> WiFi (hold the given network) -> Configure Proxy -> Manually -> Enter Server and Port -> Click "Save" (iPhone 5s example). After entering and saving, open the certificate download address shown by Charles to download the certificate.

3. The last thing you need to do is enable SSL proxying with a wildcard entry.

Now you can see your traffic. Also, one thing I mentioned earlier – slowing down the internet to check the performance of your ad or app – can be found under Throttle Settings in the Proxy tab.


Note: For Android 7+ you need to add an XML configuration file to your application (or ask a developer for it) that allows you to monitor the connection. You can find more information on how to do this in the documentation.

The basic test coverage for mobile devices

Buying every possible device to test the hell out of every application is quite expensive and requires you to always be up to date with new devices. If you are on a tight budget and short on time, you have to consider which devices and systems are the most important for your tests. To decide on priorities, think about which systems are the most popular, and then simply test them. In the case of iOS, most users update to the latest version, and over time Apple ceases support for old versions of applications. Interesting fact: at this point, on iOS 10.0.2 (e.g. iPhone 6s) there is no application that would allow us to record the screen.

In the table below you can see the usage of all iOS versions:

(Chart: iOS version adoption trends)

As far as Android is concerned, it is not as straightforward with OS versions. There are still devices with Android 3 or 4 in daily use. On the plus side, versions of Android usually aren't too different from each other: when there's a bug, it's rarely found in only one version – usually it occurs on most of the other systems too.

When we can't use physical devices for our tests, we can use tools for testing mobile apps such as the iOS Simulator or the Android Emulator. From my experience, Apple's Xcode Simulator is a very useful tool. In contrast, testing Android apps this way is much harder, and I would opt for physical devices whenever possible. Why? I'll explain in a second.


Now, let me tell you about the simulators and emulators that are available with Xcode for iOS and Android Studio for Android.

Simulator for iOS

iOS simulators have many options and can really reflect the real devices. Often, the bugs you find on the simulator match the ones on the physical device. The iOS simulator, just like a physical device, includes a silent switch, volume buttons, a lock button and a home button. There are many types of devices to choose from, and almost every OS version ever released is available.

Tip: To use Charles Proxy for reading traffic with the iOS simulator, we need to enable 'macOS Proxy' and install the certificate on the iOS simulator.

So if you don’t want to invest a lot of money in buying all types of devices, Xcode simulators will cover most of the basic tests.

(Screenshot: simulator options and the device types supported in Xcode)

Android emulator

Unfortunately, emulators in Android Studio generate a lot more issues than the iOS Simulator. In my experience, many issues found on emulators do not occur on a physical device, so in the case of Android it is better to buy a device or use device farms.

If you want to try it for yourself: to set up a new emulator, you need to choose a hardware type and an OS version.

The Android emulator has a lot of options to use, e.g. battery health, location, and network type. It never harms to try them all out and see how they work for you. And if you don't fall in love with it – just like I didn't – check out the best device farms for iOS and Android.


When it comes to testing whole systems, or even a few applications, which for financial reasons and time frames are not profitable to automate, you have to consider what physical phone resources you have, what your base test coverage may be, and which tests are crucial for the application. When you figure these things out, testing will automatically become much more effective and cost-efficient.

Thanks for reading the whole article! I hope it provided a helpful dose of knowledge and helps you find your feet in an era of the growing popularity of mobile devices and their testing.
