The bare-metal servers were not running properly, and our client was wasting significant resources just keeping their systems hosted. The infrastructure had been built in the pre-cloud era, with parts of the system spread across different hosting locations, which made it hard to manage access and transfer data. The cost of maintaining the servers, sourcing new parts, and keeping everything running at once had grown to the point where it was directly affecting the company's stability. Scaling the infrastructure was also difficult because of physical limitations and deprecated software that did not support automatic scaling. The client's team was frustrated and wanted to improve things, but without a serious change of approach to hosting, only small improvements were possible; the infrastructure needed much bigger changes. This case study illustrates how we deal with such problems and deliver a final solution.
We investigated several different solutions. A private cloud was one option, but it was rejected: a private cloud is dedicated to a single purpose, while the customer planned to build infrastructure that could serve multiple business targets in the future. After ruling this out, we chose AWS, mainly because of its leading position in the market.
The approach to migrating the client to the cloud had to take into account a few important risks:
▸ During the migration, the system could be down for no more than 60 seconds.
▸ There were more than 90 servers running different operating systems (Windows and Linux), which had to be migrated in a strict order to keep availability high.
▸ The system was physically distributed and had to be unified into a single solution hosted on AWS.
▸ The database infrastructure was suboptimal and had to be redesigned before any migration could begin.
The process we designed was as follows:
1. Dev and test accounts were created in AWS so the migration could be tested at every stage.
2. We created an architectural design diagram to define how the target infrastructure should be laid out.
3. We compared the hardware options (cloud vs. bare-metal) to make sure an equivalent server farm could be launched in AWS.
4. We started by creating an infrastructure in AWS:
▸ Networking parts (VPC, security groups, routes).
▸ EC2 fleet for services.
▸ CloudFormation templates to automate server deployments.
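The networking and deployment pieces listed above can be sketched as a minimal CloudFormation template. This is an illustrative fragment only; the resource names, CIDR ranges, AMI ID, and instance type are placeholder assumptions, not values from the actual project:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch of the networking layer (illustrative values only)
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTPS from anywhere
      VpcId: !Ref AppVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678   # placeholder AMI id
      InstanceType: t3.medium
      SubnetId: !Ref AppSubnet
      SecurityGroupIds:
        - !Ref AppSecurityGroup
```

In the real migration, templates like this were the unit of automation: each of the 90+ servers could be described declaratively and launched in the strict order the availability requirements demanded.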
We worked with the client daily and remotely in a team of two (a senior developer from Scalac and Indeni's tech lead), coordinating through a lightweight approach: daily updates, constant interaction over a chat application, and the occasional call with screen-sharing.
To achieve the goals we set, the project also included:
● Scalac's Happiness Maker, who smoothed out all aspects of the collaboration
● Time-limited development support from another senior Scala expert
● A one-shot, in-depth brainstorming and design review by a pool of three Scalac senior engineers (including the project's developer)
The milestones reached in the project:
● Converting the initial solution from Akka actors to a fully streaming data system based on akka-streams.
● Reaching feature-completeness against the specification.
● Maximizing test coverage across the whole codebase.
● Simplifying the system into modular components and removing bottlenecks to increase performance.
● Applying well-known patterns and custom designs to make the system resilient to failure from the ground up.
● Fine-tuning the configuration of all the moving pieces to reach the optimal operational level.
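The first milestone, moving from ad-hoc Akka actors to akka-streams, can be illustrated with a minimal sketch. This assumes Akka 2.6+, and the pipeline, object name, and sample data are hypothetical, not the client's actual code:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}

import scala.concurrent.Await
import scala.concurrent.duration._

object StreamingSketch {
  // The ActorSystem doubles as the stream materializer in Akka 2.6+.
  implicit val system: ActorSystem = ActorSystem("sketch")

  // A Flow stage replaces what was previously a parsing actor: stages are
  // composable, and backpressure propagates upstream automatically instead
  // of mailboxes silently filling up.
  val parse: Flow[String, Int, _] = Flow[String].map(_.trim.toInt)

  def run(lines: List[String]): Int = {
    val sum = Source(lines).via(parse).runWith(Sink.fold(0)(_ + _))
    Await.result(sum, 5.seconds)
  }

  def main(args: Array[String]): Unit = {
    println(run(List(" 1", "2 ", "3")))
    system.terminate()
  }
}
```

The design gain is that each processing step becomes a reusable `Flow`, so removing a bottleneck means swapping or parallelizing one stage rather than reworking actor message protocols.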
See how our team contributed to customers’ success.