10 ways Java is getting better


You might think of Java as outdated. It is, indeed, 27 years old, which is the equivalent of a century in “IT years”. Its younger, fresher, hipper siblings — such as Scala, Kotlin, Clojure, or Groovy — might be more dynamic and easier to use. But don’t discard Java just yet!

Andrei Micu, Senior Scala Developer at Levi9, has noticed that Java is evolving and using its younger siblings for inspiration and improvement. Andrei talks here about two projects that intend to evolve the JDK, with their associated JEPs (JDK Enhancement Proposals): Project Amber and Project Valhalla.

Let’s look at some places where Java is a bit behind its siblings, and at how syntactic sugar and standardization simplify the code and make it more efficient.


1. Using records like in Scala or Kotlin

POJOs (Plain Old Java Objects) make data encapsulation a bit tedious. They require a lot of boilerplate, while languages such as Scala and Kotlin came up with features that standardized and simplified the writing: the same result can be achieved with a case class in Scala or a data class in Kotlin. This is what Project Amber set out to change.

Project Amber added some syntactic sugar in the form of records — previewed in JDK 14 and finalized in JDK 16 — which bring the same feature to Java.
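
As a minimal sketch, assuming a simple two-field data carrier named Point (a hypothetical example), the record below replaces the constructor, accessors, equals, hashCode, and toString you would otherwise write by hand in a POJO:

```java
// Point.java — a record generates the constructor, accessors, equals, hashCode, and toString.
public record Point(int x, int y) { }

// Usage:
//   Point p = new Point(1, 2);
//   p.x();                        // 1
//   p.equals(new Point(1, 2));    // true
```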


2. Better Pattern Matching

Another interesting change brought by Project Amber is pattern matching. Old Java is not very flexible with switches, but Scala is way ahead: in a single match expression you can match on a list, on the components of a list, and extract values from them.
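
For contrast, here is roughly what the classic, pre-Amber Java switch is limited to: constant labels (ints, strings, enums) and no destructuring. The describe method and status codes are just a hypothetical illustration:

```java
// Classic Java switch: constant labels only, no type tests, no destructuring.
static String describe(int statusCode) {
    switch (statusCode) {
        case 200: return "OK";
        case 404: return "Not Found";
        default:  return "Unknown";
    }
}
```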


3. Pattern Matching for instanceof


Pattern matching for instanceof was previewed in JDK 14 and finalized in JDK 16. Previously, when you checked whether an object was an instance of a class, an extra line was needed to cast it to the checked type. Now you can simply write a variable name after the type in the instanceof check and skip that extra line. A small, neat improvement.
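
A minimal before-and-after sketch (printLength is a hypothetical helper):

```java
static void printLength(Object obj) {
    // Before JDK 16: test, then cast on a separate line.
    if (obj instanceof String) {
        String s = (String) obj;
        System.out.println(s.length());
    }

    // Since JDK 16 (previewed in JDK 14 and 15): the check also binds the variable.
    if (obj instanceof String s) {
        System.out.println(s.length());
    }
}
```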


4. Pattern Matching for classes in switch


JDK 17 brought pattern matching to switch as a preview feature (it was finalized in JDK 21). If you want to branch on an object’s class, that is now possible in switch statements and expressions too.
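
A minimal sketch of a type-pattern switch; it runs on JDK 21, or on JDK 17+ with preview features enabled, and the format method is a hypothetical example:

```java
static String format(Object obj) {
    return switch (obj) {
        case Integer i -> "an int: " + i;
        case Long l    -> "a long: " + l;
        case String s  -> "a String of length " + s.length();
        default        -> "something else: " + obj;
    };
}
```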


5. Increased Expressiveness


JDK 18 continued with a second preview that increased expressiveness. Not only can you check classes in switch statements, you can also match particular values and classes in the same switch, and add guards and an explicit null case.
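
A sketch combining an explicit null case, a guarded type pattern, and plain type patterns. It uses the `when` guard keyword from the later previews and the finalized feature in JDK 21; the JDK 17/18 previews spelled guards with `&&` instead. The classify method is a hypothetical example:

```java
static String classify(Object obj) {
    return switch (obj) {
        case null                 -> "nothing here";
        case Integer i when i > 0 -> "a positive int";
        case Integer i            -> "a non-positive int";
        case String s             -> "a string: " + s;
        default                   -> "something else";
    };
}
```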


6. Introducing Record Pattern (Preview)


In the future, we will also have record patterns. Previously, if you received a record at this point, you had to write several lines of code to access its members; with a record pattern you can write an instanceof check that names the record’s components (for example, a Point’s x and y), and they are extracted for convenient use afterwards.

You can do the same in switch cases. This sort of expressiveness deconstructs the record right in the pattern, which is something pretty new in Java, so it will probably take a bit more time to cook.
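
A minimal sketch, reusing the hypothetical Point record from above (record patterns were previewed from JDK 19 and finalized in JDK 21):

```java
record Point(int x, int y) { }

// instanceof with a record pattern: the components are deconstructed in place.
static void printIfPoint(Object obj) {
    if (obj instanceof Point(int x, int y)) {
        System.out.println(x + ", " + y);
    }
}

// The same deconstruction in a switch case.
static int sum(Object obj) {
    return switch (obj) {
        case Point(int x, int y) -> x + y;
        default                  -> 0;
    };
}
```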


7. Classes for the Basic Primitives (Preview)

We are now in the territory of primitive types, the focus of Project Valhalla. In Java, the primitive types cover most basic data, but anything that is not on the primitive list derives from Object, and that is quite an issue. Scala and Kotlin take a different approach: each exposes just one class (for example, Scala’s Int) and the compiler decides when to use the underlying primitive for you.

Java, by contrast, does not have a proper class for its primitive types (the wrapper classes such as Integer are separate, boxed types), which makes it difficult to work with primitives and objects seamlessly.

But this may change, and primitive types might get a class. Such a class is marked with a `Q` prefix in the JVM’s internal type descriptors to signal that it is a primitive type.


8. Declaring Objects that don’t have identity (Preview)

Java wants to offer the possibility of making objects behave like primitive types. The first step is the ability to declare objects that don’t have identity.

But what does “having identity” mean? Long story short, when objects have identity, two instances holding the same value are still distinguishable (for example, by ==). This is not the case for primitive types, which is why we say that primitive types do not have identity.
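
A small illustration with today’s Java, using a hypothetical Box class:

```java
public class IdentityDemo {
    static final class Box {
        final int value;
        Box(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        Box a = new Box(42);
        Box b = new Box(42);
        System.out.println(a == b);   // false: each instance has its own identity
        int x = 42;
        int y = 42;
        System.out.println(x == y);   // true: primitives are compared purely by value
    }
}
```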

Furthermore, objects without identity can be stored on the stack (or flattened into their containers), while objects with identity live on the heap.

Value objects (or classes) don’t have identity. Let’s look at how they are declared.

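
The exact syntax is still being settled under Project Valhalla, so the sketch below only illustrates the direction of the current JEP drafts; it does not compile on any shipped JDK, and the Range class is a hypothetical example:

```java
// Proposed Valhalla syntax: a value class has no identity.
value class Range {
    private final int low;    // fields of a value class are implicitly final
    private final int high;

    Range(int low, int high) {
        this.low = low;
        this.high = high;
    }
}
```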

While the outline of the definition is still a work in progress, some rules have already been established. The class is implicitly final, its fields are implicitly final, and no constructor can call super. Value objects can, however, have reference cycles, meaning a field may reference another object of the same type.

What do we get in return? The objects can be placed on the stack, and == equality holds between two value objects carrying the same value.


9. Primitive Classes (Preview)


A primitive class is like a value class with the additional restriction that no field can be of the declaring class, so we don’t have cycles like we have for value objects.

What do we get in return? We get almost the same performance as primitive types. They do not allow nulls; some might say this is not a benefit, but anyone who has chased a NullPointerException might think otherwise.
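
Again, this is only a sketch of the proposed Valhalla syntax, not compilable on shipped JDKs, with Complex as a hypothetical example:

```java
// Proposed Valhalla syntax: like a value class, but no field may be of the declaring
// type (no cycles) and instances can never be null.
primitive class Complex {
    private final double re;
    private final double im;

    Complex(double re, double im) {
        this.re = re;
        this.im = im;
    }
}
```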


10. Reified Generics (maybe)

While not yet official, Project Valhalla does drop a hint about tackling reified generics, where it may again borrow from Scala and Kotlin.

What is the issue in Java right now?
Well, when objects with generic types are passed around, the information about the type arguments is lost at runtime. This is called `type erasure`, and it is a long-standing limitation of the JVM.

There is a workaround in Java that requires you to pass the class as a separate parameter. Definitely not elegant! Kotlin and Scala have better ways of dealing with this.
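
A minimal sketch of that workaround, with firstOfType as a hypothetical helper:

```java
import java.util.List;

public class ErasureWorkaround {
    // The type argument T is erased, so the caller has to pass the Class object explicitly.
    static <T> T firstOfType(List<?> items, Class<T> type) {
        for (Object item : items) {
            if (type.isInstance(item)) {
                return type.cast(item);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String s = firstOfType(List.of(1, "two", 3.0), String.class);
        System.out.println(s);   // two
    }
}
```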

Kotlin deals with this with a neat feature called “reified generics”. If you declare the function inline, meaning its body is copied to the call site, you can mark a type parameter as reified and the compiler substitutes the actual type at each call site. This way, you don’t have to pass the class as an extra parameter.

Scala works around this in a different way: it uses implicit parameters (typically a ClassTag) to have the compiler pass the class information for you.


What do you think?

Do these Java improvements seem significant to you? Andrei Micu stresses that standardization is not easy. Java’s younger siblings are a bit lighter on standardization, which lets them ship features faster. Java, however, is so widely used that keeping backward and forward compatibility is crucial. Sometimes this means holding back on innovation in order to keep all of its users happy.

Let’s keep in mind Java is running a marathon, says Andrei Micu. Its younger sibling languages, with more spring in their step, will also benefit from JVM improvements and new features. It’s a win-win.



Accelerate your career with Levi9


You pick the track, we’ll make you win.

Our passion lies in Formula 1, and we embrace the same attitude and spirit in our work culture. Only this time, you are competing solely with yourself. Your game, your rules! We are the team that helps you get the prize. See exactly how it works.

Design your own development path


Take control of the steering wheel and accelerate your career

Your career might feel like driving a racing car. But you know it involves much more than speed. To become the best version of yourself, we empower you to create the impact you want to deliver, at your own pace. Resources will come along in no time.

You are the designer of your development path. The team will copilot your interests and goals, offering advice and guidance, keeping you on track with work-life balance.

Your career is the Grand Prix we’re cheering for

At Levi9, you are empowered with education, challenges, a voice, and a circuit to showcase your success. Every one of us has our own style, and we work in a state of flow: continuously growing, challenged but not burnt out, capable but not bored. Our services are not only a collection of technologies and activities. The real difference is made by how we do it. Always together.

LOCATION

ROMANIA: Iași, RO (47° 9′ 6.2136” N, 27° 35′ 16.4904” E)

COMPANY STATS: 4 countries, 1067 Levi Niners, 17 years

We are based in Iași, with headquarters in Amsterdam, Netherlands. Some might say we are an IT service provider. But in reality, we are the IT service partner for our clients, transforming their business goals into our success in technology.  Our customers empower us with freedom over solutions and autonomy over the implementation. And we answer to their trust with transparency and ownership.

CIRCUIT MAP


We all visualize our careers as a race until we realize it is not a trophy we are after. It is the ongoing partnership to build a thriving career path, at our own pace. This is how we do it on the Levi9 circuit. Let’s share the track!

Performance

Performance in the flow of work

While you are in the zone, delivering impact day by day, all the processes and the team are set to channel your performance in the direction you want. By the way, the members of your race team are the Team lead, Delivery Manager, Department Manager, and colleagues.

Communities

Niners unite communities

Name your interest and like-minded levi niners will hop in. Find your community inside Levi9: from Cloud and Agile, all the way to parenting and biking. Discover all of them or build yours!

Customers

Excellent customer focus

Delivering a functional product is one thing. Delivering impact is a totally different one. Our mindset and processes are focused on understanding the customer, having a good laugh over a beer, and building solutions that directly impact the customer’s users and the business goals.

Workplace

World-class workplace

We are a World-class Workplace and Levi Niners themselves are the ones who evaluated us. The label is based on the employee score from the survey done by an independent employee satisfaction agency, Effectory.

Teamwork

Learn by sharing

Study groups, tech events, mentoring, and communities. The context is set for levi niners to grow their expertise by sharing or learning from their teammates. That’s our engine that never fails.

Career

Learn by doing

Fail fast, keep it agile, and innovate incrementally. Just break the routine by implementing something new. You have the time and resources to learn, but also space to test and scale new insights.

Experience

Proven track record

Working with titans, we take their business goals seriously. Our experience is counted not in years but in the number of times our customers succeed on the market thanks to our stunning and flawless tech solutions.




Levi9 IT Services is upgrading the infrastructure for bicycles of two major high schools in Iasi

On World Bicycle Day, we empower kids’ rides! We have chosen the bicycle as a main means of transport and upgraded the cycling infrastructure of two of the largest high schools in Iași.


In partnership with the organization Iași Orașul Bicicletelor, Levi9 installs modular bicycle stations at the “Costache Negruzzi” National College and the “Gr. C. Moisil” High School of Informatics, the first two high schools to have entered this modernization program. Starting June 3rd, the students of the two high schools will be able to use the stations that will protect their bicycles during classes.

“We like to pedal and move freely! So do my colleagues, as well as the parents’ and teachers’ associations of both high schools, who have supported us in finding a sustainable solution for the active transportation of their students. Thus, together with our community partner, Iași Orașul Bicicletelor, we developed a project that meets exactly the urban mobility needs of the high schools involved,” said Anca Gafițeanu, Delivery Center Director of Levi9 IT Services Iași.

The whole Levi9 community embraced the commitment to sustainability, from global to local and individual levels. Our “One common goal” value is reflected not only in the work environment. Every one of us finds a way to follow the green initiatives according to our personal needs and passions.

In recent years, Levi9 Iași has supported several urban mobility projects necessary for our community. In the next period, other school institutions from Iași will be included in the modernization program.

“We support cycling because it helps young people move freely and independently, challenging them to discover Iași. In the meantime, it allows them to observe the beauty and the architecture of the city they live in while choosing the shortest way home or to school,” said Costi Andriescu from Iași Orașul Bicicletelor.

The civic initiative belongs to the Iași Orașul Bicicletelor community, and the implementation was possible due to the investment made by Levi9 IT Services. Every year, the company is involved in transforming students’ social habits so that more and more bicycles are properly parked in the high schools of Iași. We also thank the representatives of two Parents-Teachers Associations of the “Grigore C. Moisil” Theoretical High School of Informatics and “Costache Negruzzi” National College Iași, with whose support the Green Zone for bicycles came to life.

Although our color is blue, we go green. Of course, one step at a time. Hopefully, by putting all the steps together, we will discover how far we’ve got down the sustainability road.


Azure Skies and Smooth Cloud Sailing


130 years ago, when our client first started, a cloud may have been a bad omen. It could have meant the impending approach of a storm preventing its ships from reaching their destination. Today? The cloud is what helps our clients make the best decisions, based on hard, consistent, relevant data. One of these vessels shipping BI services through the cloud is called Levi9, with a team helmed by Carmen Girigan, Senior Data Engineer, and Eliza Enache, Medior Data Engineer.


From Data to Information

If a business founded in 1890 is still thriving today, it must mean that it has adapted over and over again. More recently, this meant adopting a cloud-based data solution with the help of Levi9 and other cloud providers. Vroon, an international company with a vast tradition in naval transport, was looking for ways to optimize their data solution, in order to make better decisions. We picked up a legacy on-premises solution and transformed it into a personalized, highly-scalable Single Source of Truth in the cloud.

The main challenge was picking up a complex solution using external data sources, requiring maintenance and development on multiple modules, and offering limited aggregated results. To learn how the business was doing, Vroon business users had to look into multiple sources.

However, data is mostly useless unless it becomes information. But what does this mean?

“You might look at Data as the input that your brain perceives through the senses,” says Eliza Enache, one of the Levi9 “sailors”. “The brain then creates connections between these data points by applying certain logical processes (filtering, algorithms, etc.). The connections between data points (and connections of connections) might be called Information.”

“Raw data is useful only when it is organized and interpreted in such a way that it helps in decision making, becoming information,” completes Carmen Girigan, also a member of the Levi9 team.

After working with Levi9, Vroon users query one service and get the answers they need.


From SQL Server to Azure cloud solutions

The Vroon business had been getting information using on-site Microsoft technologies, relying heavily on SQL Servers. Data was coming in from various siloed sources such as accounting, procurement, invoicing, customers, vessel activities, incident reporting, and many others.

Data were extracted, transformed, and loaded into the warehouse, via Microsoft SQL Server Integration Services, as well as custom applications and SQL Server stored procedures. On top of the data marts, there were 5 analytical databases with 11 multidimensional OLAP cubes, developed with Microsoft SQL Server Analysis Services. These cubes would then be accessed via the reporting tool, for reports, dashboards, and analysis.

The cloud conversion started in 2020. The new solution was based on Microsoft Azure technologies, in order to streamline extract-load-transform (ELT) processes and maintenance, by using a gradual approach that would allow evaluation and constant adjustment to new situations, as well as high availability for the users’ new requirements.

Now, data is extracted from its sources via Azure Data Factory and loaded into an Azure Data Lake container. It then goes through a transformation process using Azure Databricks or Azure Synapse Analytics. Afterward, it is loaded into an Azure Synapse Analytics data warehouse, based on which a tabular model is developed with Visual Studio and later deployed to Azure Analysis Services. Power BI is used for data presentation and user self-service report creation.

[Diagram: the Levi9 cloud technology stack]

The result was the transformation into a modern, cloud-based, and highly-scalable solution, that provides information in a unified way: a Single Source of Truth for multiple business areas.

Having a single source of truth is essential for enabling good business decisions. “Everybody sees the same figures, at the same time, and it’s more reliable” Carmen explains. “Long story short, you avoid conflicting ‘versions of truth’ when different users analyze data exported at different points in time”, adds Eliza.


Cloud sailors tip: Clean your data like a ship’s floor

Whenever thinking of a ship, one image inevitably comes to mind: sailors cleaning the deck, almost to the point of obsession. In the 18th century, this was done to keep the crew healthy.

Today’s Levi9 data engineers have a similar attitude towards data: it should be maintained clean, to avoid the risk of infecting the budget, reports, and any meaningful analysis.

“In the cloud, resources are much easier to maintain and scale — but we do have to pay attention to costs. There is no point in having duplicated or unutilized data, or transformations, in the cloud,” explains Carmen. Eliza adds that there is much more to it than that: “During the testing phase, we noticed that redundant or obsolete data led to improper analysis and estimation, and diversion from the ‘main track’ — which in turn led to an increase in time and costs, while also blocking further developments. Some of these redundancies made us go back through the whole logic.”

Those who don’t empathize with the nautical experience might compare data cleansing with their wardrobes: “Just think of that closet full of clothes you don’t like to wear, things you’ve kept for years that only clutter your space and make for hefty baggage,” says Eliza.


An ongoing adventure

Part of Vroon’s legacy system is still running and is in the process of being transformed. Eliza explains why sometimes this is a better solution: “We soon realized that converting and rewriting the whole logic, from old technologies into Azure, would take a lot of time – thus stalling any further development initiative. As such, we devised a temporary workaround: part of the data is directly brought to and processed in Azure, while another part is brought from the on-premises data warehouse, and further integrated into the unified model.”

Starting with a pilot

“But we won’t do everything at once. First, we’ll do a pilot project,” adds Carmen. Just like sending a sentinel ship before moving the entire fleet, a pilot provides insight into what the job entails.

[Diagram: migrating from on-premises ETL to Azure ELT]

Fair winds and following seas

All in all, the project continues to be a learning opportunity for the team. All the Levi9 Vroon sailors have at least one Azure certification, as the department and company strategy aligned with the work. Carmen Girigan and Eliza Enache stress that they owe much of their learning to the other half of the initial engineering team that had established the foundations on which they continued to build in the cloud.

With the new pilot project, the adventure continues. May the team have fair winds and following seas!



Rebuild this castle - a data migration story

Imagine yourself in a run-down house. Make it a big one. Huge. Some walls are still standing, but barely. Some rooms are just a pile of bricks lying in the middle of the floor. There are secret passageways connecting random chambers, almost like a labyrinth. Did you make the house gigantic in your mind? Go on, make it even bigger. A quirky, dysfunctional, enormous castle.

Here is your challenge: you need to rebuild the castle on another land. In its new version, the building should regain its splendor, while guests and inhabitants alike should be able to find their way seamlessly. Oh, and if you fail, even slightly, even by a fraction, all the inhabitants of the palace lose their livelihood.

Proceed.

This is the magnitude of the challenge that software architect Sebastian Gavril found himself in, when he was approached to migrate a fintech’s core databases to the cloud.

Tinka, a Dutch fintech company, already had some of its services in the cloud, such as microservices, authentication systems, message brokers, CI/CD, and monitoring tools. But the core services, related to the actual financial processes, remained on-premise, part of the so-called legacy system: the virtual machines, databases, the Enterprise Service Bus, and workflows.

The job entailed moving 30+ TB of data from on-premises into the cloud, split over 30 Oracle database instances across multiple Exadata machines — mini data centers that can be rented out — and 7 environments.

In other words, the core services were still in the “old castle” and needed to be moved. All those discarded bricks, secret passageways — they were actually scattered databases on old servers, connected among each other in a complex network, being used by various applications, microservices, or monitoring tools.

So how does this story unfold?


The heroes

This job needed an architect. A software architect. “Normally, an architect first needs to understand his customers, their needs and desires, constraints and opportunities. Then he translates that into a formal language that can be used by engineers to construct the building his customers desire. And during the construction phase, the architect is always present to make sure the plan is followed and change requests from clients are included with minimal impact on the integrity of the building.”

This is what Sebastian did. He needed to have an ensemble view of the challenge, decide what’s needed, and get the team together. They were: Simona Gavrilescu (Delivery Manager), Ancuta Costan (Test Lead), and Monica Coros (DevOps Engineer), as well as employees of Tinka.

At Levi9, the team is defined together with the customer, with the aim of enhancing collaboration and working together as one team.


The castle

Moving core databases is an extremely complex project. When you move databases, you don’t just move the data, but you need to make sure that all the critical systems will function correctly.

Plus, the old castle — the legacy systems — were responsible for a large part of Tinka’s income. This is true for most companies.

While looking to achieve a positive impact for the client, one of Levi9’s goals for this project was to have no negative effect on Tinka’s customers. Customers were not supposed to notice anything. However, if things did go wrong, the company could have lost or corrupted customer data.


The new kingdom

So where was this new land where the castle was being built? The cloud.

Until recently, “the cloud” actually meant just a couple of key players, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. Nowadays, any large, established company starts to develop its own cloud: Oracle Cloud, IBM Cloud, DigitalOcean, Heroku, Alibaba. To attract customers, many offer features such as niche-specific services, better integration between cloud and legacy systems, and migration tools.

The chosen solution was Amazon RDS (Relational Database Service), the database service offered by AWS, mainly because part of Tinka’s infrastructure was already in AWS. After the migration, AWS would take care of maintenance, updates, and much more as part of the package.


The challenge

“It was imperative to understand the complexity, which I saw on two levels”, says Sebastian.

The first level of complexity is that of the database. In this case, the team used the AWS Schema Conversion Tool, which works alongside the AWS Database Migration Service (DMS). “When you move a database from one place to another, this tool generates an assessment that shows the objects, triggers, procedures, and packages. It also shows which ones can be converted automatically and where manual changes need to be made, through simple, medium, or complex actions. This helps us understand what database-specific effort we need,” explains software architect Sebastian Gavril.

The second level of complexity is that of the “landscape”, meaning everything that calls the database. The landscape was very rich: from old legacy servers that nobody really understood, to modern microservices, with everything in between (ESB, custom jobs, etc.).

This helps us understand where the problems are and what we need to change: connection strings, drivers, maybe even code that used functionality specific to Oracle Enterprise Edition (an edition we wanted to move away from).


The map

So what does moving the castle entail? Well, when planned systematically, things look pretty straightforward: you need to move the bricks and you need to redesign the rooms. In data transfer language this means that you need to take into consideration two aspects: the applications and the databases.

On the applications side, there are some conversions to be made: for example update libraries, change versions, rewrite some code, and make sure the configurations are in order.

The databases also need to be converted, either by using tools such as the schema conversion tool or manually. For some, you can find a procedure. For others, they need to be built from the ground up.

For migrating the actual data, just like renting a moving truck, AWS has a Database Migration Service offer. You can rent a machine, point to the “departure” address, set an “arrival” address, and up you go. It can take minutes, hours, days. Or weeks, in one of our cases.


It all comes crumbling down - Part One

Every project starts with “it’s not much”. “It’s practically done”. Sebastian Gavril knows to cringe at this one. The first migration project was started 1 year before and this was how it was presented to the team. The first go-live attempt was possible about 2 months after these famous phrases, after a lot of tweaking and testing on the initial solution.

And this was just one room.

In such a complex data transfer project, issues keep popping up at every step, sometimes baffling entire teams. “The funniest thing was when we migrated a database and ran a simple smoke test to validate that everything is working. The test failed repeatedly so we spent the whole day trying to fix the issue, only to roll back in the end. The second day we realized that we were simply testing the wrong way. We were probably a bit too nervous.”


Slaying the budget monster

Every castle needs its own monster and this was no exception. As one of the key issues of the project was to lower costs, the budget was the monster that needed slaying and slaying. “Traditionally, software engineers didn’t have to think about costs too much”. Having his own very healthy financial mindset for expenses at home, Sebastian Gavril explains why he took care to chip away costs at every opportunity. “With the rise of cloud providers, engineers are using cloud services more and more and taking most of the decisions in regards to the configuration of those resources. For example, the biggest databases you can order at the click of a button in AWS are ~80 times more expensive than what we chose for one of our medium workloads.”

For example, by keeping an eye on resource metrics for a database, the Levi9 team was able to cut costs by about 75% from initial estimates.


Time to say good bye

Some rooms of the castle simply got left behind and it was for the better.

One of the biggest surprises of the Levi9 team was when it realized it could take some smart decisions. The goal was to stop using our data center, not necessarily to migrate everything.

“We found out that the largest database, a 15 TB behemoth, was actually what we call “highly available”. That means, there are 2 databases instead of 1. The second one is there only if the first one crashes for various reasons. But this was not a business-critical database, so we did not need to have the db up and running 24/7. So we decided we can just drop one of the databases and rely on backups to restore the primary database in case of failure.”

It was the Levi9 team’s culture of focusing on one common goal, and the resulting deep knowledge of the business and the client, that made such surprisingly easy but important decisions possible.


It all comes crumbling down - Part Two

And then there was the huge data latency problem.

Levi9 practically moved the databases from the premises in Zwolle (Netherlands) to Dublin (Ireland). Some apps were left in Zwolle (Netherlands), to be migrated later, but they were calling on the database in Dublin.

“Data transfer is pretty fast nowadays, so normally this extra distance is not noticed. But due to the way one of the applications was written, the data was going to and from the database much more than needed. It became unusable, as some common operations were now taking tens of seconds to complete”

Because of the architecture of this application, it was like going to the castle for a lamp, then for the bed, then for the cover. The team was on the brink of giving up because it was not able to change the application. In the end, they managed to tweak the infrastructure so that it would allow for much more traffic than usual. It was like sending 3 people to the castle each for one thing only – not ideal, but it worked.


The welcome party

For a while, you will live between castles. You will still have the old one, but also the new one, perfectly in sync. And then?

And then you just need to make up your mind and throw a welcome party!

In data migration, the welcome party is extremely tricky. The inhabitants must not notice any difference, almost like they would be teleported. The service cannot be down for more than 15 minutes. The data about each Tinka client must be precisely the same. And, God forbid, DO NOT, under any circumstances, mess up their bank account. Good luck!

When it was time for the final move, Sebastian Gavril did what he always does: he planned for it. Sebastian Gavril and his team had a step-by-step, second-by-second to-do list.

How detailed should this plan be? “Just to give you an idea”, says Sebastian Gavril, “we prepared so thoroughly that one of the go-live plans had 66 tasks, each representing an action.”

There are hundreds of small decisions that need to be taken before the key moment: for example, at what time do we go live? And there are numerous connections to be made between databases and services, all with the purpose of a seamless transition, with as little downtime as possible. Maybe most importantly: what to do not if, but when, things go wrong.

Sebastian Gavril and his team split the very detailed action plan into three: before go-live (checking everything is in order at the new site), during go-live (making sure that each team member knows what to do at each moment) and after go-live (monitoring the new setup and being ready to take action if needed).


The happy ever after

It all ends well, as it should.

For a result-driven, impact-delivering, change-making company like Levi9, failure was not an option at any point. No data was lost. Customers of Tinka might have noticed a two- to three-minute website glitch, but nothing more.

“Having managed to deliver this feat was a great self-accomplishment. It’s also the kind of project that challenges and empowers a senior engineer to grow, because it involves many aspects that are not part of the day-to-day life: coding, database administration, database development, cloud, networking, DevOps, architecture, third-party management, finance. A very good opportunity to develop as a well-versed engineer.”

To move critical systems, to make everything run better, to significantly increase budget savings, all the while protecting the client from any negative consequence — this could all be summarized in what Levi9 wants to achieve: impact.



Stand with Ukraine


Levi9 is shocked by the developments in Ukraine. Our first priority is the safety of our employees and their families; we are in close contact with our teams in Kyiv and Lviv to provide concrete assistance. We follow all developments closely and discuss the situation and relevant measures on a daily basis. Our thoughts are with you. #StandWithUkraine #ukraine


CodePipeline for Serverless Applications With CloudFormation Templates


The CI/CD process of an application is crucial to its success and having an easy way to maintain said process can lead to an even healthier lifecycle for the application. Enter infrastructure as code: a reliable method of managing and provisioning pretty much anything you would need in a CI/CD pipeline to get it going, through the use of templates and definitions. Ever needed to keep the configurations of an AWS pipeline somewhere so you don’t need to remember the clicks from the Management Console by heart? Or maybe you wanted to give a working example to a colleague, so they can build their own pipeline. These problems and many more can be solved through infrastructure as code and CloudFormation if we’re talking AWS.

In the following lines, we’ll go through everything you need to create your own pipeline, connect it with other pipelines and maintain them only by running a not that complicated bash script. And by the end, you’ll probably come to realize how awesome infrastructure as code is (no need to thank us).


CodePipeline using CloudFormation

Building a pipeline in AWS’s CodePipeline is pretty simple and conceptually similar to other CI/CD tools out there, but it’s also quite verbose, requiring a considerable amount of code, especially if you want to build your pipeline in an infrastructure-as-code approach.

From a high level, there are 5 main types of resources we need in order to put together an AWS CodePipeline:

1. An S3 bucket resource — where our code artifact will be stored
2. A CodePipeline resource — this models what steps and actions our CodePipeline will include and execute
3. IAM Roles — one role that CodePipeline will assume during its execution, in order to create/update/deploy the resources in our codebase; a second role is used by the CodeCommit webhook (see #5)
4. CodeBuild Project(s) — used by the CodePipeline resource to execute the actual commands we want in our pipeline
5. An Event Rule — an AWS Event Rule that acts as a webhook, triggering the pipeline on each master branch change (this is only required when working with CodeCommit; if you use GitHub or another supported repo provider, there are built-in webhooks)

Now we’ll go over each of the resource types and then we’ll put it all together in a complete CodePipeline.yml definition for a serverless application built on AWS Lambda using the Serverless Framework (but not limited to these).

S3 bucket definition
You can consider this the code artifact store for our pipeline. This will be referenced and used later on by the pipeline itself to first upload the artifact and then use it to deploy the resources.

The definition is pretty straightforward:


CodePipeline definition

This will be the bulk of our pipeline definition code, defining what the stages of our pipeline are.
The top-level properties of a CodePipeline resource definition are:
1. ArtifactStore — this is where we’ll reference the S3 bucket that we’ve created earlier, using AWS’s !Ref intrinsic function
2. RoleArn — this is where we’ll reference the Role that CodePipeline will assume during its run, using the !GetAtt intrinsic function
3. Stages — a list of actions that our pipeline will perform

A high-level view over our CodePipeline definition will look like this:

Notice that, for the sake of simplicity, we’ve left out the definition of the actual stages, which we’ll cover later on. But as you can see, in our case, for a serverless application, the stages of our pipeline will be:

Source — retrieving the sources from a supported git repo (CodeCommit, GitHub, Bitbucket) or an S3 bucket (useful if you’re not working with one of the supported git repos, but this implies uploading your code to the bucket by your own means). We’ll stick to using a git repo since this is the most common scenario.
Staging — reference to the Deploy-to-Staging CodeBuild project, which will contain the actual deploy commands.
Promote to Production — a manual approval gate; this will prevent an automatic deploy to Production each time the pipeline runs.
Production — reference to the Deploy-to-Production CodeBuild project, which will contain the actual deploy commands.


CodeBuild project

CodeBuild is the equivalent of build runners from other CI/CD tools. In our case, we’ll use their ability to run CLI commands in order to deploy our application. The definition of a CodeBuild project that deploys our serverless + Node.js app to the staging environment/stack looks like this:

The most important part of a CodeBuild project is its buildspec.yml file, which defines the actual CLI commands that the project will execute. You can see it being referenced in the Source property. The buildspec.yml file (located in the root of the project) looks like this:

Putting all the pieces together here’s what the definition of the stages inside our CodePipeline definition will look like:


IAM Roles

As mentioned earlier, we’ll create 2 IAM roles. One to be used by CodePipeline itself and one used by the CodeCommit webhook that we’ll create in a future step.


CodePipelineServiceRole

An IAM role is just a collection of policies or access rights that a certain resource is allowed to have. In our case, this is the place where we define the access limits of the CodePipeline. This is important because, according to the AWS Well-Architected framework, a resource should be limited to accessing only the resources it’s supposed to access, as per the principle of least privilege. In other words, we should avoid defining IAM policies that give unnecessary privileges, like the one below:

and instead strive to limit the access boundaries through policies that look like this:

There is no silver bullet when creating a CodePipeline IAM role, because the policies (access rights) of the role will differ based on the actual resources that you use in your project. There are some policies that will be required regardless of your setup, as you can see below:

You can use the above definition as a starting point. Run your pipeline and add more permissions to the role based on the error messages received from AWS. Does your setup contain Lambdas? Then you should probably add some Lambda permissions. DynamoDB? Then add the necessary DynamoDB permissions. It’s a bit of a tedious process, but it will add to the security of your environment.


CodeCommit webhook (optional)

At this point, our CodePipeline is ready. The only thing missing is its ability to run on every change of the master branch. For this, we need just two things: an Event Rule and a Role. The “webhook” we’re creating is actually an Event Rule that listens for CloudWatch events emitted by CodeCommit and triggers our pipeline whenever there’s a change in the master branch (or any branch, for that matter). Fortunately, these are less verbose and look like this:


Deploying the pipeline

At this point, you should have everything ready and we can deploy the pipeline. This is pretty straightforward and can be done by running the following command:

$ aws cloudformation deploy --template-file <template>.yaml --stack-name <stack-name> --capabilities CAPABILITY_NAMED_IAM --region <region>

Note: the --capabilities CAPABILITY_NAMED_IAM flag is just an acknowledgment that you are aware that the template file will create named IAM roles.


The Serverless Framework catch

If you’re using The Serverless Framework, there are some changes that you have to do in the serverless.yml file. Normally, your serverless.yml file looks something like this:

Notice the profile: default property — this dictates which user/profile will be used by the Serverless Framework. The default profile is usually taken from the .aws/credentials file or environment variables, and it is usually an admin user with full privileges.

But once the pipeline tries to deploy the serverless framework stuff, the default profile means nothing, because there are no credentials config files in our pipeline.

So we have to make use of the cfnRole property offered by the Serverless Framework. This property accepts an IAM role ARN as its value and uses that role when deploying the AWS resources. So we just have to put the ARN of the role we created earlier in the cfnRole property, remove the profile property, and we should be set. (This means that we’ll need to deploy our pipeline template first in order to create the role, find its ARN, and update the serverless.yml file.)

See below the cfnRole property working along with the profile property by using the serverless-plugin-ifelse plugin. This makes it work on staging/production environments/stacks (when CodePipeline does the deployment) as well as development stacks (when you want to deploy your stack from your development machine).


Multiple CodePipelines using AWS Templates and Nested Stacks

We have seen how we can declare various resources and tie them together into a fully functional, independent pipeline. But what if we want to build a much larger template by combining multiple smaller ones? Or do we want to group similar resources, such as roles, lambda configurations, or policies? AWS’s answer to these questions is simple: Nested Stacks.

Nested Stacks are stack resources part of bigger stacks. They are treated as any other AWS resource, thus helping us avoid reaching the 200 resources limit of a stack. Also, they offer functionalities such as input and output parameters, through which the stacks communicate between themselves. In addition, the use of Nested Stacks is considered a best practice, as it facilitates reusability of common template patterns and scales well with larger architectures (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html).

So how can we make use of these fancy, well-regarded, Nested Stacks? Simple! Treat each of the 3 pipelines as an independent nested stack. Since they don’t contain that many resources (now or shortly, for that matter), segregating them by type doesn’t offer that big of an advantage and thus, the nested stacks won’t even require the use of input and output parameters, since they contain everything they need inside their definition. The 3 nested stacks would reside under a parent one, used as a single point of access whenever changes are made to any of them.

What needs to be done, then? There are 3 steps we need to follow to build our template with the use of nested stacks:


Define a root template

This file will reference all the other templates using the physical path:

This file is a simple collection of AWS::CloudFormation::Stack resources, referencing each template through its physical path. Here, you can also define the input/output parameters mentioned earlier, which are values passed between stacks when they need to communicate, but for us, it’s not the case.


Package the template

Once we’ve finished configuring our Nested Stacks setup, we can start combining them. Currently, Amazon only supports combining templates into larger ones using S3 buckets. The local files are uploaded to an existing bucket (passed as a parameter) and a new file is generated, this time containing not the physical paths but the S3 location URLs.

Running this command:

$ aws cloudformation package --template-file /root-template-file.yml --s3-bucket bucket-name --output-template-file my-packaged-template.yml

  • --template-file — the root template file, containing the physical paths
  • --output-template-file — the name and location of the newly generated template, containing the S3 paths
  • --s3-bucket — the name of the bucket used for packaging the files

will result in something like this:

Note: The S3 bucket used for holding these files needs to already be created, before the execution of the “package” command. Also, one needs to be careful not to include the definition of the deployment bucket inside any of the templates, since this would lead to a circular reference.


Deploy the template

Once the output template is generated, we can safely deploy it. CloudFormation will look for the specified files in the S3 bucket and create/update the root stack and, implicitly, the nested stacks. If the stacks already exist, they are evaluated based on changesets and if any differences are found, CloudFormation updates only the ones that were modified.

The deploy command goes like this:

$ aws cloudformation deploy --template-file /my-packaged-template.yml --stack-name my-stack-name --capabilities CAPABILITY_NAMED_IAM --region my-region

And that’s about it. You should now have 3 different pipelines created by your template. Not the smoothest process, but pretty straightforward, nonetheless. A possible solution could be automating this whole endeavor through a script. In the following section, we will see exactly how we can achieve this.


Automating the package/deploy process

As we’ve seen earlier, there are a couple of steps that we need to do to get our nested stack template packaged and deployed. Unfortunately, we have to go through all of the above processes each time we modify our pipeline.

Besides being cumbersome, doing all of these steps manually is not recommended, as it is prone to errors. After all, you’re creating an automated CI/CD pipeline to reduce the amount of work you have to do, not add to it. If you’ve reached this point and you’re asking yourself “do I have to do all of that every time I want to deploy my pipeline?”, then don’t worry, because the answer is no. But how can we avoid this hassle and automate the entire process? The solution? Bash scripts to the rescue!

Using a bash script, we can achieve the same result as manually deploying the pipeline(s), without giving ourselves a headache. Take a look below at an example of a simple script that does everything we need:

While this would probably work just fine (assuming the bucket exists and the template is valid), it’s a good idea to follow certain conventions regardless of the programming language you use. Let’s take a look at how we can improve our script a bit:

The above bash script does a couple of things:

Make sure the bucket exists
Because of the way nested stacks work, we need an S3 deployment bucket where our templates will be stored, to be used later by the root stack in the deployment process. Therefore, the first thing we need to do is ensure that the bucket exists. The head-bucket command (aws s3api head-bucket --bucket $bucketName) is perfect for this because it determines whether the bucket exists and whether we have permission to access it.

Validate the template
The next step is to make sure that the template we’re going to deploy is valid. To do this we can use the validate-template command (aws cloudformation validate-template --template-body $pathToTemplate) which, if the template is not valid, will return an error message detailing what is wrong with it. Once we’ve confirmed that the template is good, we can move forward and deploy it.

Package the template
The aws cloudformation package --template-file $pathToRootTemplate --output-template-file $pathToOutputTemplate --s3-bucket $bucketName command returns a copy of the root template, replacing the references to the local template files with the S3 locations where the command uploaded the artifacts. So basically, it sets us up for the next step, the actual deployment.

Deploy the pipeline(s)
After all this setup, we can finally deploy our pipeline(s). We do this with the deploy command (aws cloudformation deploy --template-file $pathToOutputTemplate --stack-name $stackName --capabilities CAPABILITY_NAMED_IAM --region $region), which uses the template generated by the package command in the previous step to create our (pipeline) resources. If this step succeeds, the pipeline(s) resources will be created in the specified stack.

And that’s it. We now have a script that does all the heavy lifting for us. All that’s left is to add the script to your package.json’s scripts section and you’re all set.


Conclusion

Quite a ride, wasn’t it? We’ve seen how to write the definition of an AWS pipeline and all its components, we’ve rigged a bunch of them together using CloudFormation and Nested Stacks and finally, we’ve automated the whole process through the use of a bash script. Hopefully, all of this came in handy and helped you avoid too many of those pesky configuration item changes on AWS when building your pipeline (guess what gave us an unusually hefty bill at the end of the month).

If you have any feedback for the article or the code presented in it, please send us an email. Every thought and idea is appreciated.

Thanks for reading and happy coding,

Andrei Arhip
Tudor Ioneasa
Andrei Diaconu, .NET Developer @Levi9



The architect mindset. My example of how to embrace a technical challenge


From newbie to experienced .NET developer and all the way to .NET Architect, this is my 14-year-long career at Levi9. In some cases, I’m even called an expert. These are big words, I know, but I see myself more as a person with a huge curiosity for understanding how things work and how I can use them to craft solutions.

Slowly but surely, I have turned my knowledge and experience into the mindset that defines me today, that helps me come up with answers, and fuels my professional growth. Technical challenges feed a developer’s mind. Let me share with you how I learned to deal with the pressure of finding a solution that could change the entire course of your project.

By the way, I’m Ionut, and in this article I’ve summed up my path of growing and excelling as a .NET Architect at Levi9. Take the best out of my examples of how my teammates and I succeeded in overcoming a technical challenge under strict deadlines.


Relying on technical concepts more than on tools

At the beginning of my career, I focused all my efforts on mastering the platform I was coding against: .NET. Later, I was intrigued by other platforms which could help me as a .NET developer.

After a couple of years, I started to realize that programming languages are just tools. More meaningful and useful for my career was the understanding of abstract technical concepts: whether a simple one, like a design pattern or the reason we have value types and reference types, or one of the heavy ones, like how web servers function or how the internet works nowadays.

” I appreciate the feeling that I have so much space to learn and experiment. It feels like I can make a difference.”

Understanding and relying more on concepts and techniques enabled me to adapt faster to such a dynamic and agile domain as software development. Besides that, I learned never to overlook or skip past a line of code, a concept, or a pattern I do not understand at first. We have to question everything, to understand how it functions and how it impacts the overall project.

Small hint: if I do not have time to do this on the spot, I make a note of it and revisit it later. This is one of the most helpful tricks I discovered along the way.


Two phases of a project build one great specialist

You can imagine that in 14 years there were quite a few customers I engaged with at Levi9. However, in the past 6 years, the main project I’ve dedicated my energy to is a big e-commerce fashion website from the Netherlands. Why this project? I have so much space to learn and experiment there. It feels that I can and I do make a difference.

At first, our goal was to migrate their old monolithic application into a modern, slick, and adaptive cloud microservice platform.

This phase of the project was all about experimentation, failing fast, learning, and adapting on the spot. This period was one of the most rewarding, during which I got exposed to many concepts, programming languages, and live website situations. Each of them was a learning event in itself.

In the last 2 years, we entered the growth and expansion phase of the project. Our platform got stable. The lockdown and the new world created by the pandemic times put a lot of stress on e-commerce websites, including ours.

And here is the jackpot! All the hard work from the past years paid off, and we were able to sustain the huge traffic. If the previous phase was all about experimenting and learning, this one was all about being faster and stable at the same time. Everything we build now reaches millions of users, so it must be agile and robust.


Rewriting the rules to deliver a fully-functional feature

I have had my fair share of technical challenges, such as the ones described above, which required the knowledge and efforts of every member of our team. Another one I can recall was pretty recent, and I had fun designing it.

Our product managers asked us to transform a PDF into a flipbook catalog. It seems easy, but the challenge came from the short period of time we had to deliver it, without involving specialized developers or spending too much money on customization, since we weren’t sure about its business outcome.

So, the main idea was to reuse, integrate, and aggregate the already existing infrastructure into the project so that we can keep the costs low and respect the tight deadline.

We broke the problem into exact steps:

Transforming the PDF into a website.

Hosting it without worrying about traffic, availability, or monitoring.

Building a CI/CD pipeline to bring up new versions.

Once we had all those steps defined, we started to look for ways to achieve them based on what we already had. For transforming the PDF into a website, the bespoke option was out of the question since the time to market would have been a deal-breaker.

So why not find a partner that can do it for us? If you Google it, you can see that there are plenty of options. Based on our needs, we picked one provider, and the first checkmark was ticked. We got ourselves a nice HTML5 website based on a PDF and a credit card.


Going rogue, going serverless

Our internal services are mainly running in Docker, in a Mesos/Marathon platform setup. Although the setup is stable and scalable for most daily needs, it wasn’t the best option for this task. We would still have had to provide the right monitoring, alerting, and scalability.

We looked further and agreed that we needed to go serverless. Our final pick was serving the static website from Cloudflare Workers, mainly because the Cloudflare setup was easier to adjust for receiving live traffic on a specific custom URL.

” With minimum development effort, without impacting the regular business development funnel, we delivered a fully-functional feature.”

The only problem we still had to fix was the CI/CD pipeline. And here we had our first doubt that we would make it in time.

Since the current CI/CD pipeline was all about Jenkins, Docker images, and custom Python scripts, adjusting it would have caused big trouble for our development budget and time to market. We started looking at a different way of doing it, and our salvation came in the form of GitHub Actions and the Cloudflare Wrangler CLI tool. So, with just a commit we were able to trigger an entire deployment.

With minimum development effort, without impacting the regular business development funnel, we delivered a fully-functional feature. One that the customer can experiment with, and decide what are the next steps.


Wrapping up the takeaway, not my journey

Technical challenges like these have kept my developer’s spirit alive over the years. They have shaped my career path in a way I couldn’t have planned better myself.

I’m well aware that my knowledge and decisions impact businesses worldwide. It can be pretty frightening sometimes. But when I see the consequences of my actions in real-time, resulting in the success of our customers, it gives me the courage to keep exploring and experimenting.



When to choose Cloud Loyal or Cloud Agnostic

Get our tips for getting the most out of your cloud strategy

Do you have a good grip on the benefits and challenges of locking into one cloud service provider versus shifting between cloud platforms?

It’s a tough decision and one that’s worth getting right from the very beginning. That’s why our Levi9 cloud experts have put together a guide to walk you through the pros and cons of being cloud loyal and cloud agnostic. Plus, we’ve laid out your first steps for building an optimized cloud strategy. 

Here’s what you’ll find in this cloud guide:

• Guidance for navigating today’s cloud solutions
• Insights into the advantages and disadvantages of being cloud loyal or cloud agnostic
• Clear and concise definitions for the options that you’ll come across as you explore the latest developments in cloud technology
• Top three considerations for strategic cloud decision-making
• Why most enterprises are adopting a multi-cloud strategy
• Levi9 expert advice for leveraging your capabilities


Get your cloud strategy right

The cloud is booming. A 2020 forecast from the International Data Corporation (IDC) shows total worldwide spending on cloud services will surpass USD 1.0 trillion in 2024. Disruption and uncertainty have accelerated digital transformation in recent years. For many companies, that can mean migrating to the cloud or starting to build cloud-native applications.

But, this explosion also comes with a lot of hype. Levi9 experts are de-mystifying the cloud and cutting through the noise. Our guide includes a clear explanation of what it means to be cloud agnostic and cloud loyal. You’ll also gain practical advice to help you make cloud decisions that will truly benefit your organization. 

About the author

Software Architect, Cloud expert, DevOps advocate. Stevan is experienced in building and running high performance and distributed solutions. He has over 10 years of experience in software design and development – Java, Go, Microservices, Cloud solutions – mostly in the digital marketing domain. Currently, he’s working on two Edge computing solutions with special requirements regarding security and distribution of applications.


Why is working at Levi9 a big deal?

Work life and home life have blurred how we see our workplace. Office space is now sometimes with the cat in the attic. Yet we still managed to hold onto the World Class Workplace title in 2021. How? Because maybe it’s not about the work or the place. Surely, it’s not about being classy, since many of us worked from home in sweatpants.

Recently, the independent researcher Effectory awarded Levi9 a 7.9 rating for employee satisfaction. That’s above the global average, and we’re extra proud of the 9. But what does that score mean? We asked our own people why it’s a big deal to work at Levi9. The answers were anonymous, sometimes surprising, often awesome and certainly refreshing.

9 reasons why it’s so great to work at Levi9

1. CONNECTIONS: “It doesn’t matter whether I work at the office or at home, I feel a connection with my team and Levi9 in particular. I adore my teammates and the atmosphere of the working environment.”

2. BUILDING SKILLS: “I greatly appreciate the entrepreneurial culture at Levi9, opportunities to experiment and explore new ideas. I feel that I have a lot of opportunities to apply my skills and develop new ones at Levi9.’’

3. EXPERIMENTAL SPACE: “I have the freedom to experiment with ideas that I think will bring value to the customer or the company.”

4. MINDSET: “I’m proud of the constant improvements and the growth mindset, opportunities for personal development, job dynamics, high quality of work.’’

5. EXPERTISE: ‘’Honest, supportive, great experts who push development forward.’’

6. KINDNESS & DIVERSITY: “I am proud of how many kind, intelligent, versatile and yet so different people work in our company.’’

7. KNOWLEDGE SHARING: ‘’It makes me proud that my colleagues are willing to share knowledge, to support juniors, to support others when needed and to have open communication within the team.’’

8. MAKING A DIFFERENCE: “I can make a difference at Levi9 – both the company and the customers are open to my feedback and appreciate my ideas.”

9. GROWTH & IMPROVEMENT: “We are working with people that always aim to improve themselves.” “It’s the best learning ground on the planet!”

If any of these reasons weren’t awe-inspiring enough, find 999 other awesome reasons why working at Levi9 has the WOW factor: https://www.levi9.com/working-at-levi-9/