
Step-by-step: how to access bank data with an API

Open banking is a financial revolution at our fingertips. It gives customers unprecedented freedom and ownership of their bank information. At a macro level, it allows for a standardized interface between various banks and fintech providers. And for us developers, it’s a playground for innovation.

Before diving into this quite technical guide for accessing bank data via an API provider, you might want to read our previous article on the history of Open Banking and some use cases.

What follows is a step-by-step guide on how to access bank information using an API. You might want to use your own account, or you might need to do this for testing or development purposes. This demo uses my own bank account. Feel free to use yours when you follow the instructions.

Accessing information about somebody’s account and transactions is more complicated than simply calling an API. Even though banks do have APIs, banking data is strictly regulated, so third parties that wish to call the API directly need to be registered and certified with a national authority. Registering with the bank’s API and buying a certificate costs around 1000 euros every two years. It might not be worth it if you just want to experiment and play around to see what’s possible. However, there are some limited options available.

Here are the options I considered:

Tink is a robust open banking platform, but it did not offer access to Romanian banks, which was needed for the purpose of this demo;

Smart Fintech is a Romanian third-party bank data integrator, but the free trial covered only the Payments solution and not the Accounts information solution I was interested in;

Nordigen, a company from Latvia, has a generous free tier and offers access to all relevant information about the account via an API.

For the purpose of this demonstration, I will use Nordigen.


Step 1: Get access token from Nordigen

Create an account with Nordigen, which is free to use for limited purposes, and find the API documentation on the Nordigen site to be able to follow along with the demonstration.

Get your user account secret keys and import them into a Postman collection (Nordigen provides a guide for this). You can use the secret keys to generate a token, as demonstrated below.

Copy the access token from the API response and add it to the environment variables.

The access token is valid for a few hours and can be refreshed if necessary. With it, you can call the other Nordigen endpoints to access your bank information.
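If you prefer the command line to Postman, the token call can be sketched with curl. The host and paths below follow the Nordigen API docs at the time of writing (Nordigen has since been acquired by GoCardless, so they may have changed), and the credentials are placeholders:

```shell
API="https://ob.nordigen.com/api/v2"

# Exchange your secret id/key (from the Nordigen dashboard) for an access token.
curl -s -X POST "$API/token/new/" \
  -H "Content-Type: application/json" \
  -d '{"secret_id": "YOUR_SECRET_ID", "secret_key": "YOUR_SECRET_KEY"}'
# The JSON response contains an "access" token and a "refresh" token.
```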


Step 2: Get a list of banks in your country

Before you can access banking data, you first have to call an API endpoint that returns a list of banks in your country, and then choose which bank you want to connect to.

Aside from a list of bank entities, the API response includes information on how many days back you can access transaction records. For example, ING, which I will continue to use in this guide, offers transactions for two years. Some banks might decide to offer less historical data. You’ll also see a logo for the bank, so you can easily identify it.
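Sketched with curl (endpoint per the Nordigen docs; the token is a placeholder from Step 1):

```shell
API="https://ob.nordigen.com/api/v2"
TOKEN="YOUR_ACCESS_TOKEN"  # from Step 1

# List supported institutions in a given country (ISO 3166 code).
curl -s "$API/institutions/?country=ro" \
  -H "Authorization: Bearer $TOKEN"
# Each entry includes an id, a name, transaction_total_days, and a logo URL.
```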


Step 3: Create an end-user agreement

Using the bank ID from the previous response, you can create an end-user agreement. In Postman, make sure to list the scopes for which you want permission; in this example, account balances, account details, and transactions are all used.

This allows Nordigen (or any other third party you use) to access your banking information; by default, access is granted for three months and includes historical data. You will see this agreement both on the bank’s site and within your implementation. If you wish to grant access for longer than three months, you must explicitly change the values.
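A curl sketch of the agreement call; the institution id shown is an assumption (use the one from the Step 2 response), and the numbers illustrate extending access beyond the defaults:

```shell
API="https://ob.nordigen.com/api/v2"
TOKEN="YOUR_ACCESS_TOKEN"

# Create an end-user agreement with explicit scopes and durations.
curl -s -X POST "$API/agreements/enduser/" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "institution_id": "ING_INGBROBU",
        "max_historical_days": 730,
        "access_valid_for_days": 90,
        "access_scope": ["balances", "details", "transactions"]
      }'
# Note the "id" in the response; the next step refers to it.
```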

You will need the generated ID value for the next request.


Step 4: Build a redirect link

The next step is to build a redirect link. Normally, this link contains unique identifiers, such as the chosen bank and a way to identify the client. When the authentication process is done, the person is sent to the link you put in the redirect field. If this were an app rather than just you experimenting in Postman, this would work like your “landing screen,” and once the customer returned to your app, the app would have access to the customer’s accounts.

For the purpose of the guide, go ahead and just type google.com.
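In the API, building the redirect link means creating what Nordigen calls a requisition. A curl sketch, with placeholder IDs:

```shell
API="https://ob.nordigen.com/api/v2"
TOKEN="YOUR_ACCESS_TOKEN"

curl -s -X POST "$API/requisitions/" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "redirect": "https://google.com",
        "institution_id": "ING_INGBROBU",
        "agreement": "AGREEMENT_ID_FROM_STEP_3",
        "reference": "my-demo-reference"
      }'
# The response contains the requisition "id" and a "link" to the bank consent page.
```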

Step 5: Sign the agreement

The API response from the previous call will contain a link. When you click it, it will take you to an agreement page. You will be asked whether you agree to share the various types of bank data information you mentioned in the Postman scope variables, and you will clearly see how far back in time the API can grab information.

If you use your own account while following this demo, you will be taken through your usual bank authentication process. If you have multiple accounts, you can allow access to all, one, or several of them.

Following the agreement, you’ll be taken back to the redirect link you defined in Step 4, which in the case of this demo is simply Google. In a real-life scenario, a customer would normally be taken to the landing page of the app.


Step 6: Check whether the account was linked successfully

Just in case something went wrong in the previous step, you can check whether the service was successfully linked to the account through the Nordigen API. Go back to Postman and copy the requisition ID from the previous call. You can use it to call the requisition endpoint and check the status.

If the operation is successful, the status will display LN, which stands for “linked.”
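The status check, sketched with curl (the requisition id is a placeholder taken from the Step 4 response):

```shell
API="https://ob.nordigen.com/api/v2"
TOKEN="YOUR_ACCESS_TOKEN"

curl -s "$API/requisitions/YOUR_REQUISITION_ID/" \
  -H "Authorization: Bearer $TOKEN"
# Look for "status": "LN"; the "accounts" array lists the linked account IDs.
```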


Step 7: Get information about the accounts

The requisition response from Step 6 will also contain the ID of the account to which the API was granted access. Using it, you can get more details by calling the endpoint that exposes that particular account.

As a response to this call, you will receive more information about the account, such as the IBAN number, when it was first created, and when it was last accessed.

Nordigen also has an endpoint for account details. This endpoint gives access to information about the account’s currency, the owner’s name, and the type of account.

You can also see the balance of the account using a dedicated endpoint.

Finally, you can use the Nordigen API to access the transaction history for that particular account. You can set the date parameters so that the response only shows transactions that happened within a certain time frame.
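The four account endpoints used in this step can be sketched with curl as follows (the account id is a placeholder taken from the requisition response):

```shell
API="https://ob.nordigen.com/api/v2"
TOKEN="YOUR_ACCESS_TOKEN"
ACCOUNT="YOUR_ACCOUNT_ID"

# Account metadata: IBAN, when it was created, when it was last accessed
curl -s "$API/accounts/$ACCOUNT/" -H "Authorization: Bearer $TOKEN"

# Account details: currency, owner name, account type
curl -s "$API/accounts/$ACCOUNT/details/" -H "Authorization: Bearer $TOKEN"

# Balances
curl -s "$API/accounts/$ACCOUNT/balances/" -H "Authorization: Bearer $TOKEN"

# Transactions, restricted to a time frame
curl -s "$API/accounts/$ACCOUNT/transactions/?date_from=2023-01-01&date_to=2023-01-31" \
  -H "Authorization: Bearer $TOKEN"
```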

Step 8: [Optional] Rescind agreement

End users have control over their bank data, which means they can change their minds at any time. This is why Nordigen also provides a way to remove access to the accounts. There is an API endpoint that exposes all requisitions; by calling it, you can find the ID of the requisition you want to delete, and then use a dedicated endpoint to delete that requisition by ID.
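Sketched with curl, rescinding access is a list call followed by a delete (the requisition id is a placeholder):

```shell
API="https://ob.nordigen.com/api/v2"
TOKEN="YOUR_ACCESS_TOKEN"

# Find the requisition you want to revoke
curl -s "$API/requisitions/" -H "Authorization: Bearer $TOKEN"

# Delete it by id; this removes the API's access to the linked accounts
curl -s -X DELETE "$API/requisitions/YOUR_REQUISITION_ID/" \
  -H "Authorization: Bearer $TOKEN"
```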

After you cancel the agreement, the API won’t be able to access your bank information anymore.

If you are curious about which banks offer open access, you can use this tracker. You’ll be able to see whether the bank has set up a developer portal and what authority you need to register with to have access to that portal as a developer. As an alternative, you can use the solution from this demo and choose an API aggregator instead. You’ll be able to see a list of integrations for each bank, so you can easily choose a service that works with the ones you are interested in.



Deploying a Spring Boot API to AWS with Serverless and Lambda SnapStart


In my last two blog posts, I wrote about NestJS Mono-Lambda and how we dealt with cold starts. Cold starts are not that big of an issue when running Node.js Lambdas, but with Java the numbers are unacceptable if you’re building synchronous, user-facing APIs. At re:Invent 2022, AWS announced SnapStart, a feature for Lambda functions using the Java runtime. I have always liked the simplicity of deployment to the cloud that the Serverless Framework gives, combined with the developer experience of frameworks like NestJS and Spring Boot. After years of developing TypeScript applications on AWS Lambda, I wanted to go back to my first love, the Spring Framework, to build an API and test SnapStart in the process.

This article will cover how to create an API that returns the closest Starbucks locations near the given point and radius. The API will be backed by MongoDB to utilize geospatial queries, with a Spring Boot app wrapped into a Mono-Lambda on top, deployed seamlessly to AWS with Serverless Framework. When it’s all up in the cloud and working, we will take a closer look at the performance and try to optimize it further by utilizing AWS Lambda SnapStart.

Pre-requisites: an AWS account, the Serverless Framework CLI, Java 11 with Maven, and a free MongoDB Atlas account.

Let’s get to it.


1. Set up MongoDB Atlas

I signed up for a free MongoDB Atlas account and launched a free cluster in the AWS Frankfurt region. To add test data to the database, I found a collection of US Starbucks locations and forked it to use GeoJSON objects for the position. This was needed because I want to index the items in order to be able to perform the efficient geospatial queries that Mongo offers. Next, I created a 2dsphere index on the position field. To test whether the index works, we can execute the following query on MongoDB:
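A sketch of such a query in the MongoDB shell (the collection name is an assumption): it returns documents whose indexed position lies within 1,000 metres of a point in Central Park.

```
db.starbucks.find({
  position: {
    $near: {
      $geometry: { type: "Point", coordinates: [-73.9655834, 40.7825547] },
      $maxDistance: 1000
    }
  }
})
```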

It should return some results near Central Park. To read more about geospatial queries in MongoDB, feel free to refer to the docs. Now that our data is ready, let’s proceed with creating the Spring Boot application that will query Starbucks locations.

Note: for the purpose of this exercise, make sure that in the MongoDB Atlas Security section, the Network Access IP Access List contains the 0.0.0.0/0 rule. That makes the database publicly accessible. None of this is a good idea in production, including the use of this free, shared-tier Mongo cluster. A more secure way is to use VPC peering; however, that is beyond the scope of this article, and it is also not included in the shared-tier MongoDB Atlas cluster. 💰


2. Spring Boot API

To quickly get up and running with development, let’s use Spring Initializr. After selecting Java 11, Maven, and Spring Boot 2.7.8 as project properties, and picking the Spring Web and Spring Data MongoDB dependencies, we get a project that is ready for further development. Now, let’s create the usual three-layered Repository, Service, Controller setup. We’ll start by creating a StarbucksDocument that represents one MongoDB document:
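A minimal sketch of such a document class, assuming field names matching the dataset (id, name, address, position) and a collection called starbucks:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.geo.GeoJsonPoint;
import org.springframework.data.mongodb.core.index.GeoSpatialIndexType;
import org.springframework.data.mongodb.core.index.GeoSpatialIndexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document("starbucks")
public class StarbucksDocument {

    @Id
    private String id;

    private String name;
    private String address;

    // Backed by the 2dsphere index created earlier
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    private GeoJsonPoint position;

    // getters and setters omitted for brevity
}
```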

Leveraging the Spring Data MongoDB module, we can create a repository by simply extending the MongoRepository interface. We get the standard CRUD operations out of the box, so we only need to add a method for querying by position. This way, we can fetch stores that are within a given distance of the passed-in position, page by page.

import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.geo.Distance;
import org.springframework.data.mongodb.core.geo.GeoJsonPoint;
import org.springframework.data.mongodb.repository.MongoRepository;

public interface StarbucksRepository extends MongoRepository<StarbucksDocument, String> {

    // Derived query method: Spring Data generates a $near query against the
    // 2dsphere-indexed position field, returning results page by page.
    Page<StarbucksDocument> findByPositionNear(GeoJsonPoint position, Distance distance, Pageable pageable);
}

The repository layer is set up, and we will create a StarbucksService and a StarbucksController so that our API looks like this:

  • GET /starbucks?lng={lng}&lat={lat}&distance={distance}

  • GET /starbucks/{id}

If you build and test this, it won’t work, of course, because the database connection configuration is missing from the application.properties file:

spring.data.mongodb.uri=mongodb+srv://username:password@clusterxxx.xyz.mongodb.net/dbname?retryWrites=true&w=majority

Make sure to swap the values with your own, obtained from MongoDB Atlas.

3. 🚀 Run it, test it

curl 'http://localhost:8080/starbucks?lng=-73.9655834&lat=40.7825547&distance=1' | json_pp

{
  "data": [
    {
      "address": "86th & Columbus_540 Columbus Avenue_New York, New York 10024_(212) 496-4139",
      "id": "63d52640712420c4e81c9a20",
      "latitude": 40.78646447,
      "longitude": -73.97215027,
      "name": "Starbucks – NY – New York [W] 06186"
    },
    {
      "address": "81st & Columbus_444 Columbus Avenue_New York, New York 10024",
      "id": "63d52640712420c4e81c9a26",
      "latitude": 40.78335323,
      "longitude": -73.97441845,
      "name": "Starbucks – NY – New York [W] 06192"
    },
    {
      "address": "87th & Lexington_120 EAST 87TH ST_New York, New York 10128",
      "id": "63d52640712420c4e81c9a29",
      "latitude": 40.78052553,
      "longitude": -73.95603158,
      "name": "Starbucks – NY – New York [W] 06195"
    },
    {
      "address": "Lexington & 85th_1261 Lexington Avenue_New York, New York 10026",
      "id": "63d52640712420c4e81c9a41",
      "latitude": 40.778801,
      "longitude": -73.956099,
      "name": "Starbucks – NY – New York [W] 06219"
    }
  ],
  "total": 4
}

Apparently we have 4 coffee shops within 1 kilometre of Central Park. Nice.


4. How to deploy it to AWS?

A few more steps are needed for deploying our API to AWS with the Serverless Framework.

Maven dependency

Add the aws-serverless-java-container-springboot2 dependency to pom.xml:
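A sketch of the dependency block; the version is an assumption, so check Maven Central for the one matching your Spring Boot version:

```xml
<dependency>
    <groupId>com.amazonaws.serverless</groupId>
    <artifactId>aws-serverless-java-container-springboot2</artifactId>
    <version>1.9</version>
</dependency>
```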

Maven profiles

For convenience, create two Maven profiles, local and shaded-jar. This makes it easier to separate running the app locally from packaging the jar for AWS Lambda. The shaded-jar profile contains the spring-boot-starter-web dependency, but without spring-boot-starter-tomcat. To package the app for AWS Lambda, the maven-shade-plugin is used to bundle all the dependencies into one big jar.

<profiles>
  <profile>
    <id>local</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
      </dependency>
    </dependencies>
    <build>
      <plugins>
        <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
      </plugins>
    </build>
  </profile>
  <profile>
    <id>shaded-jar</id>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
          <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
    </dependencies>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-shade-plugin</artifactId>
          <version>3.2.4</version>
          <configuration>
            <createDependencyReducedPom>false</createDependencyReducedPom>
          </configuration>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>shade</goal>
              </goals>
              <configuration>
                <artifactSet>
                  <excludes>
                    <exclude>org.apache.tomcat.embed:*</exclude>
                  </excludes>
                </artifactSet>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Lambda Handler

Implement RequestStreamHandler to run the app in Lambda. We leverage the async initialization described in the aws-serverless-java-container docs.

public class StreamLambdaHandler implements RequestStreamHandler {
    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = new SpringBootProxyHandlerBuilder<AwsProxyRequest>()
                    .defaultProxy()
                    .asyncInit()
                    .springBootApplication(StarbucksApiApplication.class)
                    .buildAndInitialize();
        } catch (ContainerInitializationException e) {
            // if we fail here, we re-throw the exception to force another cold start
            e.printStackTrace();
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
    }
}

Serverless Framework

And the final touch: configure the Serverless Framework deployment by creating a serverless.yml file in the project root:
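A minimal sketch of what such a serverless.yml could look like; the service name, handler class, artifact path, region, and SSM parameter name are all assumptions:

```yaml
service: starbucks-api

provider:
  name: aws
  runtime: java11
  region: eu-central-1
  memorySize: 2048
  timeout: 30
  environment:
    # database password read from AWS Systems Manager Parameter Store
    DBPW: ${ssm:/starbucks-api/db-password}

package:
  artifact: target/starbucks-api-0.0.1-SNAPSHOT.jar

functions:
  api:
    handler: com.example.starbucks.StreamLambdaHandler
    events:
      - http:
          path: /{proxy+}
          method: any
```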


Notice the DBPW environment variable? I used AWS Systems Manager Parameter Store to securely store the database password. You should also set the DBPW environment variable locally when building the project if you have any tests that load the Spring context. Feel free to experiment with the memorySize and timeout values.

Finally, build it and deploy it with:
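Assuming the shaded-jar profile defined earlier, that amounts to something like:

```shell
mvn clean package -P shaded-jar && serverless deploy
```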

Maven creates the shaded jar and then the Serverless Framework deploys it to AWS Lambda.

The first invocation takes a whopping 11 seconds, even though I gave the Lambda 2 GB of memory. The cold start seems pretty bad; I haven’t been able to get a response in less than 10 seconds. Warm invocations run in under 100 ms on average.

5. SnapStart to the rescue

Let’s try to optimize this immediately by enabling SnapStart. This is done in serverless.yml by supplying the snapStart: true option to the api function and running serverless deploy.
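In serverless.yml that looks like the following sketch (the handler name is an assumption):

```yaml
functions:
  api:
    handler: com.example.starbucks.StreamLambdaHandler
    snapStart: true
```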

What we immediately notice is that the deployment takes about 2 minutes longer. That is because Lambda creates a snapshot of the function and saves it for future invocations. If we test it again, the improvement is huge: cold start time is between 1 and 1.5 seconds. Note that this is observed only at the function level; end-to-end latency is still above 2 seconds for a cold-start invocation. For some reason, API Gateway adds more latency than usual.

A closer look at duration and pricing

To dive a bit deeper, let’s take a look at the CloudWatch logs for one particular invocation:

REPORT RequestId: c88a0706-70fa-455b-be3a-1138f981b7e7 Duration: 1124.43 ms Billed Duration: 1423 ms Memory Size: 2048 MB Max Memory Used: 206 MB Restore Duration: 392.10 ms Billed Restore Duration: 298 ms

Restore Duration includes the time it takes for Lambda to restore a snapshot, load the runtime (JVM) and run any afterRestore hooks. You are not billed for the time it takes to restore the snapshot. However, the time it takes for the runtime (JVM) to load, execute any afterRestore hook, and signal readiness to serve invocations is counted towards the billed Lambda function duration.

Source: Starting up faster with AWS Lambda SnapStart

There are a few important numbers here, and the first one is obviously Duration. That is how long it took to execute the function. Billed Duration is also familiar, but as you can see, it is slightly higher than the actual duration. How come? To answer that, we have to look at the two newly added fields: Restore Duration and Billed Restore Duration. The first one is how long it took Lambda to restore the snapshot, and the second one is how much of that time we are charged for. Why are we charged less?

Further optimization — Runtime Hooks and Priming

Turning SnapStart on improved the cold start significantly; we can safely say that for a lot of use cases with real production load, this would be good enough. But ever since it was introduced, there has been talk about priming. Runtime hooks allow you to execute some code just before the snapshot is taken, as well as right after the snapshot is restored by Lambda. I encourage you to read a more detailed explanation on the AWS blog.

In this case, priming comes down to making dummy calls to your code so that it gets compiled and saved with the snapshot, with the goal of speeding up the next cold start even more. I tried a few things and got mixed results.

Database connections

The first thing I did was simply make a request from the beforeCheckpoint hook to one of the endpoints that fetch data from the database. After deploying to AWS and testing, my cold-start invocation just timed out. I quickly realized that the socket timeout on MongoDB connections is set to infinity by default, and because the connection to MongoDB was established during the snapshot process, my function was hanging until the timeout. I changed the socketTimeoutMS setting to something like 500 ms, and that seemed to work, as the application quickly “realized” that it needed to create a fresh connection to the database. However, it seems that this added those 500 ms to the cold start time, defeating the purpose of priming in the first place.
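For reference, runtime hooks are registered through the org.crac API. The sketch below shows the shape of the approach described above; the class and the primingCall() helper are hypothetical:

```java
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

public class PrimingHook implements Resource {

    public PrimingHook() {
        // register this hook so Lambda invokes it around the snapshot
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // runs just before the snapshot is taken: exercise the hot code path
        // so it is loaded/compiled and captured in the snapshot
        primingCall();
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // runs right after the snapshot is restored, before real traffic
    }

    private void primingCall() {
        // placeholder: e.g. invoke an endpoint in-process
    }
}
```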

Regardless, default config values probably shouldn’t be used in most production cases anyway.

Priming with dummy calls instead

Not wanting to mess with database connections, I decided to handle a dummy request and return a response before the code tries to fetch anything from Mongo. I didn’t love this approach, as it requires special-casing parts of the application code. However, after the first tests, it did seem to reduce the cold start by a few hundred milliseconds.

Let’s take a quick look at the results of testing the same API with and without priming. The load was 1,000 requests with a concurrency of 10.

Percentage of the requests served within a certain time (ms) — no priming:

Percentage of the requests served within a certain time (ms) — with priming:

6. Conclusions

This blog post showed how easy it can be to build a MongoDB-backed Spring Boot Mono-Lambda API, with the benefit of effortless deployment to the cloud with the Serverless Framework. While the example is far from production grade, I think it’s still an interesting experiment. There’s a lot of room to explore this setup further, for example:

  • try MongoDB Atlas Serverless

7. Useful links

On top of all the links throughout the post, I highly recommend checking out the following as well. These offer additional perspectives and dive deeper into the topic:

Thanks for reading and feel free to reach out to discuss any of the related topics.


The Open Banking Revolution


In 2007, Valentin Dominte was in high school, and he certainly did not follow the news about how bureaucrats in the European Union were voting. Unbeknownst to him, a quiet financial revolution started in Brussels that year, one that would later be significant for his career as a software developer, for fintechs all over the continent, and for every EU citizen’s money: open banking.

In simple terms, open banking is a way for people to take back control of their financial information. Like Valentin, you might have two or three bank accounts, each with its own app and its particularities for making transfers, checking the balance, or granting a loan. If you wanted to have an accurate overview of your finances, you’d need to log in to each of those bank apps, extract the information and do all the calculations yourself. Open banking breaks down the walls between all of these different apps, making it possible for apps to pull information from the accounts you choose and give you real-time information about your finances that is gathered from all of them.

Technology is at the heart of open banking. At Levi9, Valentin Dominte is one of our most experienced developers in this field, having worked in open banking since 2018, and we’ve asked him to give us his insights into this topic.

“The official definition of open banking is the process of enabling third-party payment service and financial service providers to access consumer banking information such as transactions and payment history through APIs”, says Valentin. Some key expressions he highlights are “third-party payment,” “consumer banking information,” and “APIs”.

“A third party is a service that aggregates that data,” explains Valentin. “It can be an application from one of your banks, but it can also be completely independent, and you can have different third-party providers for different use cases.” The main benefit for the consumer is that they can get information in a way that is easier to use.

Some of the consumer banking information that can be accessed through open banking includes the account holder’s name, the account type (current, savings, etc.), and information about transactions (amounts, merchants, etc.).

APIs are at the heart of open banking, serving as a bridge between multiple financial services. Through APIs, different systems can talk to each other in a standardized way, meaning that developers can use them to build new features or services on top of existing systems. One important feature of APIs, especially in open banking, is that information is shared in a standardized and secure manner.


2. The EU regulations on open banking

Perhaps Valentin did not pay attention to EU open banking regulations in high school, but looking back, he says that the concept of open banking in Europe is tightly linked with those regulations, known as the Payment Services Directives (PSD).

The first PSD was released in 2007, with the EU Commission seeking to stimulate competition in the financial industry, enhance the quality of services provided, and protect the end user.

A second version of the PSD was released in 2015, introducing the concept of consumer protection against banks or third-party providers. “The focus now was on the end-user experience and privacy.” Two main concepts were enforced by this PSD2 directive. The first is strong customer authentication: “Basically, that means as a bank you shouldn’t allow people to connect to your API without multi-factor authentication, let’s say. And the end user should have the same way of authenticating directly to the bank or through a third party. There should be no difference.” The second is that third parties should connect to banks in a standardized manner. Third parties are also obliged to register with an authority, adding another level of security.

Valentin says he is now keeping an eye on discussions related to a third directive. While following EU legislation might not be typical everyday work for a developer, Valentin builds a strong case for remaining one step ahead and analyzing the impact of legislation on technology.

3. How screen scraping became obsolete

To prove this point, Valentin reminisces about one of his first projects in open banking. Before APIs became the norm and before strict European regulations, developers were still looking for ways to let users access their financial data in a more friendly manner. “Because developers are creative and can find workarounds, there is an alternative to APIs: screen scraping”.

Screen scraping imitates what a person does on a portal, automatically doing everything a person can do by hand. “It meant impersonating the client in the bank portal to extract data or perform actions.” Screen scraping solves the issue of missing APIs, but it introduces several other problems.

“With screen scraping, the third-party provider controls how the consumer’s credentials are stored and secured,” warns Valentin. Moreover, clients don’t get to choose what information they share; they have to give the third-party provider full access. On top of that, screen scraping cannot get around multi-factor authentication, and it could trigger a violation of terms and conditions. Developers avoid screen scrapers not just because of security concerns but also because “this kind of integration is quite fragile.” What if the UI of the internet banking system changes for some banks? The third party has to adjust to those changes each time.

Coupled with EU rules, the technical setbacks were the main reason that screen scraping became an obsolete practice.


4. How open banking breeds innovation

Open banking is a breeding ground for new ideas, and it encourages innovation by chipping away at large bank monopolies. “Third parties can provide a better user experience and steal the show, which should result in lower costs and, hopefully, a better experience for the end user,” says Valentin Dominte.

Saving time for customers

One way open banking makes a difference is by making it easier for customers to get loans. “For one of Levi9’s customers, we developed a system that saved the bank and its clients a significant amount of time. When applying for credit, clients had two options: one was to manually upload proof of their financial situation, such as salary slips, bank statements, rent agreements, or mortgage contracts. The second one was to log into the bank account and choose which transactions represent income or housing costs.”

One immediate result was an improved customer experience. “The customers didn’t need to look for salary slips or dig around for their mortgage contract.” At first, about 40% of customers were unsure about sharing their information automatically with the bank. However, over the course of three years, the number of customers using the faster way to log in to the bank increased by a factor of ten.

Instant credit limit

In a second open banking Levi9 project, Valentin and his team replaced cumbersome manual steps and questionnaire filling with instant credit limit calculation. “We had the old system and the new, automated system run side by side. When clients applied for credit, they were randomly assigned to one of the two systems. Some were going the old road of filling out a questionnaire, providing proof of income and expenses, and getting their answers manually assessed by a bank employee. But other customers had a much more straightforward experience, thanks to the Levi9 project: they simply logged into their bank account, their transactions were automatically analyzed, and they were able to receive their credit limit on the spot.”

With standardized communication between services through APIs and clear regulations, open banking is the perfect playground for technological innovation.



‘A partnership in anticipation of rapid growth’

Vision and ambition are becoming a reality

Future Insight envisages a great future for the digitalisation of processes and activities within the public domain. Their platform helps people to make the right choices in projects involving the living environment, such as infrastructure, development and the repurposing of sites and buildings. In partnership with Levi9, vision and ambition are becoming a reality. ‘The partnership enables us to think big and also put our ideas into action,’ says CTO and co-founder Rick Klooster.

The company provides three separately developed software solutions designed to optimise collaboration and shape successful projects: Clearly.Projects for construction projects; Clearly.BIM for optimum viewing and interrogation of BIM models; and Clearly.3D-City for accessing 3D-city models (Digital Twins). All three involve software that brings together and makes available data from different sources.

‘This means that the parties involved in construction projects, works on physical infrastructure, or the life-cycle of a building can access the right up-to-date data and tools to enable smart collaboration,’ explains Klooster. ‘We have built our reputation on this. In 2022, Clearly.BIM secured us the Building Smart Award in Estonia, based on the implementation of a BIM-based planning permission process.’

Open Urban Platform

With the assistance of Levi9 Technology Services, these solutions are being further developed and increasingly integrated. ‘Together, we’re building a growing ecosystem of all kinds of services,’ says the CTO. ‘This Open Urban Platform enables us to easily access specific solutions from third parties. Some of these relate to AI. As a result, we no longer need to develop everything ourselves, and other parties can serve part of their market using our platform.’

The Open Urban Platform is part of the Future Insights Clearly.Suite. It represents the combined vision and ambitions of CTO Rick Klooster and CCO Bas Hoorn. ‘It’s a dream that’s becoming a reality,’ says Hoorn, originally an engineer. For his part, Klooster has a lot of experience working with government. Future Insight has now become a genuine knowledge company with highly qualified specialists.

Expansion

In anticipation of its rapid growth, Future Insight decided to expand its own development team with external knowledge and experience in 2021. Klooster: ‘The key aim is to achieve a stable and rapidly scalable platform. In view of the intended growth, we also aim to organise ourselves and our teams more effectively and professionally. That’s why we made the decision to have a partner develop our technology.’ Following a thorough search, Klooster and Hoorn came into contact with Levi9.

‘They offered the competencies and quality we were looking for in those key areas. The partnership not only enables us to think big, but also to put those ideas into practice,’ says the CTO.

It has been a genuine partnership from the outset. ‘We have a similar culture and way of working,’ explains Albert Klingenberg, account manager at Levi9. ‘Future Insight are completely open with us about their vision and strategy. That’s reciprocated. This openness gives us the opportunity to keep each other on our toes and make real progress. Our people also really love working on the Future Insight platform.’

Important motivator

The founders are convinced that there are genuine global opportunities when it comes to sharing information in urban areas. For example, there are as yet no widely embraced worldwide standards for smart cities, buildings, areas, and infrastructures. With the help of Levi9, Future Insight is determined, and able, to play a crucial role in this development. An important motivator in this is the desire to address this global social challenge.

It is also helped by the fact that government authorities across the world are increasingly open to the cloud. CTO Rick Klooster: ‘That will potentially result in an increasingly wider application of international standards in the future. It will then become increasingly easier even for smaller municipalities to engage and collaborate in an integrated way without any form of vendor lock-in.’

Future Insight has the wind in its sails. The company has doubled its staff numbers in the last six months alone. ‘Things are really moving, and we’re working on all kinds of things simultaneously and clearly proving remarkably successful. The people working for us based in Serbia are actually our development department. That’s where our ideas really take shape.’

Source: itexecutive.nl

Rick Klooster, CTO Future Insight


Want to know how we can help you to accelerate your business? Leave your details below or get in contact with Albert!


AWS re:Invent — Niners share their experience

1. Levi9: a proud AWS partner

Back in 2016, a few enthusiastic people from Levi9 decided to start our journey towards AWS partnership. With no certified people at that moment and practically just a bit of experience with AWS, it was an unexpected journey. Fast-forward to 2022: more than 120 certified people, 2 AWS competences, 1 partnership program, and soon Premier Tier partner status. We have almost achieved it all, but our ambition is to reach even further.

2. AWS re:Invent — keynote speeches recap

3. Learning from AWS partners @AWS re:Invent

AWS gives a lot of attention to its partners, and during the conference we heard many great stories and achievements from many companies. There was even a keynote dedicated to partners, where many of them had the chance to present their solutions. Most impressive was a financial company from Brazil with 10 million accesses per day, 2,500 microservices, and 1 million API calls per minute, which concentrates 10% of all payments in Brazil. They managed to scale from 100,000 customers to more than 20 million in just five years! Their example taught us that re-engineering a complete platform and moving it to an “elastic” cloud environment is a great opportunity for large scale-ups. AWS-certified people were also recognized throughout the conference. We managed to meet many inspiring people, some of them holding all 12 AWS certificates, which is definitely a great achievement.

4. AWS re:Invent takeaway — Levi9 is on the right track

By meeting some of the great attendees as well as AWS employees, we realized that we at Levi9 are doing great things and moving in the right direction. It was an amazing experience to compare ourselves to some of the biggest AWS customers and partners. Even though we aren’t the biggest partner among the giants that were there, nor AWS’s biggest customer, our strategy and our goals are leading us into a bright future. So, who knows: with enthusiastic Levi9 people and great energy, we might become one of those giants in the future. 😊

After all, re:Invent is a great place to be. Alongside all the sessions you can learn from and the opportunities to meet experts from all over the world, it is also a nice place to have a bit of fun.


‘A partnership that brings out the best in everyone’

°neo - the new ‘banking grade’ SaaS platform

For the last decade, five°degrees has been supplying leading financial institutions with a tried-and-trusted core banking product. Last year saw the launch of a completely cloud-native version that will enable customers to continue to meet ever-changing market requirements even in the long term. The partnership with Levi9 proved to be instrumental in the development of the new platform. ‘It was exactly the type of synergy we needed,’ confirm CEO Martijn Hohmann and CTO Jeffrey Severijn.

Instead of modernising the existing solution, the decision was made to redesign the new one from scratch. It is now ready for the market. ‘Unlike the old stack, the new ‘banking grade’ SaaS platform – known as °neo – is component-based,’ explains CEO Martijn Hohmann. ‘Over time, the number of building blocks will gradually increase. It will also become easier to link external services and ecosystems together.’

Five°degrees and Levi9 Technology Services have been working together for seven years, but the interaction has entered a new phase in the last four years. ‘At management and shareholder level, we’ve been considering how to get the most value out of our relationship for some time,’ says the CEO.

‘Levi9 is really determined to create value on the business side for customers, partly because that’s also interesting for their own employees.’ – Hohmann

Added value

‘Delivering value for business is the holy grail for developers,’ says CTO Jeffrey Severijn, who also has ultimate responsibility for the °neo platform. ‘Being able to offer employees a challenging working environment is important for Levi9. As far as our traditionally designed °matrix solution was concerned, our relationship was gradually entering the danger zone. Developers were just losing their enthusiasm for it.’

According to the CTO, there was also another factor at play: ‘Outsourcing partners are facing significant increases in salaries in their international branches caused by the COVID-19 pandemic. That eats into the price advantage. Companies like these now need to distinguish themselves in different ways. For example, through specific market or domain knowledge that we can make use of.’

There is added value for five°degrees if a player like Levi9, with six delivery centres in Eastern Europe, has experience with the Azure cloud, for example within a media company. ‘They can then discuss applying those competencies for other customers, which has advantages for everyone: partner, staff, and customers. It even opens up the potential of wider co-creation in the future.’

It was during a frank discussion with Levi9 about all of this that things started to move forward.

‘We had talks with various different players about modernising our existing core banking product, but we achieved very little. However, Levi9 came from a different direction and suggested adopting a totally different approach using the latest methods and techniques. That brought a fundamental change in our partnership – and one that added value for all of us.’ – Hohmann

Start-up strategy

Around 80 people – half of the total capacity – worked on °neo over a three-year period. The completely new platform features lower variable costs, offers much more flexibility and has an important role to play in the company’s global ambitions. Severijn: ‘Together, we approached it as if it was a start-up: from the initial design to a minimum viable product before going on to develop a technical MVP. All of it agile and focussing on the potential for rapid upscaling. Ultimately, it all turned out well.’

The partnership is based around the shared objectives that the two companies aim to achieve as a team. ‘It really is a joint initiative,’ say the CTO and CEO. ‘Our people sometimes struggled with that. It was no longer a case of “we ask, and they do what we say”, as you would approach it with development teams in India or Vietnam, for example. Everything is based on a relationship of equality.’

According to the management at five°degrees, this has brought out the very best in everyone, starting from a blank piece of paper. Severijn: ‘The contribution made by Levi9 was primarily technical: the Azure cloud, event-driven architecture, microservices, and so on. Our main focus was on the functional aspects. We then came up with a lot of great ideas together.’

Speeding up the process

Examples of this include work relating to Logic Apps for integrating apps, data, services, and systems using automated workflows in Azure. ‘During a demo with the key engineers from Microsoft, it turned out that we were further advanced than they were on some points,’ says Hohmann enthusiastically. ‘It was so good in fact that they even adopted some of our ideas.’

As for the partnership, the CEO is keen to stress how productive and enjoyable it is. ‘If people from Levi9 have ideas about how things can be improved, we’re always keen to listen. In return, we gave a presentation about the future of banking to their development team in Serbia.’

Jeffrey Severijn: ‘During the demos, members of the Levi9 team also made regular contributions. That helps create ownership and mutual understanding. Even when there are challenges, if something takes slightly more time, or if the requirements are unclear. The fact that we already knew each other allowed us to speed up the process that little bit. Ultimately, developing a new relationship always takes time – but we understand each other’s strengths and weaknesses.’

‘All in all, it was exactly the synergy we needed to make a success of the °neo project,’ concludes CEO Martijn Hohmann. ‘I’d even go so far to say that it would have been impossible without it.’

Martijn Hohmann, CEO five°degrees


Want to know how we can help you to accelerate your business? Leave your details below or get in contact with Wesley!


'From concept to American market in record time – by working together'

Incision Assist

Incision has become a worldwide success with its video-based education and training for operation room personnel. Now, alongside its trusted e-learning environment, Incision has launched a new digital assistant, developed in partnership with Levi9. This tool went from idea to market in record time. ‘We started in June 2021, and four months later, it was live,’ says Raimo van der Klein, Chief Product & Technology Officer. ‘And a year later, we were already running pilots in American hospitals.’

The mobile app is a strategically important expansion of the Amsterdam scale-up’s existing e-learning platform. ‘Now we can offer users direct support before and during the surgery,’ says Van der Klein, who stepped into his role as CP&TO two-and-a-half years ago. ‘This new digital product is crucial for the relationship with the end user.’

Incision Academy has been around for a while, offering e-learning accredited by international medical associations to train medical professionals in operation room-related activities. Now, the new Incision Assist tool gives those professionals direct access in the operation room to all relevant information: instructions, manuals, requirements, and the personal preferences of the performing surgeons – and all this for a wide range of procedures.

Co-creation

Incision Assist was developed in record time in a process of co-creation with Levi9: from concept to worldwide market launch in under one year. ‘The challenge was that there was relatively little we could reuse from our existing e-learning environment. And on top of that, with Assist, we’ll become part of the digital infrastructure of the hospital, so part of a strongly regulated IT environment. That requires enterprise-grade software.’

'We can help the people in the operation room with any procedure'

Incision was founded in 2014 by Dr Theo Wiggers and a group of investors. It is now a cutting-edge collective of driven doctors, software developers, marketers, and professionals working to grow the business from its home base in Amsterdam. ‘We believe in sharing surgery-related knowledge and skills and making them available to everyone,’ says Van der Klein.

The newly developed app will be a boon to students, personnel in training, and temporary staff in particular. ‘With Incision Assist, we can help operation room staff with any process and any procedure, in a uniform way. Its advantages are better preparation, reducing risks, improving team functioning and, ultimately, better medical outcomes. In combination with Incision Academy, this gives us a rock-solid proposition.’

Increasing impact

About the co-development process with Levi9, Van der Klein says, ‘They had also been involved in the development of the existing e-learning platform. From the moment that we decided to increase our impact in the digital domain, we worked out the best approach in a very effective dialogue. The goal was to, in a relatively short amount of time, put out a product that was at least viable, and that we could then scale up to the highest quality requirements fast and efficiently.’

Incision and Levi9 as partners went through a hyper-fast learning phase in a working process in which they used feedback from customers and users to add more and more features to the app. ‘At this point, we’re going forward with the expansion and upscaling of the platform based on modular technology.’

'Without Levi9, we'd never have been able to market a product this good, this fast'

Van der Klein calls the joint development process a transformation within a scale-up. ‘And that in an industry that’s super-complex, that’s struggling to fill jobs right now, that’s heavily regulated, and that has extremely high standards for security. Because of these challenging and above all complex dynamics, we knew we needed a technology partner that could give us the right people, quality, and working methods. Without Levi9, we’d never have been able to do this as well or as fast.’

Raimo van der Klein, CPTO Incision


Want to know how we can help you to accelerate your business? Leave your details below or get in contact with Roy!


Make your job harder and 10 other ways to adopt a total ownership mindset


When Codrin Băleanu was a junior software developer, he used to print out his code on paper. He would select a particularly intricate piece of code, send it to the printer, take the papers with him, and read them quietly. He would read until he saw the workflow in front of his eyes, until he could visualize the data flowing as smoothly as rivers.

Now an Engineering Lead at Levi9, Codrin describes himself as simply a person who gets paid to do what he likes. And he credits most of his career advancement to that attitude that made him read code on paper until he understood it completely: Total Ownership Mindset.


Own the project

“Total ownership” is a concept that gets thrown around a lot during agile meetings. It might sound intimidating, as if people were expected to do much more than their fair share and to place work, the customer, or the project above everything else, including their personal life. But Codrin says the concept is completely misunderstood.

“I think of it like a car I just bought”, says Codrin. “It is mine, I take care of it. I treat it with care, I don’t want it to get scratched, I don’t want to smash it into walls.” A car owner might seek to constantly improve his car, buying accessories, equipping it with new gadgets, and finding ways to make it run better. “In the same manner, if I own my work — be it a customer, a product, a task — I take care of it. I want it to work better, faster, and to be more interesting.”

In other words, total ownership does not mean that your work never stops, but rather that you treat it as if it’s your job to make it better.

Here are 10 pieces of advice from Codrin about how to approach and boost your ownership mindset.


1. The customer business is your business

The first rule of the ownership mindset is to understand your customer’s business and how that business makes money, part of which will end up in your pocket. If your customer has an issue, you’ll be able to move mountains and do whatever needs to be done to solve that problem. You might end up solving problems outside your expertise or technology, but that will help you grow. This is the root of total ownership.

Part of owning a project means understanding that you and the customer are fighting for the same goal. “Listen to the customer when he talks about business”, advises Codrin. “Your mind might be tempted to wander, but if you understand the business, you’ll be able to open conversations, reframe your proposals from a business point of view, and get your point of view across.”


Own your code

When you hear “be the owner of the code” you might be tempted to think “of course I am, my code is my baby”. But that’s the opposite of what it means! If your code is your “child” and you get defensive when it is cut, changed, or transformed, you are harming the product and the business. Ownership means always looking for ways to make the code better, sometimes at the cost of your own ego.


2. Pick up the trash

When you walk on the street and see a piece of trash, you will probably take it and throw it in the bin. You can do the same in a project: if there’s a part of work that nobody wants to touch — a procedure, a database — own it. Make it your goal to fix it, repair it, solve it.

Refactoring is part of the same mentality of picking up the trash. For example, if you have a 2000-line JavaScript program, don’t be the one who adds another 100 lines. Refactor. Clean up after yourself; don’t postpone it for a future date, because you’ll never get to it.

Refactoring might not be part of the job description or exist inside a story point, so you have to convince the customer that the process is essential. However, try to explain it not from your point of view (“this code is messy”), but from the point of view of the customer. Focus on the value that refactoring will bring: the code runs faster, is easier to extend, and is easier to maintain and repair when bugs are found. No product manager, architect, or customer would refuse the cost. The condition is to bring value.

“Here is my rule”, clarifies Codrin. “If I repeat a line of code a second time, I consider it a ‘yellow’ warning. If I have to repeat the same line of code a third time, I stop and I refactor. I have never broken that rule.”
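Codrin’s rule of three is essentially the classic “don’t repeat yourself” refactor. As a minimal sketch (in Python, with invented data and a hypothetical helper name), the moment the same normalization line would appear a third time, it moves into one function:

```python
# Before the refactor, the same "trim, lowercase, collapse spaces" line
# appeared three times across the codebase. After the rule of three
# triggers, the repeated logic lives in one helper.

def normalize(name: str) -> str:
    """Single home for the repeated logic: trim, lowercase, collapse spaces."""
    return " ".join(name.strip().lower().split())

# Every caller now reuses the helper instead of repeating the line.
customers = ["  Alice SMITH ", "BOB  jones", " Carol   WHITE"]
cleaned = [normalize(c) for c in customers]
print(cleaned)  # ['alice smith', 'bob jones', 'carol white']
```

A fix to the normalization rule now happens in exactly one place, which is the maintainability value Codrin suggests selling to the customer.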


3. Wreck something

Once you have an ownership mentality, you will understand accountability. Once more, the concept of accountability sounds scary, because it is often associated with blaming. Codrin Băleanu sees this differently: “Accountability is seeing the bigger picture and asking yourself: is there something that could break if I change this one line of code? Don’t be afraid of failure. Unless you experience it, you’ll never be a good engineer. Wreck something.”

After a bit, this attitude gives you more time to innovate, learn or research. And this — as you’ll come to see — is the only way forward.


Own your time

One piece of advice from Codrin for those who want to adopt a total ownership mindset can be summarized as “Don’t be the British Empire!” Sounds easy enough, right? But here is what it means.

“When the British Empire was at its peak, one of the reasons for its success was its ability to take people who were completely unprepared, place them in a factory, and have them produce luxury goods without any training. They had reduced manufacturing to such a degree that any person was expendable, a cog in the mechanism.” While an admirer of British Empire history, Codrin warns: “If you repeat everything ad infinitum and do everything the same way for years and years, you become expendable. The industry will disappear.”

A developer will never feel motivated and engaged in a British-Empire-like process. Simply repeating bits of other code does not leave you content. Being a developer means having the space to be creative and innovative, and that also means pushing back against being busy all the time. Developers are creative beings.


4. If you are 100% busy, you have no time to think

“If at this moment, you already know what you’ll do in the next 3 or 6 months, then that’s a problem. This is Agile done badly”, says Codrin. Cramming the schedule with tight-fit plans leaves no space for innovation. You cannot bring anything creative into something that has been planned for the next 6 months.

“When we are blocked by work, we don’t have time to think. Always push against this. And you do this by continuously improving processes, so they are better and allow you time.”


5. Innovate. Innovate. Innovate

Monoliths will always fall, just as previous monoliths fell when Netflix appeared. In old companies, processes are what they are, people are working, and business is going just fine. But all the while, someone from the outside is looking at those processes, analyzing them, and spotting things that can be done better. This is why process innovation is key to staying relevant.


6. Change your way to work

Things tend to fall into a routine quickly, but routine is the death of innovation and creativity. You always need to change something, sometimes as simple as changing your way to work. Other examples: change how you approach a story point, change a technology, or change your entire playing field. In time, this will help you not be scared of anything new, because change will be ingrained in you. You will stay relevant to the market.


7. Be lazy

One of Codrin’s favorite pieces of advice for young developers is to “be lazy”. By that, he means being very critical of the time designated for writing code. “Sitting in front of a computer for 8 hours does not make you a software developer.” You always need the mindset of “what else can I do?”. Or, on the contrary, if the work is boring, find a way to make it interesting. “For example, if you just type data, write a script that automates the process. Make the machine work for you. Be lazy.”
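As a small illustration of “make the machine work for you”: instead of retyping and re-totalling records by hand, a few lines of Python can do the whole pass. The file contents and field names below are invented for the example:

```python
import csv
import io

# Stand-in for a raw export file that would otherwise be retyped by hand.
raw = io.StringIO("id,amount\n1,10\n2,25\n3,7\n")

# One pass: parse each record, convert types, and keep a running total.
reader = csv.DictReader(raw)
total = 0
rows = []
for row in reader:
    amount = int(row["amount"])
    total += amount
    rows.append({"id": row["id"], "amount": amount, "running_total": total})

print(rows[-1])  # {'id': '3', 'amount': 7, 'running_total': 42}
```

Once the script exists, rerunning it on tomorrow’s export costs nothing, which is exactly the kind of laziness Codrin is advocating.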


Own your progress

As a junior, Codrin used to look for the hardest, scariest thing to do. “I was scared by many things: Linux, databases, the Vim editor, the cloud. As I felt overwhelmed by the new, the only solution was to learn. I would use a book, a tutorial, or a video.”

“If your job is too easy, make it harder.” This attitude sums up Codrin’s view of how the ownership mindset, together with continuous innovation, ties into professional progress.


8. Identify the people from your future

In the end, this ownership attitude is something that benefits not just the customer and the company, but the developer himself. “Recognized seniority comes with hard work and involvement. Seniority is something you gain. It is not given to you”, he says.

As practical advice on how to push yourself towards a higher rank, Codrin says to look for the people coming from the future: your future. Identify the people you want to be like 5-10 years from now; they represent your future. Learn from them to increase your chances of ending up like them.


9. Own your job to own your purpose

The road to seniority is peppered with “why”. “Why is this needed? Why does the customer want this? Why do we do things a certain way?” Having the answers to why gives you not only ownership, but a sense of professional purpose. When projects are too complex and opaque to understand, Codrin encourages team members to look for the right person to ask questions, until they gain a deeper understanding of its purpose. “If you don’t understand why you do something, and what is the purpose to which you are contributing, you’ll never like what you do.”


10. In a changing world, your mindset is the constant

IT is an industry of permanent changes. One company rises, and three others fall. As systems get more and more complex, ownership gets distributed and it gets more and more difficult to understand who is in charge of what. The only way to navigate the continuously transforming landscape is to have this one constant mindset: total ownership.




Data Lake as an answer — The evolution, standards and future driving force


Aleksander Bircakovic, Data Tech Lead @ Levi9


Designing a Data Lake: cloud or on-prem system?

Efficiency and scalability

Assessing current needs and predicting potential growth can be a challenging task. With an on-prem system, you need to assess both the current needs and the potential growth in the upcoming period in order to put together a business justification for securing the funds.
Cloud platforms, on the other hand, usually charge for services based on used or reserved processing power and used storage. This billing model enables a quick start on the journey towards an MVP solution. As the complexity of the requirements and the amount of data grow, a cloud-platform system can easily be scaled up. Storing data in the form of blobs is usually very cheap and practically unlimited. Database servers can be scaled as needed by allocating stronger instances, while processing power, in the form of code packaged in containers or distributed systems that are terminated after the work is done, is charged according to the processing power and other resources actually used. Tools like AWS Glue, Google Dataflow, and AWS Lambda are just some of the options that offer these capabilities.
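To make the pay-per-use billing model concrete, here is a toy cost sketch in Python. The rates are made-up illustrative numbers, not real provider prices: the point is that storage is billed per GB-month while compute is billed only for the seconds an ephemeral job actually runs.

```python
# Illustrative rates only -- not real AWS/GCP/Azure pricing.
BLOB_PER_GB_MONTH = 0.023        # example blob-storage rate (USD per GB-month)
COMPUTE_PER_SECOND = 0.00005     # example rate for an ephemeral processing job

def monthly_cloud_cost(storage_gb: float, job_seconds_per_day: float) -> float:
    """Pay-per-use model: storage per GB-month, compute only while jobs run."""
    storage = storage_gb * BLOB_PER_GB_MONTH
    compute = job_seconds_per_day * 30 * COMPUTE_PER_SECOND
    return round(storage + compute, 2)

# 500 GB of raw data plus a 10-minute daily batch job:
print(monthly_cloud_cost(500, 600))  # 12.4
```

Under this model an MVP with little data and short jobs starts out very cheap, whereas an on-prem build would require the full hardware investment up front.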

Data Catalog and service integration

Reliability, maintenance, and security

  • free disk space,
  • processor and memory allocation,
  • sharing of hardware resources with other applications (shared & noisy hardware) and users,
  • failure of one node in the cluster and redistribution of the topics to another or,
  • in a slightly more extreme case, the termination of the master node.
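Concerns like the first two bullets are exactly the kind of thing an on-prem team ends up scripting and monitoring itself. As a minimal illustration (not part of the original article), a free-disk-space check using only Python’s standard library:

```python
import shutil

def disk_usage_ratio(path: str = "/") -> float:
    """Return the fraction of the disk at `path` that is already used."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

# An on-prem operator might alert when usage crosses a threshold.
ratio = disk_usage_ratio("/")
print(f"disk used: {ratio:.0%}")
```

On a managed cloud platform, checks like this (and the failover scenarios in the last two bullets) are largely the provider’s problem rather than yours.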

Cost optimization

AWS, GCP or Azure? Similar concepts, different skin

Data lake and lake-house

Data lake or data mesh? Technological or organizational dilemma?

Data lake layers

Solution as a service — Databricks

Conclusion