The Open Banking Revolution
In 2007, Valentin Dominte was in high school, and he certainly did not follow news about how bureaucrats in the European Union were voting. Unbeknownst to him, a quiet financial revolution started in Brussels that year, one that would later be significant for his career as a software developer, for fintechs all over the continent, and for every EU citizen’s money: open banking.
In simple terms, open banking is a way for people to take back control of their financial information. Like Valentin, you might have two or three bank accounts, each with its own app and its particularities for making transfers, checking the balance, or granting a loan. If you wanted to have an accurate overview of your finances, you’d need to log in to each of those bank apps, extract the information and do all the calculations yourself. Open banking breaks down the walls between all of these different apps, making it possible for apps to pull information from the accounts you choose and give you real-time information about your finances that is gathered from all of them.
Technology is at the heart of open banking. At Levi9, Valentin Dominte is one of our most experienced developers in this field, having worked in open banking since 2018, and we asked him to share his insights into the topic.
“The official definition of open banking is the process of enabling third-party payment service and financial service providers to access consumer banking information such as transactions and payment history through APIs”, says Valentin. Some key expressions he highlights are “third-party payment,” “consumer banking information,” and “APIs”.
“A third party is a service that aggregates that data,” explains Valentin. “It can be an application from one of your banks, but it can also be completely independent, and you can have different third-party providers for different use cases.” The main benefit for the consumer is that they can get information in a way that is easier to use.
Some of the consumer banking information that can be accessed through open banking includes the account holder’s name, the account type (current, savings, etc.), and information about transactions (amounts, merchants, etc.).
APIs are at the heart of open banking, serving as a bridge between multiple financial services. Through APIs, different systems can talk to each other in a standardized way, meaning that developers can use them to build new features or services on top of existing systems. One important feature of APIs, especially in open banking, is that information is shared in a standardized and secure manner.
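To make this concrete, here is a minimal sketch of how a third-party provider might aggregate balances across several banks through such APIs. The endpoints, field names, and token handling below are purely illustrative assumptions, not any real bank’s open banking API; production integrations also involve consent flows and strong customer authentication.

```python
# Hypothetical sketch: a third-party provider aggregating balances through
# bank account-information APIs. Endpoint paths, field names and the token
# handling are illustrative, not any real bank's API.
import requests

BANKS = {
    "bank_a": "https://api.bank-a.example/open-banking/v1",
    "bank_b": "https://api.bank-b.example/open-banking/v1",
}

def fetch_accounts(base_url: str, access_token: str) -> list[dict]:
    """Call the (hypothetical) accounts endpoint and return account records."""
    response = requests.get(
        f"{base_url}/accounts",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["accounts"]

def total_balance(tokens: dict[str, str]) -> float:
    """Sum balances across all banks the user has consented to share."""
    total = 0.0
    for bank, base_url in BANKS.items():
        for account in fetch_accounts(base_url, tokens[bank]):
            total += account["balance"]["amount"]
    return total
```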
2. The EU regulations on open banking
Perhaps Valentin did not pay attention to EU open banking regulations in high school, but looking back he says that the concept of open banking in Europe is tightly linked with those regulations, known as PSD (Payment Services Directives).
The first PSD was released in 2007, with the EU Commission seeking to stimulate competition in the financial industry, enhance the quality of services provided, and protect the end user.
A second version of the PSD was released in 2015, introducing the concept of consumer protection against banks or third-party providers. “The focus now was on the end-user experience and privacy.” Two main concepts were enforced by this PSD2 directive. The first is strong customer authentication. “Basically that means as a bank you shouldn’t allow people to connect to your API without multi-factor authentication, let’s say. And the end user should have the same way of authenticating directly to the bank or through a third party. There should be no difference.” The second concept is that third parties should connect to banks in a standardized manner. Third parties are also obliged to register with an authority, adding another level of security.
Valentin says he is now keeping an eye on discussions related to a third directive. While following EU legislation might not be typical everyday work for a developer, Valentin builds a strong case for remaining one step ahead and analyzing the impact of legislation on technology.
3. How screen scraping became obsolete
To prove this point, Valentin reminisces about one of his first projects in open banking. Before APIs became the norm and before strict European regulations, developers were still looking for ways to let users access their financial data in a more friendly manner. “Because developers are creative and can find workarounds, there is an alternative to APIs: screen scraping”.
Screen scraping imitates what a person does on a portal, doing automatically everything a person can do by hand. “It meant impersonating the client in the bank portal to extract data or perform actions.” Screen scraping solves the issue of missing APIs, but it introduces several other problems.
“With screen scraping, the third-party provider controls how the consumer’s credentials are stored and secured,” warns Valentin. Moreover, clients don’t get to choose what information they share but rather have to give full access to the third-party provider. On top of that, screen scraping cannot get around multi-factor authentication and could violate the bank’s terms and conditions. Developers avoid screen scrapers not just because of security concerns but also because “this kind of integration is quite fragile.” What if the UI of the internet banking system changes for some banks? The third party has to adjust to those changes each time, as the sketch below illustrates.
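As a rough illustration of that fragility, here is a hypothetical scraper parsing transactions out of a bank portal’s HTML. The markup, class names, and fields are invented for the example; the point is that the hard-coded selectors silently break the moment the bank redesigns its page.

```python
# Illustrative sketch only: why screen scraping is fragile. The HTML
# structure and CSS selectors are invented; a real bank portal would
# also require a full login flow and could not bypass MFA.
from bs4 import BeautifulSoup

def parse_transactions(portal_html: str) -> list[dict]:
    soup = BeautifulSoup(portal_html, "html.parser")
    # Breaks as soon as the bank renames a class or restructures the table.
    rows = soup.select("table.transactions tr.txn-row")
    transactions = []
    for row in rows:
        transactions.append({
            "date": row.select_one("td.date").text.strip(),
            "merchant": row.select_one("td.merchant").text.strip(),
            "amount": float(row.select_one("td.amount").text.replace(",", "")),
        })
    return transactions
```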
These technical setbacks, coupled with EU rules, were the main reason that screen scraping became an obsolete practice.
4. How open banking breeds innovation
Open banking is a breeding ground for new ideas, and it encourages innovation by chipping away at large bank monopolies. “Third parties can provide a better user experience and steal the show, which should result in lower costs and, hopefully, a better experience for the end user,” says Valentin Dominte.
Saving time for customers
One way open banking makes a difference is by making it easier for customers to get loans. “For one of Levi9’s customers, we developed a system that saved the bank and its clients a significant amount of time. When applying for credit, clients had two options: one was to manually upload proof of their financial situation, such as salary slips, bank statements, rent agreements, or mortgage contracts. The second was to log into their bank account and choose which transactions represent income or housing costs.”
One immediate result was an improved customer experience. “The customers didn’t need to look for salary slips or dig around for their mortgage contract.” At first, about 40% of customers were unsure about sharing their information automatically with the bank. However, over the course of three years, the number of customers using the faster way to log in to the bank increased by a factor of ten.
Instant credit limit
In a second open banking Levi9 project, Valentin and his team replaced cumbersome manual steps and questionnaire filling with instant credit limit calculation. “We had the old system and the new, automated system run side by side. When clients applied for credit, they were randomly assigned to one of the two systems. Some went down the old road of filling out a questionnaire, providing proof of income and expenses, and having their answers manually assessed by a bank employee. But other customers had a much more straightforward experience, thanks to the Levi9 project: they simply logged into their bank account, their transactions were automatically analyzed, and they received their credit limit on the spot.”
With standardized communication between services through APIs and clear regulations, open banking is the perfect playground for technological innovation.
‘A partnership in anticipation of rapid growth’
Vision and ambition are becoming a reality
Future Insight envisages a great future for the digitalisation of processes and activities within the public domain. Their platform helps people to make the right choices in projects involving the living environment, such as infrastructure, development and the repurposing of sites and buildings. In partnership with Levi9, vision and ambition are becoming a reality. ‘The partnership enables us to think big and also put our ideas into action,’ says CTO and co-founder Rick Klooster.
The company provides three separately developed software solutions designed to optimise collaboration and shape successful projects: Clearly.Projects for construction projects; Clearly.BIM for optimum viewing and interrogation of BIM models; and Clearly.3D-City for accessing 3D-city models (Digital Twins). All three involve software that brings together and makes available data from different sources.
‘This means that the parties involved in construction projects, works on physical infrastructure, or the life-cycle of a building can access the right up-to-date data and tools to enable smart collaboration,’ explains Klooster. ‘We have built our reputation on this. In 2022, Clearly.BIM secured us the Building Smart Award in Estonia, based on the implementation of a BIM-based planning permission process.’
Open Urban Platform
With the assistance of Levi9 Technology Services, these solutions are being further developed and increasingly integrated. ‘Together, we’re building a growing ecosystem of all kinds of services,’ says the CTO. ‘This Open Urban Platform enables us to easily access specific solutions from third parties. Some of these relate to AI. As a result, we no longer need to develop everything ourselves, and other parties can serve part of their market using our platform.’
The Open Urban Platform is part of the Future Insights Clearly.Suite. It represents the combined vision and ambitions of CTO Rick Klooster and CCO Bas Hoorn. ‘It’s a dream that’s becoming a reality,’ says Hoorn, originally an engineer. For his part, Klooster has a lot of experience working with government. Future Insight has now become a genuine knowledge company with highly qualified specialists.
Expansion
In anticipation of its rapid growth, Future Insight decided to expand its own development team with external knowledge and experience in 2021. Klooster: ‘The key aim is to achieve a stable and rapidly scalable platform. In view of the intended growth, we also aim to organise ourselves and our teams more effectively and professionally. That’s why we made the decision to have a partner develop our technology.’ Following a thorough search, Klooster and Hoorn came into contact with Levi9.
‘They offered the competencies and quality we were looking for in those key areas. The partnership not only enables us to think big, but also to put those ideas into practice,’ says the CTO.
It has been a genuine partnership from the outset. ‘We have a similar culture and way of working,’ explains Albert Klingenberg, account manager at Levi9. ‘Future Insight are completely open with us about their vision and strategy. That’s reciprocated. This openness gives us the opportunity to keep each other on our toes and make real progress. Our people also really love working on the Future Insight platform.’
Important motivator
The founders are convinced that there are genuine global opportunities when it comes to sharing information in urban areas. For example, there are as yet no widely embraced worldwide standards when it comes to smart cities, buildings, areas, and infrastructures. With the help of Levi9, Future Insight is determined and able to play a crucial role in this development. An important motivator in this is the desire to address this global social challenge.
It is also helped by the fact that government authorities across the world are increasingly open to the cloud. CTO Rick Klooster: ‘That will potentially result in an increasingly wider application of international standards in the future. It will then become increasingly easier even for smaller municipalities to engage and collaborate in an integrated way without any form of vendor lock-in.’
Future Insight has the wind in its sails. The company has doubled its staff numbers in the last six months alone. ‘Things are really moving, and we’re working on all kinds of things simultaneously and clearly proving remarkably successful. The people working for us based in Serbia are actually our development department. That’s where our ideas really take shape.’
Source: itexecutive.nl
Rick Klooster, CTO Future Insight
AWS re:Invent — Niners share their experience
Lazar Veljovic, DevOps Architect @ Levi9
1. Levi9: a proud AWS partner
Back in 2016, a few enthusiastic people from Levi9 decided to start our journey towards an AWS partnership. With no certified people at that moment and practically just a bit of AWS experience, it was an unexpected journey. Fast forward to 2022: more than 120 certified people, two AWS competencies, one partnership program, and we are soon to become Premier Tier partners. We have achieved almost all of it, but our ambition is to achieve even more.
2. AWS re:Invent — keynote speeches recap
After many successful years of our partnership with AWS, this year we had a great opportunity to visit AWS re:Invent for the first time. Re:Invent 2022 was all about data and sustainability.
The keynote on the second day, presented by Adam Selipsky, CEO of AWS, confirmed the importance of data and sustainability these days. Amazon OpenSearch Serverless, Aurora zero-ETL integration with Redshift, and Redshift integration with Apache Spark were just a few of the important announcements during this keynote. Probably the biggest star was Amazon DataZone, a service that helps customers share, discover, and govern the usage of data across the organization.
We were amazed by the fact that AWS has more than 600 different instance types available. They also announced a new instance type powered by the Graviton3 processor with 200 Gbps of network bandwidth, making it ideal for network-intensive workloads.
AWS has big plans when it comes to renewable energy as well. By 2025 their plan is to be 100% powered by renewable energy, and by 2030 to be water positive, which means they will return more water to communities than they use.
The keynote presented by Swami Sivasubramanian, AWS VP of Data and Machine Learning, confirmed the importance of data and the technologies built around it. The focus was on tools for every workload, performance at scale, removal of heavy lifting, as well as reliability and scalability. Several new features were announced with the goal of fulfilling the above, such as Amazon Athena for Apache Spark, Elastic Clusters for Amazon DocumentDB, and Amazon SageMaker support for geospatial data.
Between keynotes and various other sessions, we were amazed by the re:Invent organization and the logistics for more than 50,000 attendees. It was interesting to see how many people are needed to support all the events happening around the conference: 20 waiters standing in a huge restaurant during lunch, pointing out where you should sit to optimize seat availability, or around 50 people checking badges and directing the crowds during keynote speeches.
The last day’s keynote was held by AWS CTO Dr. Werner Vogels, who gave a great speech about the world being asynchronous and how engineers should think that way while implementing software. A few new announcements were made during this speech, such as Amazon CodeCatalyst, EventBridge Pipes, AWS Application Composer, and Step Functions Distributed Map. The main takeaway is probably EventBridge Pipes, which helps connect various data sources and simplifies the implementation of event-driven applications.
3. Learning from AWS partners @AWS re:Invent
AWS gives a lot of attention to its partners, and during the conference we heard many great stories and achievements from many companies. There was even a keynote dedicated to partners, where many of them had the chance to present their solutions. The most impressive was a financial company from Brazil with 10 million accesses per day, 2,500 microservices, and 1 million API calls per minute, which concentrates 10% of all payments in Brazil. They managed to scale from 100,000 customers to more than 20 million customers in just five years! Their example taught us that re-engineering a complete platform and moving it to an “elastic” cloud environment is a great opportunity for large scale-ups. AWS-certified people were also recognized throughout the conference. We managed to meet many inspiring people, some of them holding all 12 AWS certifications, which is definitely a great achievement.
4. AWS re:Invent takeaway — Levi9 is on the right track
By meeting some of the great attendees as well as AWS employees, we realized that we at Levi9 are doing great things and moving in the right direction. It was an amazing experience to compare ourselves to some of the biggest AWS customers and partners. Even though we aren’t the biggest partner compared to all the giants that were there, nor the biggest AWS customer, our strategy and our goals are leading us into a bright future. So, who knows, with enthusiastic Levi9 people and great energy, we might become one of those giants in the future. 😊
After all, re:Invent is a great place to be. With all the sessions to learn from and the opportunities to meet experts from all over the world, it is also a nice place to have a bit of fun.
‘A partnership that brings out the best in everyone’
°neo - the new ‘banking grade’ SaaS platform
For the last decade, five°degrees has been supplying leading financial institutions with a tried-and-trusted core banking product. Last year saw the launch of a completely cloud-native version that will enable customers to continue to meet ever-changing market requirements even in the long term. The partnership with Levi9 proved to be instrumental in the development of the new platform. ‘It was exactly the type of synergy we needed,’ confirm CEO Martijn Hohmann and CTO Jeffrey Severijn.
Instead of modernising the existing solution, the decision was made to design a new one from scratch. It is now ready for the market. ‘Unlike the old stack, the new ‘banking grade’ SaaS platform – known as °neo – is component-based,’ explains CEO Martijn Hohmann. ‘Over time, the number of building blocks will gradually increase. It will also become easier to link external services and ecosystems together.’
Five°degrees and Levi9 Technology Services have been working together for seven years, but the interaction has entered a new phase in the last four years. ‘At management and shareholder level, we’ve been considering how to get the most value out of our relationship for some time,’ says the CEO.
‘Levi9 is really determined to create value on the business side for customers, partly because that’s also interesting for their own employees.’ – Hohmann
Added value
‘Delivering value for business is the holy grail for developers,’ says CTO Jeffrey Severijn, who also has ultimate responsibility for the °neo platform. ‘Being able to offer employees a challenging working environment is important for Levi9. As far as our traditionally designed °matrix solution was concerned, our relationship was gradually entering the danger zone. Developers were just losing their enthusiasm for it.’
According to the CTO, there was also another factor at play: ‘Outsourcing partners are facing significant increases in salaries in their international branches caused by the COVID-19 pandemic. That eats into the price advantage. Companies like these now need to distinguish themselves in different ways. For example, through specific market or domain knowledge that we can make use of.’
There is added value for five°degrees if a player like Levi9, with six delivery centres in Eastern Europe, has experience with the Azure cloud, for example within a media company. ‘They can then discuss applying those competencies for other customers, which has advantages for everyone: partner, staff, and customers. It even opens up the potential of wider co-creation in the future.’
It was during a frank discussion with Levi9 about all of this that things started to move forward.
‘We had talks with various different players about modernising our existing core banking product, but we achieved very little. However, Levi9 came from a different direction and suggested adopting a totally different approach using the latest methods and techniques. That brought a fundamental change in our partnership – and one that added value for all of us.’ – Hohmann
Start-up strategy
Around 80 people – half of the total capacity – worked on °neo over a three-year period. The completely new platform features lower variable costs, offers much more flexibility and has an important role to play in the company’s global ambitions. Severijn: ‘Together, we approached it as if it was a start-up: from the initial design to a minimum viable product before going on to develop a technical MVP. All of it agile and focussing on the potential for rapid upscaling. Ultimately, it all turned out well.’
The partnership is based around the shared objectives that the two companies aim to achieve as a team. ‘It really is a joint initiative,’ say the CTO and CEO. ‘Our people sometimes struggled with that. It was no longer a case of “we ask, and they do what we say”, as you would approach it with development teams in India or Vietnam, for example. Everything is based on a relationship of equality.’
According to the management at five°degrees, this has brought out the very best in everyone, starting from a blank piece of paper. Severijn: ‘The contribution made by Levi9 was primarily technical: the Azure cloud, event-driven architecture, microservices, and so on. Our main focus was on the functional aspects. We then came up with a lot of great ideas together.’
Speeding up the process
Examples of this include work relating to Logic Apps for integrating apps, data, services, and systems using automated workflows in Azure. ‘During a demo with the key engineers from Microsoft, it turned out that we were further advanced than they were on some points,’ says Hohmann enthusiastically. ‘It was so good in fact that they even adopted some of our ideas.’
As for the partnership, the CEO is keen to stress how productive and enjoyable it is. ‘If people from Levi9 have ideas about how things can be improved, we’re always keen to listen. In return, we gave a presentation about the future of banking to their development team in Serbia.’
Jeffrey Severijn: ‘During the demos, members of the Levi9 team also made regular contributions. That helps create ownership and mutual understanding. Even when there are challenges, if something takes slightly more time, or if the requirements are unclear. The fact that we already knew each other allowed us to speed up the process that little bit. Ultimately, developing a new relationship always takes time – but we understand each other’s strengths and weaknesses.’
‘All in all, it was exactly the synergy we needed to make a success of the °neo project,’ concludes CEO Martijn Hohmann. ‘I’d even go so far as to say that it would have been impossible without it.’
Source: itexecutive.nl
Martijn Hohmann, CEO five°degrees
'From concept to American market in record time – by working together'
Incision Assist
Incision has become a worldwide success with its video-based education and training for operation room personnel. Now, alongside its trusted e-learning environment, Incision has launched a new digital assistant, developed in partnership with Levi9. This tool went from idea to market in record time. ‘We started in June 2021, and four months later, it was live,’ says Raimo van der Klein, Chief Product & Technology Officer. ‘And a year later, we were already running pilots in American hospitals.’
The mobile app is a strategically important expansion of the Amsterdam scale-up’s existing e-learning platform. ‘Now we can offer users direct support before and during the surgery,’ says Van der Klein, who stepped into his role as CP&TO two-and-a-half years ago. ‘This new digital product is crucial for the relationship with the end user.’
Incision Academy has been around for a while, offering e-learning accredited by international medical associations to train medical professionals in operation room-related activities. Now, the new Incision Assist tool gives those professionals direct access in the operation room to all relevant information: instructions, manuals, requirements, and the personal preferences of the performing surgeons – and all this for a wide range of procedures.
Co-creation
Incision Assist was developed in record time in a process of co-creation with Levi9: from concept to worldwide market launch in under one year. ‘The challenge was that there was relatively little we could reuse from our existing e-learning environment. And on top of that, with Assist, we’ll become part of the digital infrastructure of the hospital, so part of a strongly regulated IT environment. That requires enterprise-grade software.’
'We can help the people in the operation room with any procedure'
Incision was founded in 2014 by Dr Theo Wiggers and a group of investors. It is now a cutting-edge collective of driven doctors, software developers, marketers, and professionals working to grow the business from its home base in Amsterdam. ‘We believe in sharing surgery-related knowledge and skills and making them available to everyone,’ says Van der Klein.
The newly developed app will be a boon to students, personnel in training, and temporary staff in particular. ‘With Incision Assist, we can help operation room staff with any process and any procedure, in a uniform way. Its advantages are better preparation, reducing risks, improving team functioning and, ultimately, better medical outcomes. In combination with Incision Academy, this gives us a rock-solid proposition.’
Increasing impact
About the co-development process with Levi9, Van der Klein says, ‘They had also been involved in the development of the existing e-learning platform. From the moment that we decided to increase our impact in the digital domain, we worked out the best approach in a very effective dialogue. The goal was to, in a relatively short amount of time, put out a product that was at least viable, and that we could then scale up to the highest quality requirements fast and efficiently.’
Incision and Levi9 as partners went through a hyper-fast learning phase in a working process in which they used feedback from customers and users to add more and more features to the app. ‘At this point, we’re going forward with the expansion and upscaling of the platform based on modular technology.’
'Without Levi9, we'd never have been able to market a product this good, this fast'
Van der Klein calls the joint development process a transformation within a scale-up. ‘And that in an industry that’s super-complex, that’s struggling to fill jobs right now, that’s heavily regulated, and that has extremely high standards for security. Because of these challenging and above all complex dynamics, we knew we needed a technology partner that could give us the right people, quality, and working methods. Without Levi9, we’d never have been able to do this as well or as fast.’
Source: itexecutive.nl
Raimo van der Klein, CPTO Incision
Make your job harder and 10 other ways to adopt a total ownership mindset
When Codrin Băleanu was a junior software developer, he used to print out his code on paper. He would select a particularly intricate piece of code, send it to the printer, take the papers with him, and read them quietly. He would read until he saw the workflow in front of his eyes, until he could visualize the data flowing as smoothly as a river.
Now an Engineering Lead at Levi9, Codrin describes himself as simply a person who gets paid to do what he likes. And he credits most of his career advancement to that attitude that made him read code on paper until he understood it completely: Total Ownership Mindset.
Table of contents
Own the project
1. The customer business is your business
Own your code
2. Pick up the trash
3. Wreck something
Own your time
4. If you are 100% busy, you have no time to think
5. Innovate. Innovate. Innovate
6. Change your way to work
7. Be lazy
Own your progress
8. Identify the people from your future
9. Own your job to own your purpose
10. In a changing world, your mindset is the constant
Own the project
“Total ownership” is a concept that gets thrown around a lot during agile meetings. It might sound a bit intimidating, as if people are expected to do much more than their fair share and to place work, the customer, or the project above everything else, including personal life. But Codrin says the concept is completely misunderstood.
“I think of it like a car I just bought”, says Codrin. “It is mine, I take care of it. I treat it with care, I don’t want it to get scratched, I don’t want to smash it into walls.” A car owner might seek to always improve his car, buying accessories, equipping it with new gadgets, and finding ways to make it run better. “In the same manner, if I own my work — be it a customer, a product, a task — I take care of it. I want it to work better, faster, and to be more interesting.”
In other words, total ownership does not mean that your work never stops, but rather that you treat it as if it’s your job to make it better.
Here are 10 pieces of advice from Codrin about how to approach and boost your ownership mindset.
1. The customer business is your business
The first rule of the ownership mindset is to understand the business of your customer and understand how that business creates money, part of which will end up in your pocket. If your customer has an issue, you’ll be able to move mountains and do anything that needs to be done to solve that problem. You might end up solving problems that are not part of your expertise or technology, but that will help you grow. This is the root of total ownership.
Part of owning a project means understanding that you and the customer are fighting for the same goal. “Listen to the customer when he talks about business,” advises Codrin. “Your mind might be tempted to wander, but if you understand the business, you’ll be able to open conversations, reframe your proposals from a business point of view, and get your point across.”
Own your code
When you hear “be the owner of the code” you might be tempted to think “of course I am, my code is my baby”. But that’s the opposite of what it means! If your code is your “child” and you get defensive about it being cut, changed, transformed, you are harming the product and business. Ownership means always looking for ways to make it better, at the cost of your own ego sometimes.
2. Pick up the trash
When you walk on the street and see a piece of trash, you will probably take it and throw it in the bin. You can do the same in a project: if there’s a part of work that nobody wants to touch — a procedure, a database — own it. Make it your goal to fix it, repair it, solve it.
Refactoring is part of the same mentality of picking up the trash. For example, if you have a 2,000-line JavaScript program, don’t be the one that adds another 100 lines. Refactor. Clean up after yourself; don’t postpone this for a future date, because you’ll never get to it.
Refactoring might not be part of the job description or exist inside a story point, so you have to convince the customer that the process is essential. However, try to explain it not from your point of view (“this code is messy”), but from the point of view of the customer. Focus on the value that refactoring will bring: the code works faster, it’s easier to extend, and it’s easier to maintain and repair if any bugs are found. No product manager, architect, or customer will refuse the cost, as long as it brings value.
“Here is my rule,” clarifies Codrin. “If I repeat a line of code a second time, I consider it a ‘yellow’ warning. If I have to repeat the same line of code a third time, I stop and refactor. I never broke that rule.”
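As an illustration of that rule (our own example, not code from any project mentioned here), this is the kind of duplication that would trigger the refactor on the third occurrence:

```python
# A small Python illustration of the "third repetition" rule. The handlers
# and the validation are invented for the example.

# Before: the same check pasted into every handler.
def create_order(payload: dict) -> None:
    if "customer_id" not in payload or not payload["customer_id"]:
        raise ValueError("customer_id is required")
    ...

def update_order(payload: dict) -> None:
    if "customer_id" not in payload or not payload["customer_id"]:
        raise ValueError("customer_id is required")
    ...

# After: instead of pasting the check a third time, extract one helper
# and call it from every handler.
def require_customer_id(payload: dict) -> str:
    customer_id = payload.get("customer_id")
    if not customer_id:
        raise ValueError("customer_id is required")
    return customer_id
```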
3. Wreck something
Once you have an ownership mentality, you will understand accountability. Once more, the concept of accountability sounds scary, because it’s often associated with blaming. Codrin Băleanu sees this differently: “accountability is seeing the bigger picture and asking yourself: Is there something that could be broken if I change this one line of code? Don’t be afraid of failure. Unless you experience it, you’ll never be a good engineer. Wreck something.”
After a bit, this attitude gives you more time to innovate, learn or research. And this — as you’ll come to see — is the only way forward.
Own your time
One piece of advice from Codrin for those who want to adopt a total ownership mindset might be summarized as “Don’t be the British Empire!” Sounds easy enough, right? But here is what it means.
“When the British Empire was at its peak, one of the reasons for its success was its ability to take people who were completely unprepared, place them in a factory and have them produce luxury goods, without any training. They had reduced manufacturing to such a degree that any person was expendable, a cog in the mechanism.” While an admirer of British Empire history, Codrin warns that “if you repeat everything ad infinitum and do everything the same for years and years, you become expendable. The industry will disappear.”
A developer will never feel motivated and engaged in a British-Empire-like process. Simply repeating other bits of code does not leave you content. Being a developer means having the space to be creative and innovative and that also means pushing against being busy all the time. Developers are creative beings.
4. If you are 100% busy, you have no time to think
“If at this moment, you already know what you’ll do in the next 3 or 6 months, then that’s a problem. This is Agile done badly”, says Codrin. Cramming the schedule with tight-fit plans leaves no space for innovation. You cannot bring anything creative into something that has been planned for the next 6 months.
“When we are blocked by work, we don’t have time to think. Always push against this. And you do this by continuously improving processes, so they are better and allow you time.”
5. Innovate. Innovate. Innovate
Monoliths will always fall, just as all the previous monoliths fell when Netflix appeared. In old companies, processes are what they are, people are working, and business is going just fine. But all the while, someone from outside is looking at those processes, analyzing them, and seeing spots that could be done better. This is why process innovation is key to staying relevant.
6. Change your way to work
Things tend to quickly get into a routine, but routine is the death of innovation and creativity. You always need to change something — sometimes as simple as changing your way to work. Another example is how you approach a story point, change a technology or change your entire playing field. In time, this will help you to not be scared by anything new, because change will be ingrained in you. You will stay relevant to the market.
7. Be lazy
One of Codrin’s favorite pieces of advice to young developers is to “be lazy”. By that, he means being very critical about the time they dedicate to writing code. “Sitting in front of a computer for 8 hours does not make you a software developer.” You need to always have the mindset of “what else can I do?”. Or, on the contrary, the work might be boring: then find a way to make it interesting. “For example, if you just type data, write a script that automates the process. Make the machine work for you. Be lazy.”
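A minimal sketch of that “be lazy” idea: instead of typing entries into a system by hand, script the repetitive part. The file name, columns, and submit step below are hypothetical placeholders.

```python
# Hypothetical automation sketch: read the rows that would otherwise be
# typed in manually and push them through whatever the manual step was.
import csv

def load_entries(path: str) -> list[dict]:
    """Read the rows that would otherwise be entered by hand."""
    with open(path, newline="", encoding="utf-8") as handle:
        return list(csv.DictReader(handle))

def submit(entry: dict) -> None:
    """Placeholder for the manual step (an API call, a form, an SQL insert)."""
    print(f"submitting {entry['id']}: {entry['description']}")

if __name__ == "__main__":
    for entry in load_entries("entries.csv"):
        submit(entry)
```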
Own your progress
As a junior, Codrin used to look for the hardest, most scary thing to do. “I was scared by many things: Linux, databases, VIM editors, the cloud. As I felt overwhelmed by the new, the only solution to that was to learn. I would use a book, a tutorial, or a video.”
“If your job is too easy, make it harder.” This attitude sums up Codrin’s approach to how the ownership mindset, together with continuous innovation, ties into professional progress.
8. Identify the people from your future
In the end, this ownership attitude is something that benefits not just the customer and the company, but the developer himself. “Recognized seniority comes with hard work and involvement. Seniority is something you gain. It is not given to you”, he says.
As practical advice on how to push yourself toward a higher rank, Codrin says to look for the people coming from the future: your future. Identify the people you want to be like 5 to 10 years from now, as they represent your future. Learn from them to increase your chances of ending up like them.
9. Own your job to own your purpose
The road to seniority is peppered with “why”. “Why is this needed? Why does the customer want this? Why do we do things a certain way?” Having the answers to why gives you not only ownership, but a sense of professional purpose. When projects are too complex and opaque to understand, Codrin encourages team members to look for the right person to ask questions, until they gain a deeper understanding of its purpose. “If you don’t understand why you do something, and what is the purpose to which you are contributing, you’ll never like what you do.”
10. In a changing world, your mindset is the constant
IT is an industry of permanent changes. One company rises, and three others fall. As systems get more and more complex, ownership gets distributed and it gets more and more difficult to understand who is in charge of what. The only way to navigate the continuously transforming landscape is to have this one constant mindset: total ownership.
Data Lake as an answer — The evolution, standards and future driving force
Aleksander Bircakovic, Data Tech Lead @ Levi9
Back in 2006, when the phrase “Data is the new oil” was coined, it teased the possibility of a new trend that might be the next big thing on the horizon. Yet probably no one at the time anticipated the role data would play in the near future and the ways it would affect the technology stack, accompanied by a plethora of architectural principles and a variety of new job opportunities.
Today we are witnessing the rise of an industry in which the most profitable enterprises are those that own data and directly or indirectly generate profit from it. Advanced analytics, predictions, and reporting are driving the implementation of data-driven business models. This is a very dynamic area that has seen many trends and architectures designed to enable the collection, processing, and storage of large amounts of data while meeting the needs of scalability, security, and automation, and complying with legal regulations.
Although it is difficult to find an industry in which this model is not applicable, the digital marketing industry stands out as perhaps the most dominant one, primarily because of real-time bidding and smart targeting. Other domains that rely heavily on data are market analysis in regular sales chains, logistics in various forms and, with the growing popularity of the IoT concept, smart cities, the automotive industry, transport, and many other domains that use data to optimize processes.
In this article, we will touch upon some of the technologies and architectures that make these things possible in practice and share the experiences, challenges, and dilemmas we encountered across various projects, primarily emphasizing Data Lake architecture on cloud platforms as one of the most popular approaches at the moment.
Data lake architecture implies designing a system that collects and stores data from many different sources in a way that enables cheap and scalable storage of structured and semi-structured data, with the possibility of running transformation processes aimed at creating the necessary data projections. A projection is the result of a process that transforms raw data, enriches it with data from other sources or performs aggregations, and finally stores it so that the data structure and format are clearly defined and usable. Depending on the case, the technology stack builds upon two main principles, data streaming and batch data processing, quite often combining both approaches. Although there are cases where the near-real-time approach has its place, when it comes to data lake architecture, batch data processing usually plays the more significant role.
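As a minimal sketch of what building such a projection can look like in a batch job, assuming a PySpark environment and illustrative paths, schemas, and column names:

```python
# A minimal PySpark sketch of a batch "projection": raw events are read,
# enriched with a reference table and aggregated into a clearly defined,
# queryable structure. Paths, column names and the schema are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-projection").getOrCreate()

raw = spark.read.json("s3://example-lake/raw/events/date=2023-01-15/")
merchants = spark.read.parquet("s3://example-lake/reference/merchants/")

projection = (
    raw.join(merchants, on="merchant_id", how="left")      # enrich raw data
       .groupBy("merchant_category", "event_date")         # aggregate
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("transaction_count"))
)

# Store the projection in a well-defined, partitioned format.
projection.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://example-lake/projections/daily_merchant_totals/")
```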
Designing a Data Lake: cloud or on-prem system?
There are many parameters that play an important role when it comes to designing a data-lake system. If we look at some of the most common challenges and problems in implementation, it is inevitable that we will come across things like scalability, cost optimization and investment justification, the richness of the ecosystem of tools that are suitable for the given problems and the possibility of their integration, as well as issues of availability (SLA), reliability, ease of maintenance, speed of development, legal regulations and data security.
Bearing this in mind, perhaps the first question that arises is whether developing the system on on-premises infrastructure or on one of the cloud platforms better fits the case.
The enormous number of migrations of existing systems to one of the cloud platforms, as well as the large number of projects that start as cloud solutions, is no coincidence. The following chapters will dive into the common challenges and considerations when designing a Data Lake, while pointing out some of the advantages of cloud platforms.
Efficiency and scalability
Assessment of current needs and prediction of potential growth can be a challenging task. When talking about an on-prem system, it is necessary to assess the current needs as well as the potential growth in the upcoming period in order to put together a business justification for securing the funds.
On the other hand, cloud platforms usually charge for services based on used or reserved processing power and used storage, and with this billing model they enable a quick start of the journey towards an MVP solution. As the complexity of requirements and the amount of data increase, the cloud platform system can easily be scaled up. Storing data in the form of blobs is usually very cheap and practically unlimited. Database servers can be scaled as needed by allocating stronger instances, while processing power, in the form of code packaged in containers or distributed systems that are terminated after the work is done, is charged according to the processing power and other resources used. Tools like AWS Glue, Google Dataflow, and AWS Lambda are just some of the options that offer those capabilities.
Data Catalog and service integration
Storing structured data in one of the formats such as Avro or Parquet, with an adequate hierarchy of folders (or paths) that enables efficient partitioning, is a very popular and, by many criteria, efficient and cost-effective way of storing data. Within the AWS platform, the tool that enables the use of data stored in files, giving them semantics and making them discoverable and suitable for querying, is called the Glue Data Catalog.
The idea behind the Data Catalog is to create a unique interface for interaction with data regardless of whether the entity in the Catalog is stored in a database, file on S3 or some other type of storage. This approach enables outstanding integration possibilities between different components within the AWS platform.
Take, for example, the fact that AWS Glue jobs, which are one of the ETL tools offered by the platform, can manipulate data from different databases and combine it with data from S3. The same data can be queried directly via Athena with SQL-like queries, and, when needed, it is even possible to create a temporary Redshift cluster that performs some heavy processing, after which it can be terminated. All of that can be orchestrated with an orchestration tool such as Step Functions and automated with one of the “infrastructure as code” services like the CDK.
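A minimal boto3 sketch of the Athena part of that flow, querying a table registered in the Glue Data Catalog; the database, table, and results bucket are placeholders, and in practice this call would typically be one step inside a Step Functions workflow:

```python
# Start an Athena query against a Glue Data Catalog table. Database, table
# and the results bucket are illustrative placeholders.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT merchant_category, SUM(amount) AS total_amount
        FROM transactions          -- table registered in the Glue Data Catalog
        WHERE event_date = '2023-01-15'
        GROUP BY merchant_category
    """,
    QueryExecutionContext={"Database": "example_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started Athena query:", response["QueryExecutionId"])
```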
The existence of the Data Catalog, as well as the direct integration between components such as various types of consumer and producer tools for message queues and process orchestration tools, plus the possibility of implementing event-driven architectures, leaves many possibilities for designing complex architectures. That kind of freedom is difficult to achieve in on-prem systems.
Reliability, maintenance, and security
There is a huge variety of tools and frameworks that play a particular role and represent a vital part of the data pipelines and overall data architecture.
To mention a few of them, let’s take, for example, the data streaming platform Apache Kafka or the data storage and processing framework Hadoop, with all the tooling within their ecosystems. Now let’s imagine the maintenance of Kafka and Hadoop clusters and the necessary monitoring of the vital components of the machines, such as:
- free disk space,
- processor and memory allocation,
- sharing of hardware resources with other applications (shared & noisy hardware) and users,
- failure of one node in the cluster and redistribution of the topics to another or,
- in a slightly more extreme case, the termination of the master node.
Let’s also consider upgrading the version of a system that is in use, or expanding the system by adding physical machines or disks. These are some of the challenges teams face when working on on-prem systems. Of course, those systems undeniably have their place, primarily in organizations that do not want their data to leave the organization due to regulations, or in organizations that have already purchased hardware. Many big players have their own hardware, and in those situations development on that hardware is usually the better option, but when this is not the case, it is hard to ignore the advantages that cloud solutions bring.
Some of the benefits that stand out are reliability in the form of responsiveness guaranteed by the Cloud provider itself through its SLA, replication between Availability Zones in case of incidents, maintenance of versions and libraries in services, scaling when needed, integration and optimization as well as security itself, which is implemented on many levels starting from the network, through encryption up to the granular access control policies.
Cost optimization
We have talked about integration, security, and ease of use, but within all of that I would like to pay specific attention to the importance of understanding the service billing model and, through just one of many examples, show how some decisions can drastically affect costs.
Let’s take encryption as an example. File encryption in a Data Lake is a very important layer of data protection and access control. AWS offers several options for choosing the type of key through its KMS service, so it is possible to find the optimal balance between control over the encryption key and price, depending on the needs of the system. Decryption of files happens every time a user queries them. Imagine teams of data analysts regularly querying large amounts of data directly from files using tools like AWS Athena, or scheduled queries that run regularly. As it is a centralized system that will be used by many teams and other systems, and the intensity of querying is very high, the optimal choice of key type plays an important role in optimizing the cost of decryption and the total cost of the entire Data Lake system.
To sum things up: yes, it does sound attractive, but beware of the costs! Think about what security level you really need. The difference between SSE-KMS and SSE-S3 can be $1,000+.
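A small boto3 sketch of that choice, with placeholder bucket, key, and KMS alias. The cost difference comes from the fact that every read of an SSE-KMS object also triggers billable KMS requests, which adds up under heavy querying:

```python
# Upload the same object with the two encryption options discussed above.
# Bucket, key and KMS alias are illustrative placeholders.
import boto3

s3 = boto3.client("s3")
body = b"raw,data,file"

# Option 1: SSE-S3, keys fully managed by S3, no per-request KMS charges.
s3.put_object(
    Bucket="example-data-lake",
    Key="raw/events/2023-01-15/part-0000.csv",
    Body=body,
    ServerSideEncryption="AES256",
)

# Option 2: SSE-KMS, more control and auditability, but each decryption
# on read is a KMS request, which adds up under intensive Athena querying.
s3.put_object(
    Bucket="example-data-lake",
    Key="raw/events/2023-01-15/part-0000.csv",
    Body=body,
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-data-lake-key",
)
```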
AWS, GCP or Azure? Similar concepts, different skin
In the previous examples the primary focus was on AWS tools, but not surprisingly, similar possibilities are offered by other Cloud platforms as well.
While working on different cloud platforms, one cannot help but notice repeating patterns and similarities. Most of the services designed for a specific type of problem (storage, serverless execution, orchestration, message queues, and so on) can be found on all platforms. In addition, some of the tools that these platforms offer come from the open-source world. They are optimized to fit within the platform, with a layer for integration with other components on top of them. In that context, if we look closely at the implementation of AWS Glue jobs, we find the well-known Apache Spark, which is not surprising considering that the very syntax used to define Glue jobs is actually PySpark under the hood (for those who decide to go with Python). The apparently new concept of a DynamicFrame is nothing but a DataFrame, just additionally optimized and with interfaces for easier integration with other services. If we look behind the curtain of the GCP rival named Dataflow, we find nothing other than Apache Beam, which is again more than obvious considering the syntax for defining jobs.
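A short sketch of what a Glue job script can look like, assuming an illustrative database, table, and output path. It shows how thin the DynamicFrame layer is: the frame is read from the Data Catalog and can be converted into a regular Spark DataFrame at any point.

```python
# A minimal AWS Glue job in Python; database, table and output path
# are illustrative placeholders.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# DynamicFrame: Glue's wrapper, integrated with the Data Catalog.
dynamic_frame = glue_context.create_dynamic_frame.from_catalog(
    database="example_lake", table_name="transactions"
)

# Under the hood it is Spark: convert and continue with familiar DataFrame code.
df = dynamic_frame.toDF()
df.groupBy("merchant_category").count().write.mode("overwrite") \
    .parquet("s3://example-lake/projections/category_counts/")
```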
There are many such examples, but the point is that many of the tools the platforms offer are based on open-source tools that have been around for a long time. Implementations in the cloud environment are based on somewhat more traditional concepts, but cloud platforms make the job easier: they reduce the need to maintain physical machines, guarantee reliability, security, and scaling, and offer significantly simpler ways of integrating and orchestrating processes.
Now, let’s compare some of the frequently used services and tools from each platform that are intended for the same kinds of problems. We will take into account the three currently most popular cloud platforms: AWS, GCP, and Azure.
Code execution in the form of small tasks with fast response times and limited resources is something that undeniably fits into many scenarios in data lake implementations, and in many others. All the observed platforms offer such a service. On AWS we are talking about the AWS Lambda service, GCP offers this possibility through Google Cloud Functions, while on Azure it is called Azure Functions. All of these services are event-driven and serverless, with minimal differences between them, most of which relate to limits such as maximum resource allocation and execution time.
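As a minimal example of that service class, here is a Python AWS Lambda handler reacting to an S3 object-created event, one typical trigger in a data lake; the equivalents on GCP (Cloud Functions) and Azure (Azure Functions) look very similar.

```python
# A minimal Lambda handler: react to new files landing in the lake,
# e.g. to register them or kick off downstream processing.
def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
    return {"processed": len(records)}
```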
In addition to standard SQL and document-oriented databases, a Data Warehouse intended for analytical queries is also an inevitable tool offered by every platform. The symbiosis of such databases and data lakes will be mentioned in the following sections; the tools offered by the platforms are Amazon Redshift, Google BigQuery, and Azure Synapse Analytics. The main differences between these services lie in the way resources are instantiated, the way they scale, and the mechanisms for optimizing query costs (sort keys, partitioning, and so on).
Batch processing of large amounts of data, an extremely important segment in the design of a data lake, is supported through services such as AWS Glue, Google Dataflow and Dataprep, as well as a very rich set of Azure tools, including Azure Data Factory.
Up until now, we have been dealing with technologies and environments suitable for data lake development. Now let’s have a look at some ways of expanding that architecture and check out some trends that emerged as a response to challenges of both a technological and an organizational nature.
Data lake and lake-house
Although cloud platforms offer many possibilities when it comes to querying data directly from cloud storage, a slightly more traditional and very common type of data projection is storing data in OLAP databases, i.e. a Data Warehouse. These are much more suitable for analytical querying, bearing in mind that they have built-in mechanisms for optimizing the speed and cost of queries through partitioning, and the fact that they are columnar databases lowers the amount of data scanned in a query. Many visualization tools and tools for advanced analytics offer simple integration with such databases. The possibility of modeling data in a star or snowflake schema and transferring part of the logic to stored procedures makes them a very logical choice in many situations. This symbiosis between the storage of original data in the form of blobs (data lake) and projections in OLAP databases (data warehouse) is called a lake-house architecture.
This approach covers a variety of use cases. For some of the most common ones, let’s take for reference the tools offered by AWS. In cases where it is necessary to perform quick ad-hoc queries on the data in the Data Lake, AWS Athena is a perfect fit, while in the case of complex analytical operations and reports, choosing a DWH solution, specifically AWS Redshift, usually makes more sense.
Data lake or data mesh? Technological or organizational dilemma?
Being a centralized system, as a data lake by its core nature is, brings many benefits but has its downsides as well.
Let’s think for a moment about scalability, not in terms of computational power, but rather scalability in terms of the number of different domains that a single data lake can integrate with. There is a wide variety of business domains that the data lake team probably doesn’t have experience with or fully understand, plus integration with all sorts of third-party systems, many of which are closely encapsulated within their own ecosystems. Building, maintaining, and monitoring such a system is a tough challenge, and the only logical answer in this scenario is decentralization.
The concept of a data mesh architecture was introduced with the main idea of redistributing responsibility at the corporate level: the central data team is responsible for providing the system and a clearly defined way of integrating with it, while the delivery of data and its final use are the responsibility of the teams in whose domain the data lies, i.e. the teams that own that data. Looking at things from this perspective, the difference between these two approaches is primarily of an organizational nature, while the technology stack used in both approaches is quite similar.
Data lake layers
When talking about layers in a data lake, it usually relates closely to the different segments of the architecture, such as the data ingestion layer, the data processing layer, the insight-gathering layer, etc. Now let’s put things in a slightly different perspective and talk about layers as the different stages of the data itself in a processing pipeline: phases of data transformation that are usable by certain groups of people but too sensitive to be exposed to others.
Layering of data lakes serves multiple purposes, but the most common one is related to data sensitivity regulations and privacy. Usually, right after ingestion, data is stored in a raw format without any transformation or cleansing. This layer is never exposed to users and serves as a backup and a source for building data projections. On top of the raw data, usually as soon as a file arrives, the first transformations are applied, resulting in more structured files stored in a different location, and this is where things become interesting. From there, data goes through multiple transformations, and each of them results in a layer that can serve a different purpose. For example, one data projection can be fully anonymized, making it perfectly suitable for external usage and usage by different teams, while other projections can contain more sensitive data for internal usage. Some of the layers can incorporate a set of transformations that ease querying later on, or store data using different partitioning patterns or even formats.
Another common purpose is decoupling teams and groups so they can build their own solutions on top of their own layer without affecting any other users. They can rely on one of the parent layers as a single source of truth and use it as a backbone for building a completely custom-tailored solution for a specific group of users or a completely separate client, incorporating transformation rules, security requirements, and access control entirely separately from the other users.
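A schematic sketch of how such layers might be laid out in an S3-based lake, with per-layer path conventions and an anonymization step; the prefixes, layer names, and stripped fields are illustrative conventions rather than a fixed standard.

```python
# Illustrative layer layout and per-layer access rules for an S3-based lake.
LAYERS = {
    "raw":       "s3://example-lake/raw/{source}/{date}/",        # never exposed: backup + source of truth
    "curated":   "s3://example-lake/curated/{domain}/{date}/",    # cleaned, structured, internal teams
    "published": "s3://example-lake/published/{domain}/{date}/",  # anonymized projections, safe to share
}

def anonymize(record: dict) -> dict:
    """Strip fields that must not leave the internal layers."""
    public = dict(record)
    public.pop("customer_id", None)
    public.pop("iban", None)
    return public

def layer_path(layer: str, source: str, date: str) -> str:
    """Resolve the storage prefix for a given layer, source/domain and date."""
    return LAYERS[layer].format(source=source, domain=source, date=date)
```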
Solution as a service — Databricks
What is at one point the “next big thing” in the industry sooner or later becomes an industry standard, which usually attracts the attention of large development teams or groups of enthusiasts whose goal is to develop a solution that encapsulates an entire ecosystem of tools and to offer it as a platform or service. Some examples would certainly be the Confluent platform built around the Apache Kafka ecosystem and Cloudera around Hadoop technologies.
Since our topic is the data lake, in this section we will focus on Databricks, a service that offers many elements of a data lake and a data warehouse as a managed solution, and a good example of what a data lake, extended with many advanced analytics tools, could look like when offered as a service.
Databricks combines data lake and data warehouse solutions, offers options for team collaboration, and goes one step further by offering tools for BI and machine learning. It also covers many standard use cases and scenarios based on established good practices, primarily around how data is stored, queried, catalogued, and monitored. When choosing between a custom-tailored solution and an approach that relies entirely on the capabilities of a SaaS platform, there are many factors to weigh, a few of them being the billing model, the specifics of the system being developed, and the size and structure of the team. It is certain that such services have their place and are worth considering during the initial design of a solution, and this is an example that many will undoubtedly follow and strive to improve upon.
Conclusion
This article briefly covered the data lake as one of the popular approaches to creating a backbone for a data-driven business model. We saw the primary role it serves and how it can be further extended to cover a wide variety of use cases.
We considered the advantages of developing a data lake in the Cloud and had a glimpse into what the near future brings in the form of fully managed solutions that encapsulate the entire ecosystem of tools.
We also touched upon the similarities between different cloud providers and the core concepts that serve as a solid base for building specific tools and services. Understanding those core concepts makes this journey much easier, maybe even to the point where cloud-agnostic development is no longer much of a challenge and even becomes a standard.
As stated at the very beginning, this field is not new, but it certainly is dynamic. At this moment there are many good practices that have proven effective and many architectures that answer a wide variety of challenges. Also, bearing in mind the growth trend of generated and stored data, one thing is certain – new challenges and new interesting technologies are yet to come.
How to organize a hackathon to awe your customers and boost team morale
Hackathons can be fun and engaging activities for your team, as well as incredibly useful for the business of your customers. However, organizing a hackathon is not so straightforward. Organize it at the wrong moment, in the wrong way, or for the wrong reasons and you might find yourself with a demotivated team and no valuable business results.
We compiled this easy hackathon organizing guide, taking you step by step through the decisions you face and the best strategy for making sure that both your team and the customer will have a smile on their faces at the end of it all. This guide is based on some of our recent customer hackathon experiences and it includes real-life examples.
Start with the why
“Hackathon” is a buzzword. It sounds good on a company website, it looks impressive on a customer report. But this is not reason enough to have one.
Hackathons are a great way to boost morale, engage the team in an interesting initiative and let the creativity flow. From the point of view of the customer, this activity might encourage creative development and bring value-adding features to its product or service. It’s a win-win!
Beware, though. If you do it just for the sake of the team, your customers might not be on board and might not use any of the proofs of concept you show them. This can prove extremely demotivating for the team, and they might not be so excited about another hackathon the second time around. Conversely, if it is only your customer who pushes for the hackathon, but the team is not enthusiastic, then organizing it will just create added pressure and stress.
Earlier this year, we organized a hackathon for a world-renowned transportation company, right after the first release of the driver app, about which you can read more here. It was our customer who proposed that we organize it, with the dual purpose of relaxing the team after a stressful period and encouraging creative development.
Time it correctly
Great! You have decided that a hackathon is just what you and your customer need… so when should you schedule it? Let’s start with when NOT to schedule it: before a release! As a rule of thumb, it’s best to avoid all the stressful periods during a project timeline. Otherwise, topping tight deadlines and delivery pressure with the obligation to think creatively for two days simply adds stress and is completely unproductive.
You should schedule the hackathon at a time when the team is not in “survival mode”, putting out fires left and right. It may be after a big release, or it may be during a period when you anticipate an ease in the issues that arise. Check the calendars, to make sure nobody is on holiday during that period. Make sure to agree on the timeline with the customer.
You should also leave a significant amount of time between the announcement and the event itself: aim for at least 3-4 months. This in-between time allows new ideas to rise to the surface, issues to brew, and new features to pop up in discussions between teammates or with the customer.
What about the duration? One or two days is the norm. Peak work and creativity cannot be sustained for long periods of time. On the other hand, if you allocate just a couple of hours for this fun activity, it’s not enough to come up with a useful result.
Collect ideas
If the hackathon is not very rushed, then your team has plenty of time to come up with ideas about what they want to do: maybe new features? A new UI? Implement a new technology? Don’t say no to anything just yet. You’ll do that at a later stage.
Both seniors and juniors can propose ideas. If you have a junior team that has never gone through a hackathon before, they might be uncomfortable proposing new and daring ideas. Don’t force them. The first hackathon will be a learning and calibrating experience, teaching them what to expect on similar occasions, what is feasible, how high they can dream, and how limited they are in implementation. The more senior members can set a good example and come up with ideas to be later presented to the whole team.
The customer might also have some ideas of their own. Maybe they’d like your team to tinker with a new feature or to creatively solve an issue they noticed.
At this stage, all ideas are welcomed.
To keep track of them, you can hold a brainstorming meeting or keep an open list.
Select the ideas
Take a look at that idea list. You will not be able to do all of them. But how do you select what to work on?
First, use senior members to refine the list. As they have better exposure, they know what can be done. They have interacted with several frameworks and technologies, and they already know whether ideas such as a watch app, machine learning, or Siri make sense to implement for that particular project. They should also be more aware of the business value of each idea, keeping in mind that the end goal is to add value to the customer’s business or service.
Second, validate the list with the product owner: does the customer have something they really want the team to be working on? Is there something they feel would be a complete waste of time? Third, take a team vote: what do they want to work on?
During one of our hackathons, we let our team members vote and decide who wants to do what. They were over the moon! Our final work order was a list of 2-3 topics per platform and everyone got their pick. No member of the team was forced to work on a topic they did not want.
Who pays for the hackathon?
Your team will work for several days straight on these ideas, so make sure you clearly know who is financially supporting this. Customer hackathons are usually part of the project management timeline and the project management budget. However, make sure this is very clear for both sides. As the hackathon might cost the customer some money, you might find yourself in the position of having to “sell” this idea to your customer. In this case, your clear “whys” from the first step will be very helpful.
The day of the hackathon
If your list was correctly put together, then everyone will eagerly await the hackathon. According to the list and the team members’ choices, they will normally work in pairs or individually. A hackathon day is pretty much like an ordinary day, without all the meetings. Just make sure there’s plenty of food and coffee to go around.
End with a demo
This is one of the hottest tips we can give you about organizing a hackathon: demo, demo, demo. End this intense, fun effort with a presentation. Each topic on your work list should yield a tangible result that can be presented to the whole team and the customer. Have each group and/or person present their ideas in an engaging, visual manner for maximum impact on the customer: from new widgets to Siri voice activation, ideas only come to life if they are properly packaged.
Demos are about more than simply impressing a customer. They are great tools for stepping into the customer’s shoes and understanding their needs and the business value of that particular feature or idea. More often than not, developers do not think about these criteria when they are working on something or making project decisions. Presenting something to a customer puts things in perspective and allows for a better understanding of the business expectations, needs, and issues.
And one more thing: if the team is aware that they will be expected to deliver a demo at the end of the two days, they will go the extra mile.
Sometimes, the customer will ask you to do another demo, at a higher level. The more people in the audience, the better it is for the team.
Such was the case with our most recent hackathon. It ended not with one demo, but with several. As the first demo, inside the group, went very well, the Levi9 team was asked to give a demo at KY level.
How to measure success
How do you know if your hackathon was successful? Both parties should be quite pleased. The team should feel more content with their work after experiencing the immediate impact and reaction from the customer during and after the demo. The customer should be enthusiastic about some of the features that were presented and ask for them to be implemented. Keep in mind that even if you fell in love with one of the new features, the customer still might not choose it: business value will prevail in the final decision.
More often than not, you will know that your effort was successful once you hear this question from both the members of your team and the customer: “So when will we have another hackathon?”
Giarte ITX 2022 results are in and 86% of our customers would recommend us!
We’re very proud to report that we’ve recently scored great results for the 11th year running in the 2022 Giarte ITX review.
As it turns out, customers rate us very highly when it comes to reliability, communication, skills, empathy, willingness, and openness. Not bad, huh?
This year’s theme was IT experience benchmark, and with our customer focus, it seemed like a match made in heaven!
Obviously, we’re thrilled with the news but we are never one to rest on our laurels. That’s why we’ll continue to strive forward and keep improving in all areas. That’s just who we are!
Three Big Outcomes For Us
There were several key points about our performance that we were particularly proud of.
1) Trust
We achieved very high marks on Reliability, and a stunning 86% of customers would also recommend us.
The average Levi9 trust score is above that of our peer group of service providers. This score is built from trust indicators such as reliability, communication, competence, empathy, willingness, and openness.
2) Customer Focused
Levi9 scored a whopping 84% in the Customer Focused category.
Customer focus is the foundation of our work. We are proud this is recognised and valued by our customers.
3) Knowledge and Competencies
We scored 83% in the Knowledge and Competencies section, again an above-average score compared to our peer group.
At Levi9, we continuously invest in our people, offering them learning programmes and 360-degree mentorship opportunities where everyone can learn from everyone else. We also offer several online learning resources and certifications, as we believe learning never stops.
This year, 50 customers from our customer list responded, a vast majority, which says a lot about our relationship with our customers. They are willing to participate, and for that we are very grateful.
One of the main reasons for participating in Giarte’s ITX report is to benchmark how we’re performing internally and externally, so that everyone understands the real story behind our business.
To The Future
Levi9’s healthy development over the past years has been a testament to all the hard work of all our levi niners. Because we’ve placed our customers at the heart of our strategy, we’re even more responsive to their needs.
Overall, this very positive result reflects Levi9’s confidence and customer-centric philosophy. We can take pride in how our customers express their appreciation – which is a testament to all the levi niners who work with passion every day to deliver the best possible result.
We would also like to thank all our customers who have placed great trust in us and all our levi niners over the years. That is a massive compliment to us all!
A Bit About Giarte
Since 2002, Giarte Research has recorded customer organisations’ experiences of their IT service providers and currently works with around 42 service providers. They ask their customers all sorts of questions around collaboration, gain detailed insights and analysis on market position, and advise IT organisations on both the supply and demand side.
Bringing the tachograph’s simplicity back to drivers’ fingers with a transportation app
From the tachograph for trains in 1844 to fleet management solutions, the transportation industry was among the first to adopt modern safety measures for its beneficiaries. Our customer is definitely driving that shift, not only from a safety point of view but also from a technological one.
They approached us with a project to develop a tailored drivers’ app with a friendly UX that puts the features of a modern tachograph at the driver’s fingertips, for both Android and iOS users.
Their old solution was a complex tool that suited everyone’s needs — administrative staff and drivers would access it alike. As such, a driver would stumble upon needless and distracting information. What they needed was a straightforward app, flexible enough to respond to drivers’ varied levels of tech-savviness and device types.
The Levi9 Approach
Levi9 saw the potential in this project and drew inspiration from the origin of all these safety systems: the tachograph. This instrument’s simplicity revolutionized the safety of transportation and inspired lawmakers to make it a mandatory component to prevent accidents. According to a large-scale international survey cited by the European Commission’s Road Safety Thematic Report – Fatigue, ”between 20 and 25% of the car drivers indicated that, during the last month, they had driven at least once while they were so sleepy that they had trouble keeping their eyes open”. Components such as these ensure the long-term safety of drivers and the general public.
Tiberiu Grădinariu, who is a driver himself, took the wheel for this project. “I usually drive with my children in the car, so I try to be as careful as possible. However, my instinct is to drive fast and hurry, so my driving style is a mixture. Carefully in a hurry.”
Tibi’s driving style was reflected in the way he approached this project: heightened attention to safety in the transportation app, as well as a drive to solve problems fast and in the most efficient way. This is what led him to a brave solution.
The multiplatform solution
Levi9 proposed a cross-platform approach that optimizes the resources required to maintain the app long-term, both for Android and iOS. Before we tell you what we chose, here are the options we considered.
We chose the “new kid on the block”: Kotlin Multiplatform Mobile (KMM). While quite fresh on the market, it brings advantages such as code sharing between platforms, access to all platform APIs, the freedom to share only what makes sense, and a single codebase for the shared logic. When you add the simple maintenance and the ability to fall back to native code, you get a clean architecture and the ability to share as much common code as possible. According to a KMM survey, over 50% of participants share the following between platforms: networking, data serialization, data storage, internal utilities, algorithms and computations, data synchronization logic, and state management. The key benefit of KMM is having consistent logic between the iOS and Android apps.
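To give a feel for what sharing the logic looks like in practice, here is a minimal sketch of a commonMain repository that both the Android and the iOS app could call. It assumes Ktor 2.x with the kotlinx.serialization content negotiation plugin; the endpoint, the model fields, and the class names are invented for illustration and are not the actual driver app code.

// commonMain: model, networking and business logic shared by the Android and iOS apps.
import io.ktor.client.HttpClient
import io.ktor.client.call.body
import io.ktor.client.plugins.contentnegotiation.ContentNegotiation
import io.ktor.client.request.get
import io.ktor.serialization.kotlinx.json.json
import kotlinx.serialization.Serializable

@Serializable
data class DrivingStatus(val remainingDrivingMinutes: Int, val restDueInMinutes: Int)

class DrivingStatusRepository(private val baseUrl: String) {
    // Shared networking and deserialization: both platforms get exactly the same behaviour.
    private val client = HttpClient {
        install(ContentNegotiation) { json() }
    }

    suspend fun fetchStatus(driverId: String): DrivingStatus =
        client.get("$baseUrl/drivers/$driverId/status").body()

    // Shared business rule: should the UI warn the driver to take a rest soon?
    fun shouldWarnDriver(status: DrivingStatus): Boolean = status.remainingDrivingMinutes <= 30
}

On Android this compiles to a regular JVM class, while on iOS KMM exposes it to Swift through an Objective-C framework, so both UIs consume the same driving-time rules.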
All good developers know that continuous learning and adapting is an important part of the job, and this project helped us embrace a new technology that uses Kotlin and Swift. The iOS developers appreciated the challenge of becoming comfortable with Kotlin and Gradle, the KMM paradigm for accessing platform APIs, and the Ktor and Koin libraries.
Most of the technical challenges we encountered were on iOS and consisted of linking the shared code library with the iOS project, Kotlin/Native concurrency, accessing local files from shared code, debugging, and Kotlin concepts that are not available in Swift.
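The expect/actual mechanism is how shared code reaches platform APIs on each side. A simplified sketch, close to the standard KMM project template (the source sets are indicated in comments):

// commonMain/Platform.kt: shared code only declares what it needs from the platform.
expect fun platformName(): String

fun greeting(): String = "Driver app running on ${platformName()}"

// androidMain/Platform.kt: the Android source set supplies its implementation.
actual fun platformName(): String = "Android ${android.os.Build.VERSION.RELEASE}"

// iosMain/Platform.kt: the iOS source set reaches UIKit through Kotlin/Native interop.
import platform.UIKit.UIDevice

actual fun platformName(): String =
    UIDevice.currentDevice.systemName() + " " + UIDevice.currentDevice.systemVersion

The same pattern applies when shared code needs, for example, a local file path that differs per platform.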
Result: A Cross-Platform Transportation App
Levi9 delivered a cross-platform app with high performance, a beautiful UX, and great functionality for the end user. The modern-tachograph approach brings new functionality such as the day’s driving performance, distance and average speed, remaining driving time and when to stop driving, rest times, vehicle information such as fuel, reductant, and oil levels, the ability to create defect reports, and more.
To be continued: A hackathon for brave ideas
After the pilot release of the app, the team organized a hackathon. The context was suited for more experimentation and more courageous ideas.
Thanks to our client’s enthusiasm and openness to new ideas, the transportation app will soon include some of the features developed during the hackathon, while others will serve as ideas for a second hackathon.
If you are curious how you can organize a hackathon for a client, with spectacular results, stay tuned — we’ll soon publish a guide based on our experience.