Navigating the Challenges of Implementing Machine Learning in Business

My name is Dmytro, and I am currently an ML Engineer at Levi9 IT Services. I have ten years of experience in data analysis and processing, web services development, and machine learning. 

   

Artificial intelligence helps businesses optimize costs and increase efficiency. It is a relatively expensive technology, but it is worth implementing, especially if you need to process a lot of information. In this article, I'll walk through the hidden but necessary costs of MLOps and data preparation.

   

It will be helpful for you if you are planning to add AI to your project or have already tried to improve it in this way but failed.

Working with AI and ML: Pros and cons

The needs and expectations of customers of any modern business are high, and to provide products and services that will satisfy them, it is often necessary to process a lot of data. Due to the interest of global companies in big data and analytical systems, the Data Science market is growing – it is expected to reach almost $323 billion by 2026. Businesses are investing in data and artificial intelligence, so the demand for Data Science specialists is growing. 

   

Still, working with data is not always easy, and a 2022 study of the state of play in the Data Science industry helps explain what exactly can go wrong. A survey of 300 industry professionals – data analysts, data scientists, ML engineers, and data engineers – showed that they regularly run into the same set of problems.

According to the research, most respondents find it challenging to add AI elements to projects: suitable models already exist, but financial and technical capabilities are limited.

  

Even when AI is implemented, it often cannot fully meet the needs of the business: the model’s efficiency is lower than expected, it takes too many resources to support, or it does not integrate well with the existing architecture. More details on the problems of AI implementation can be seen in the following graphs.

Source: 2022 State of Data Practice Report

Why do deadlines run longer than expected?

The research aptly notes that everything takes time and that the problem mostly lies in incorrect expectations when planning a project. On top of that, technologies based on artificial intelligence simply require more time to develop well.

   

Machine learning is a product just like traditional software, and high expectations come with it. Today, it is hard to imagine a smartphone keyboard without auto-complete, an online store without recommended products, or social networks without personalized advertising. Today’s consumers are demanding, but innovations don’t appear overnight.

   

Businesses are trying to meet expectations and are often in a hurry. And if there is also a lack of real experience in AI implementation, a “domino effect” occurs, and problems overlap. 

  

Specialists underestimate data collection, processing, and dataset creation, even though the key to a successful ML solution is a good set of relevant data. The second underestimated problem is MLOps, mainly because data and model monitoring are complex. This is a rather broad topic that my colleagues explained well in this article.

Why is AI implementation more expensive than expected?

The cost of implementing AI-based technologies scales roughly linearly with the time the task takes. Paying for the team’s work, computing power, and other services for three months is much cheaper than paying for a year.

  

However, there are other factors. For example, cloud providers actively convince you that by handing your data to their black box you will receive a ready-made model: just build, train, and deploy, with plenty of tutorials on the Titanic or MNIST datasets. Everything looks very simple, but in 7 out of 10 cases the real task is more complicated and requires workarounds.

Why is the quality of the solution lower than expected?

Source: 2022 State of Data Practice Report

In ML, the quality of an analysis or model is usually assessed with accuracy metrics, infrastructure metrics, and business metrics.

A solution turns out to be poor quality when these target metrics are chosen or balanced incorrectly. A typical story: a team trains a model, hits its accuracy targets, converts the model, and ships it in an app. Suddenly users complain that their phones overheat, freeze, and the battery dies quickly. It turns out the model needs to be shrunk or moved to a different architecture, but then it no longer meets the accuracy metrics. As a result, the team has wasted at least six months of work.
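Accuracy targets like these should be pinned down numerically before development starts. As a reminder of what the standard classification metrics mean, here is a minimal pure-Python sketch (the toy labels are invented for illustration):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary classification metrics from two equal-length label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 4 positives in truth; the model finds 3 of them plus 1 false alarm.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f)  # 0.75 0.75 0.75
```

Precision answers "how many flagged items were real?", recall answers "how many real items did we catch?", and F1 balances the two.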

  

Business metrics are directly or indirectly correlated with accuracy metrics – if the accuracy metrics are chosen correctly, improving them improves the business metrics. Accuracy metrics and infrastructure metrics, by contrast, are almost always inversely correlated, although sometimes a pruned model generalizes better or stays at the same level. It’s always a trade-off, so all the metrics must be agreed in advance.

Why is the data in production different?

When the data in production changes, several things can go wrong. Over time, the data distribution shifts, and this is normal – you need to react correctly and in time, for example by recognizing data drift and re-training the model on new data. This is exactly what MLOps tools are for.
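One lightweight way to recognize data drift is the Population Stability Index (PSI), which compares the binned distribution of a feature in production against the training-time baseline. A minimal sketch using NumPy – the data is synthetic, and the 0.1/0.25 thresholds are a common rule of thumb, not a standard:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and new data.

    Common rule of thumb: below ~0.1 means no significant drift,
    larger values warrant investigation and possibly re-training.
    """
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)

    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)

    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0, 1, 10_000)
same = rng.normal(0, 1, 10_000)       # same distribution
shifted = rng.normal(0.5, 1, 10_000)  # mean shifted: drift

print(psi(baseline, same))     # small: no drift
print(psi(baseline, shifted))  # noticeably larger: drift
```

In practice a check like this runs on a schedule for every monitored feature, and a breach of the threshold triggers investigation or re-training.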

  

If you deploy a solution and immediately see that the model behaves strangely, that is a bad sign: you may have had a data leak, no proper test set, or an unrepresentative training sample. Careful data collection and processing are essential in this pipeline.
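Two cheap safeguards against the leakage scenarios above are splitting by time rather than shuffling, and fitting every preprocessing step on the training portion only. A minimal sketch with made-up records (field names and dates are hypothetical):

```python
from datetime import date

# Hypothetical records: (event_date, features, label)
records = [(date(2022, m, 1), {"amount": 100 * m}, m % 2) for m in range(1, 13)]

# Safeguard 1: split by time, never by random shuffle, so the test
# set only contains events that happen AFTER the training period ends.
records.sort(key=lambda r: r[0])
cutoff = date(2022, 10, 1)
train = [r for r in records if r[0] < cutoff]
test = [r for r in records if r[0] >= cutoff]

# Safeguard 2: fit any preprocessing statistic (here, a mean used for
# centering) on the training portion only, then reuse it on the test set.
train_mean = sum(r[1]["amount"] for r in train) / len(train)
train_scaled = [r[1]["amount"] - train_mean for r in train]
test_scaled = [r[1]["amount"] - train_mean for r in test]

print(len(train), len(test), train_mean)  # 9 3 500.0
```

Computing the mean over the full dataset instead would quietly leak information about the future test period into training, which is exactly the kind of bug that only surfaces in production.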

How to do it right

This scheme illustrates the basic algorithm for implementing AI, but certain stages are worth elaborating on.

Source: 2022 State of Data Practice Report

In other words, please don’t create substantial technical debt, because it tends to compound. And once you deploy the solution, you’ll need “eyes” and methods to monitor this vast mathematical graph.
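As a toy illustration of what those "eyes" can look like on the infrastructure side, here is a rolling-window health check that flags latency and error-rate regressions; the class name and thresholds are invented for the example:

```python
from collections import deque

class ModelMonitor:
    """Rolling-window health check for a deployed model (illustrative only)."""

    def __init__(self, window=100, max_latency_ms=50.0, max_error_rate=0.02):
        self.latencies = deque(maxlen=window)
        self.errors = deque(maxlen=window)
        self.max_latency_ms = max_latency_ms
        self.max_error_rate = max_error_rate

    def record(self, latency_ms, failed):
        """Log one serving request: how long it took and whether it failed."""
        self.latencies.append(latency_ms)
        self.errors.append(1 if failed else 0)

    def alerts(self):
        """Return a list of budget violations over the current window."""
        found = []
        if not self.latencies:
            return found
        avg_latency = sum(self.latencies) / len(self.latencies)
        error_rate = sum(self.errors) / len(self.errors)
        if avg_latency > self.max_latency_ms:
            found.append(f"latency {avg_latency:.1f}ms over budget")
        if error_rate > self.max_error_rate:
            found.append(f"error rate {error_rate:.1%} over budget")
        return found

monitor = ModelMonitor(window=10)
for _ in range(9):
    monitor.record(latency_ms=20.0, failed=False)
monitor.record(latency_ms=400.0, failed=True)  # one slow, failed request
print(monitor.alerts())
```

Real MLOps stacks replace this hand-rolled check with dedicated tooling, but the principle is the same: define budgets for each metric in advance and alert the moment a window breaches them.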

Key takeaways

The following takeaways provide essential insights on navigating the complexities of implementing machine learning in business, covering metrics prioritization, deadline transparency, leveraging business knowledge, talent selection, and investment in MLOps.

Published:
26 September 2022
