Long-Term Effects of AI – Levi Niners Weigh In

Artificial Intelligence is seen as one of the most transformative technologies of our time, changing industries as well as how we live and work. Concerns about its potential long-term effects have existed from the start and are only growing. At Levi9, we recently organized an internal debate between two opposing teams: one supporting the benefits of AI and the other warning about its risks.

 

The format was straightforward: three speakers discussing the positive effects of AI would compete against three speakers examining its risks. The audience was able to vote in two rounds. Was there a clear winner? 


The AI Optimists

The Revolution of AI in Healthcare

Gabriela Hanganu, department manager at Levi9, was the first pro-AI speaker. She began with an engaging vision of how AI will change healthcare, focusing on practical applications that are already in use today and saving lives. “When we consider medicine, we picture doctors, hospitals, and treatments, but we often overlook technology – a crucial ally in saving lives.”

 

Gabriela cited a 2021 study published in The Lancet showing AI models reaching more than 95% accuracy in diagnosing cancer. The implications for drug discovery are also significant: “Creating and testing a new medication may require 10 to 15 years and can cost billions,” Gabi explained. “AI greatly speeds up this process.” She mentioned AlphaFold, an AI system that has largely solved the protein folding problem and could accelerate treatments for cancer and neurodegenerative diseases.

 

One of her most compelling arguments concerned preventive medicine. “AI can identify conditions such as Alzheimer’s, dementia, and diabetes as early as 5-6 years before the initial symptoms show,” she noted. “That is a very big impact.”

 

Gabriela ended with a statement that would resonate throughout the debate: AI is not a substitute for doctors; it is a powerful partner that gives them a technological edge. Doctors who use AI are essential and will remain so.

AI Will Create More Demand for Software

Java Tech Lead Iulian Macoveiciuc discussed the influence of AI on software development. His approach directly addressed the question that causes many developers to lose sleep: “Will AI take over the role of software engineers?” 

 

“This question is very simple and superficial,” Iulian argued. “We need to ask more specific questions.”

 

As Iulian sees it, there are two different groups in the development community: the optimists and the skeptics. “The optimistic trend is led by creative individuals and solopreneurs engaged in greenfield development,” he explained, referring to the ability to quickly build functional new software without any legacy code to account for. The skeptics are developers working on large projects with millions of lines of code, where AI currently struggles.

 

Iulian painted a picture that drew understanding nods from the audience: “Imagine a future where you never hear the term ‘technical debt’ again,” he proposed. He then explained how AI agents can produce better code by working in a loop: starting from acceptance tests, writing the implementation, running the tests, evaluating the results, and iterating (sketched below).
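
For readers who want a concrete picture of that loop, here is a minimal sketch in Python. The helper functions (generate_acceptance_tests, generate_code, run_tests) are hypothetical stand-ins for real LLM and test-runner calls, so treat this as an illustration of the idea rather than any specific tool.

```python
# A minimal sketch of the agent loop described above: acceptance tests first,
# then generate, test, evaluate, and iterate. The three helpers are
# hypothetical stand-ins for real LLM and test-runner calls.
from dataclasses import dataclass


@dataclass
class TestReport:
    all_passed: bool
    failures: list[str]


def generate_acceptance_tests(feature_request: str) -> str:
    """Hypothetical: ask a model to turn a requirement into acceptance tests."""
    return f"# acceptance tests for: {feature_request}"


def generate_code(prompt: str, tests: str) -> str:
    """Hypothetical: ask a model for an implementation that satisfies the tests."""
    return f"# implementation for: {prompt}"


def run_tests(code: str, tests: str) -> TestReport:
    """Hypothetical: execute the acceptance tests against the generated code."""
    return TestReport(all_passed=True, failures=[])


def agentic_coding_loop(feature_request: str, max_iterations: int = 5) -> str:
    tests = generate_acceptance_tests(feature_request)  # 1. start from acceptance tests
    code = generate_code(feature_request, tests)        # 2. first implementation attempt
    for _ in range(max_iterations):
        report = run_tests(code, tests)                 # 3. run the tests
        if report.all_passed:                           # 4. evaluate the result
            return code
        feedback = feature_request + "\nFailing tests:\n" + "\n".join(report.failures)
        code = generate_code(feedback, tests)           # 5. re-iterate with the failures as feedback
    raise RuntimeError("No passing implementation within the iteration budget")


if __name__ == "__main__":
    print(agentic_coding_loop("export invoices to CSV"))
```

The evaluation step is what keeps such a loop honest: the agent only stops once the acceptance tests it started from actually pass, or once it runs out of its iteration budget.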

 

Instead of threatening jobs, Iulian suggested that such developments would result in prosperity: “The world requires more software, and AI will create demand for even more software. It’s an uncapped industry.” In good IT companies the work is continuous; it is limited only by the engineers’ working hours. As AI raises productivity, engineers will create significantly more software, and companies will need more engineers to build it.

Software Developers Will Adapt and Thrive

The third speaker in favor of AI, Mobile Technical Lead Tiberiu Grădinariu, echoed Iulian’s optimism and stressed that software developers are not going anywhere, with one condition: they must adapt. “Job disruption is a reality, but historically, technology has always led to new job creation,” Tiberiu stated.

 

Tiberiu addressed the job displacement issue directly: “We should not be concerned about ALL jobs being taken by AI, but instead recognize that CERTAIN jobs will be impacted – mainly repetitive tasks that, I would say, do not provide satisfaction to those who do them.” He also noted that countries investing heavily in AI, such as the U.S. and China, have seen increases in AI-related job postings, countering fears of mass unemployment. 

 

Tiberiu highlighted the importance of perspective: “We are evaluating a very young technology based on its present efficiency,” he said, underlining that it has not yet had time to mature. He pointed to recent developments such as the DeepSeek model as evidence of progress toward more cost-efficient AI.

The AI Pessimists

Education: The Atrophied Thinking Muscle

The counter-offensive was led by Senior Data Scientist Ana-Maria Cehan, who read a statement on behalf of her fellow data scientist Ștefan Gabriel Iftime. The argument focused on education. “The educational process is very human, involving the development of both mind and soul.” What happens when artificial intelligence gets involved in this process?

 

Ana described three main ways that AI poses a threat to education: it reduces critical thinking, weakens memory skills, and hampers emotional growth. Based on research published in Nature Neuroscience, she explained how critical thinking activates several brain regions and creates stronger neural connections – a process that copy-pasting AI answers might undermine. 

 

Ana also referenced a study from Stanford University indicating that students who used AI for complex answers showed a notable decline in their ability to retain information and make logical connections. “Memory is not only a place for storing information,” she argued. “It is the base on which complex thinking is constructed.”

 

Finally, Ana made the case that AI is causing young people to miss out on the human connections that are inherent to the educational process.

Society Is Not Ready

The debate became even more serious when Data Architect Daniel Căpitanu came to the podium. Even though he manages a department heavily focused on AI development, his perspective was clear: humanity is not prepared for the technology it is creating.

 

“Observe our civilization,” he said, citing sobering statistics. There are 110 active armed conflicts around the world. Five trillion pieces of plastic float in our oceans. Ninety-nine percent of the world’s population lives in regions where air pollution exceeds the limits set by the WHO. Every year, 100 million animals are used in experiments.

 

Daniel invited the audience to envision the optimistic future painted by the pro-AI side, in which people would be provided for and no longer have to work. “It will be a short time before we start to ask ourselves about the meaning of life.” The issue, he said, is that the answer needs to be global, not individual.

 

However, the world lacks the cohesion to come together and find that meaning. Looking back to the pandemic, he recalled how easily we lost our humanity over something as mundane as a toilet paper shortage.

 

“We are similar to a two-year-old child who throws tantrums and breaks everything around them,” he argued. “We are not mature enough to survive without some kind of parental guidance, whether we refer to it as religion, work, or another thing that gives us structure and meaning.” 

His conclusion was not that AI is bad, but that humanity needs to take a mature approach and build the legal, ethical, and social infrastructure that would allow us to evolve alongside a technology as powerful as AI.

A Force for Evil

“In 1995, in Tokyo, five men released sarin gas on the metro, resulting in 13 deaths and 5,800 injuries.” This is how the last speaker, Data Tech Lead Iulian Prodan, opened the closing argument for the pessimistic team. With AI able to provide detailed instructions on how to build destructive devices, he argued, much worse incidents are bound to happen.

 

Iulian referenced a 2022 paper published in Nature Machine Intelligence, “Dual use of artificial-intelligence-powered drug discovery,” in which a research team demonstrated that an AI trained for drug discovery could generate 40,000 toxic compounds in just six hours. Imagine those falling into the hands of well-organized, well-trained terrorist groups.

 

Iulian also pointed to the political dimension, noting that the world’s most militarily powerful countries are also at the forefront of AI development. He highlighted recent developments in U.S. politics, where tech leaders were promised $500 billion in AI research funding and the elimination of regulatory oversight.

 

This, Iulian said, would give the first company to reach Artificial General Intelligence unprecedented superiority over other companies and countries. “History teaches us that when one group of people has superior technology compared to others, the outcomes are not very favorable for those who lack that technology,” he warned.

The Final Vote

Did the speakers manage to convince the audience to change their minds? Levi Niners cast their votes before and after the debate, but rather than reveal the count, we’d like you to reflect on the question yourself.

 

As our colleague Iulian pointed out, that might not be the right question. While the two opposing teams brought their best arguments forward, they shared an underlying idea: artificial intelligence is a tool. Like any other tool, it can be used for good (medical breakthroughs, cleaner code) or for harm (spying on political opponents, for example). As such, the real answer to the long-term impact of AI is not technological but human. What will humanity choose to do with it? We at Levi9 call ourselves “technology optimists”: we embrace emerging technologies and help the industry use them for good.

 

What is your take on this? 

Published: 22 May 2025
