An interview with Habte Woldu, CEO of Inteca and co-creator of the platform enabling the effective implementation of AI in operational practices.
As many as 95% of companies do not achieve real benefits from AI implementations, according to the MIT report The GenAI Divide – State of AI in Business 2025. Although global investments in AI technologies are counted in tens of billions of dollars, most organizations are still unable to translate their potential into concrete business outcomes.
“The problem is not in the technology itself, but in how it is incorporated into people’s daily work,” notes Habte Woldu, CEO of Inteca. “Today, AI often works alongside humans instead of together with them. It increases the productivity of individuals, but it does not translate into the effectiveness of entire organizations. What is missing is coherence, standardization, and the capacity to learn within real-world operational practices.”
It is the difference between technological capabilities and organizational efficiency that is the biggest challenge today. The AI revolution requires a new approach – not just automating tasks, but automating effects, understanding the context, and collaborating with intelligent agents in common, repeatable practices.
We talk to Habte Woldu, CEO of Inteca and creator of PractIQ, an intelligent platform that automates work with AI, about how to teach AI to work hand in hand with humans, how to break the barrier between experimentation and real transformation, and what foundations need to be created for AI to actually start “working for business”.
According to the MIT report, as many as 95% of companies do not get real benefits from AI implementations. Why do you think companies have such a big problem with this?
This is primarily due to the fact that most organizations think about AI in the wrong way. Today, employees use ChatGPT-type models mainly as personal assistants – to write texts faster, analyze documents or summarize information. This increases the productivity of the individual, but does not change the way the entire organization operates. This often happens “in the wild”, as part of the so-called Shadow AI, when employees use private accounts and tools without the company’s knowledge.
Meanwhile, the real benefits only begin when AI automates an entire workflow that actually generates business value. This requires a completely different way of thinking – moving from a “chat that helps a human” to an agent that performs tasks in the process on its own.
And this is where all the complexity comes in. Because if we imagine that an AI agent is supposed to perform specific steps of the workflow – alone or together with humans – then fundamental questions immediately arise:
- how to give it secure access to the organization’s data,
- how to build the right context for it so that it works repeatably and in accordance with standards,
- how to verify the correctness of the tasks it performs,
- how to update its knowledge and avoid repeating mistakes in the future.
This is a huge set of challenges that the MIT report describes very aptly. And that’s why so many companies are not yet able to translate AI experiments into real, repeatable business value.
What is the reason for this?
First of all, it follows from what I have already mentioned — in its current model, AI is difficult to “plug” into the real work of the organization. Respondents to the report point to several key barriers that make it virtually impossible to achieve scale and reproducible effects:
- the need to manually build context for each task, which makes the AI agent unable to operate autonomously,
- lack of adaptability – systems have a problem with unusual cases and situations that deviate from the standard,
- lack of ability to learn based on human feedback, which is why mistakes are often repeated,
- the inability to adjust AI to specific workflows and processes of the organization, which makes it impossible to work in a way that is consistent with the company’s standards.
As a result, AI remains a point tool, not an element that can take responsibility for the implementation of tasks from start to finish. And that’s why only a few companies today are able to translate the potential of technology into real operational transformation.
And what do you think about it?
When looking at the involvement of AI in an organization’s processes, I like to use the analogy of hiring and onboarding a new employee. Imagine that you hire an extremely intelligent and brilliant person to whom you entrust certain tasks in a process. In order for them to be effective, you need to take care of several fundamental elements: provide them with access to the data and tools they will work with, describe their tasks precisely, and define standards for their execution – including quality criteria.
People only become effective when they gain experience, i.e. complete a given set of tasks enough times, and when they work based on consistent standards. In well-organized companies, such standards exist; in others — they live in the heads of experienced employees.
Now let’s look at AI through the same prism. Generic AI models have a great deal of general knowledge and can make great inferences, but they don’t know the processes, tools, or standards of work of a particular organization. In this sense, they are like a new employee who needs to be introduced to the realities of a specific workplace.
And does it need similar onboarding as a human?
Exactly. In order for the AI to operate in agent mode, taking over tasks and responsibility for completing them, it needs to be provided with exactly the same things as a new team member:
- access to data and resources,
- the right tools,
- precise description of tasks,
- quality standards,
- and all of this embedded in a complete workflow, in which humans still play the key role today.
This is a complex challenge – also because current AI models have their limitations, such as the depth of the effective context, which makes it difficult to build full autonomy.
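As an illustration only – the names and structure below are hypothetical, not PractIQ’s actual API – the “onboarding package” for an agent described above can be thought of as a structured configuration that must be complete before the agent is allowed to act:

```python
from dataclasses import dataclass

@dataclass
class AgentOnboarding:
    """Hypothetical sketch of what an AI agent needs before owning a task."""
    data_sources: list[str]       # where it may read from (secure access)
    tools: list[str]              # the tools it is allowed to call
    task_description: str         # precise definition of the task
    quality_standards: list[str]  # acceptance criteria for the result
    workflow_step: str            # where in the workflow it operates

    def is_ready(self) -> bool:
        # An agent lacking data, tools, a task, or standards cannot act autonomously.
        return bool(self.data_sources and self.tools
                    and self.task_description and self.quality_standards)

onboarding = AgentOnboarding(
    data_sources=["requirements-repo"],
    tools=["code-search"],
    task_description="Draft use cases from the requirements document",
    quality_standards=["each use case has an actor, trigger and main flow"],
    workflow_step="analysis",
)
print(onboarding.is_ready())  # True
```

The point of the sketch is the readiness check: just as a new hire without system access cannot start work, an agent missing any of these elements should not be embedded in the workflow.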
I think that for these reasons, a new specialization is being born on the market, combining the competencies of classical software engineering with process engineering. This will be one of the key competencies of the coming years.
Does this mean that a new model of work is being born?
Yes, you could say that a completely new model of work is emerging. We are waking up in a reality in which AI not only supports humans with individual tasks, but is able to carry out entire tasks, and even complete processes, on its own.
This means developing a new paradigm of work organization — one that combines technology with methodologies that allow you to decompose the value stream into specific products and tasks. In other words, we need an approach that embeds intelligent agents in real workflows in a structured, repeatable, and scalable way.
There is talk of another technological revolution that is expected to make breakthroughs in many areas of life. But before that happens, AI needs to learn to work as a team – with and for humans. What are the biggest challenges in this context?
I have no doubt that we are on the threshold of a technological revolution comparable to the invention of the Internet or even the steam engine. However, the biggest challenge is not the AI itself, but the reproducibility and ability to learn in a way similar to how humans learn — that is, by adapting context, eliminating errors, and performing tasks in a predictable way, in accordance with organizational standards.
However, I would reverse what you said: it is not AI that has to learn how to work with people, but we have to learn how to use this technology properly. AI is a tool — powerful, but still just a tool. It is the human who defines the process, standards, quality criteria and the role that the intelligent agent is to play.
And importantly, we already have examples of processes in which AI does a significant part of the work. In areas such as software development, data analytics, and customer service, intelligent agents can already take over a large share – even tens of percent – of operational tasks, provided, of course, that they operate in a well-defined workflow, with clear context and human supervision.
So the biggest challenge is not to “teach AI teamwork”, but to create conditions in which it can work in a repeatable, safe and predictable way – as a real member of the team rather than an interesting technological gadget?
This is exactly the question we asked ourselves at Inteca in 2023. One of our key areas of activity is the development and maintenance of enterprise software. Looking at the breakthrough that large language models have brought, and understanding that software development is essentially the process of processing information—from user intent to implementation to a running application—we began to wonder if the entire process could be automated with AI.
Importantly, it was not about giving the engineers another assistant. The idea was that AI, in the role of an agent, would be able to take over most of the real tasks in the production process.
This is how the concept of PractIQ was born. We started experimenting internally — and very quickly we collided with two fundamental barriers.
The first is repeatability. For the same inputs, the same AI agent could perform the task in different ways – sometimes correctly, sometimes completely differently than expected. It was difficult to talk about quality or predictability, and without them, process automation simply does not work.
The second big challenge turned out to be the size of the effective context. In the case of complex tasks or large data sets, the AI agent is not able to cover the entire problem at once. Therefore, the process must be decomposed into smaller subtasks, data must be divided and the agent must be provided with a properly prepared, up-to-date context.
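The decomposition described above can be sketched as a simple greedy split: work units are batched so that each subtask fits within the agent’s effective context. The budget and cost function here are placeholders – in practice they would be token estimates, not character counts:

```python
def decompose(items, context_budget, cost):
    """Greedily split a large task into subtasks that each fit the context window.

    items: list of work units (e.g. requirement paragraphs)
    context_budget: maximum size an agent can handle at once
    cost: function estimating the size of one item
    """
    subtasks, current, used = [], [], 0
    for item in items:
        c = cost(item)
        if current and used + c > context_budget:
            subtasks.append(current)  # current batch is full; start a new one
            current, used = [], 0
        current.append(item)
        used += c
    if current:
        subtasks.append(current)
    return subtasks

# Toy inputs standing in for requirement documents of varying size.
docs = ["req-A " * 30, "req-B " * 50, "req-C " * 40, "req-D " * 20]
batches = decompose(docs, context_budget=400, cost=len)
print([len(b) for b in batches])  # [1, 1, 2]
```

Each batch then gets its own properly prepared, up-to-date context before being handed to the agent – the second half of the challenge, which the batching alone does not solve.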
And these are just two of the many problems we had to face while building PractIQ.
How did you manage to solve them and “teach” the agents to do it?
This question is more complex than it might seem – mainly because the word “teach” itself can be misleading in the context of AI. In the industry, it is usually associated with training or fine-tuning models, but that is only one method of working with large language models. Not always the most important one.
At Inteca we approached it differently. Since our goal was to fully automate software production – from user intent to the finished solution – we started by looking at what the whole process looks like today.
While the overall course of the SDLC is similar everywhere – analysis, design, implementation, testing, deployment – each team works slightly differently in practice, with its own habits, standards and ways of performing tasks. If we wanted AI to perform these tasks autonomously, we had to find a way to explain these practices to it in a universal and reproducible way.
And this is where the breakthrough appeared: defining operational practices and the way they are formally described.
Why?
Because AI models perform best when they have clearly defined:
- the expected outcome,
- input and output,
- rules of operation,
- and examples.
When we look at the work of experienced specialists, we see that this is exactly what they rely on: knowledge (a set of rules) and experience (many worked-through examples).
Our task therefore became to break operational practices down into a set of repeatable products and tasks, and to configure AI as specialized agents capable of performing those tasks.
The last but crucial element is coordinating the cooperation between agents and people – something we call work orchestration. It makes the whole process work coherently: AI agents not only perform their tasks, but do so in the right order and context, collaborating with humans as members of a team.
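A toy sketch of such orchestration – emphatically not the PractIQ implementation – shows the shape of the idea: workflow steps run in a defined order, each performed by an agent or a person, with agent output gated by human review and a feedback loop on rejection:

```python
def orchestrate(steps, agent, human_review):
    """Run workflow steps in order; agent output must pass human review.

    steps: ordered list of (name, performer), performer is "agent" or "human"
    agent: callable producing a draft result for a step
    human_review: callable returning (approved: bool, result)
    """
    results = {}
    for name, performer in steps:
        if performer == "agent":
            draft = agent(name, results)       # agent sees prior steps as context
            approved, final = human_review(name, draft)
            if not approved:                   # feedback loop: agent revises
                final = agent(name + " (revised)", results)
        else:
            final = f"{name} done by human"
        results[name] = final
    return results

workflow = [("analysis", "agent"), ("design", "agent"), ("deployment", "human")]
out = orchestrate(
    workflow,
    agent=lambda name, ctx: f"draft of {name}",
    human_review=lambda name, draft: (True, draft),
)
print(out["design"])  # draft of design
```

The stub `agent` and `human_review` callables are placeholders for real model calls and review steps; the point is the structure – ordered execution, shared context, and a human gate on every agent-produced artifact.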
How to translate all this into the processes that companies created even before the AI revolution? They are often complex and non-linear. How to weave AI into them and connect it with humans?
Contrary to appearances, it is in such processes that AI can deliver the greatest value. If the process is simple, deterministic, and high-volume – like classic manufacturing processes – AI may make less of a difference. On the other hand, where we have complexity, variability and a large number of decisions to make, AI can really relieve people and speed up work.
What does it look like in practice?
The first step is to decompose the process: break it down into its component parts, identify the practices used, and understand how these elements are related to each other. Only then can you choose the parts where AI will bring the most value — and start there.
This is the challenge we faced at Inteca. The process of delivering an IT project, i.e. software development, is inherently non-linear, iterative and highly variable. At the same time, it was already clear in 2023 that AI can program – generate source code. The problem is that in order to do it well and repeatably, it must receive the correct input: the results of the analysis, architectural decisions, the design of the solution. Only on this basis can quality code be generated.
That is why we were most interested in the more difficult part of the workflow – analysis, architecture, design. In our company, we had well-defined practices in these areas, so we decided to “translate” them into a language that AI understands, so that it can carry them out in the same way as experienced specialists do.
And from a business point of view, what was your goal?
At the beginning, research goals dominated for us. ROI took a back seat, because we wanted to verify the thesis and understand what consequences this direction of development may have for our company and, more broadly, for the entire IT industry.
That is why we have also deliberately addressed the most difficult elements of the process, such as the automation of requirements analysis or architectural design. If we could solve these fragments, it would mean that automation of the entire SDLC is really within reach.
And what advice would you give to other companies that would like to follow in the footsteps of Inteca?
The approach I recommend is simple in its assumptions, although it requires thorough analytical work: first, you need to understand the flow of products and tasks in the process and the links between them, and then choose the parts where it is worth starting automation. It is good if they meet three criteria:
- the ability to precisely define products, input and output data, and rules for tasks, and to create checklists that allow you to verify the correctness of their execution,
- clear business value of automation, e.g. shorter lead times, reduced labor intensity, or freeing people from monotonous tasks so that they can focus on activities that bring more value,
- structural simplicity of that part of the process – it is easier to start with less complicated, repetitive workflow elements, which in the current model are tedious and time-consuming.
Your idea to solve this problem is PractIQ, an intelligent platform that changes the way you work with AI, standardizes it, and automates it. Can you explain what it is?
PractIQ was created within our company, initially to automate our own software development process. Along the way, we encountered the same challenges that many industry reports, including the MIT report, describe. At the same time, we observed trends that market leaders signaled before they became widely recognized. For example, it is now widely acknowledged that vibe-coding does not work in serious production processes, and that AI agents need a solid input specification – hence the emergence of spec-driven development.
In our company, we came to similar conclusions quite early. But we asked ourselves more questions: if the input for the AI agent team is a good specification, then what is the input to the process of producing that specification? And how to automate this task with the use of AI? Is it possible to find a common scheme for defining such problems, which would allow us to build solutions for each practice, not only in our company, but in every organization?
And what did it lead you to?
This is how the concept of Practice Engineering was born — a discipline that deals with the design of work orchestration between AI agents and humans, with the aim of automating and continuously improving repetitive ways of working.
This is the foundation of our PractIQ platform: a method of decomposing work into smaller parts and orchestrating tasks between a team of people and AI agents, which enables repeatable, high-quality execution of processes and their continuous improvement.
How do you train AI agents today?
Imagine the most experienced employees performing specific tasks in a given process, and ask them to describe exactly how they perform them: what they pay attention to, what inputs they need and what outputs they produce, how they verify the correctness of the execution, with examples of good and bad execution. Then combine all these tasks and products into a coherent whole. This gives you the data you need to define the operational practice for a given process — something some would call best practice.
At PractIQ, we have developed a way to collect this information in a structured way so that AI agents can be set up to act as experienced employees. It is also crucial to activate their orchestration in cooperation with people, with the right tools, data and context.
And why are practices more important than processes today?
It’s not that they’re more important, but if we want AI to really support us at work, we need to get down to the level of practice. In data-driven processes, the same step can be accomplished using different methods. For example, the analysis of customer requirements in the software development process can be done using user stories or use cases. These are two different practices, having different inputs, outputs and connections with other elements, but performing the same function in the process.
If we want to entrust such a function to AI, we need to define each practice in detail so that it produces verifiable and repeatable products. It’s a bit like a robot on a production line: its operation is fully deterministic; the place from which it takes a screw and where it screws it in is determined down to the millimetre. In digital processes, not everything can be determined like this – e.g. the specification of a functionality, the program code, or the text of a customer’s complaint. There, what matters is the effective result of the task, i.e. a positive end outcome.
That’s why practice engineering was born—a method of decomposing products and tasks in a workflow that allows you to delegate them to AI agents. It’s like constructing a repeatable production line: the inputs and outputs at each workstation may vary, but in the end, the process must produce a repeatable result.
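One way to make such results verifiable – assuming a simple predicate-based checklist, which is an illustration rather than the platform’s actual mechanism – is to run every product an agent delivers through named checks, so that a failed check can be fed back to the agent as concrete feedback:

```python
def verify(output: str, checklist):
    """Run a checklist of named predicates against a task's output.

    Returns (passed, failures) so failed checks can be returned to the agent.
    """
    failures = [name for name, check in checklist if not check(output)]
    return (not failures, failures)

# Hypothetical checklist for a use-case description, per the earlier example.
use_case = "Actor: customer. Trigger: order placed. Main flow: ..."
checklist = [
    ("has actor", lambda s: "Actor:" in s),
    ("has trigger", lambda s: "Trigger:" in s),
    ("has main flow", lambda s: "Main flow:" in s),
]
ok, failed = verify(use_case, checklist)
print(ok, failed)  # True []
```

The checks here are trivial string predicates; in a real practice they would encode the quality standards the most experienced specialists apply – which is exactly why those standards must be made explicit first.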
How has this affected the company’s organizational culture and processes?
In a way, we are going back to the basics. We have always placed emphasis on understanding the entire customer value stream and taking a holistic approach to software engineering – from working with the client, through needs analysis, to the finished solution.
Today, we try to make engineers aware that conceptual work and customer interaction are crucial. These are the most valued skills, though in recent years it has been hard to bring them to the fore. The development of AI proves the point: if someone focuses only on repetitive tasks without deep context — e.g. a programmer performing a task described in detail by an architect or a tech leader — then they must know that AI is already able to take those tasks over completely.
The foundations that remain most important are working with a human to understand the intention, being able to translate this into a solution concept, choosing technologies and implementation practices, and a new type of engineering: building and fine-tuning the orchestration of people and AI agents.
Can you determine the direction in which the cooperation between humans and AI agents will develop?
In my opinion, the way of thinking about work is completely changing. We are moving in the direction where AI agents will become an integral part of the organization, opening up new opportunities, especially in harnessing the potential of the most experienced professionals.
In our company, we are moving away from talking about traditional software engineers. We are defining a new competency – Practice Engineers – who can design, orchestrate, and optimize the collaboration of humans and AI agents to deliver repeatable, high-quality results.

