An interview with Otakar Horák: Data as the foundation for successful digitalisation

Marie Mundilová, Aimtec
30. 9. 2024 | 7-minute read

Data was the star player at this year’s Trends in Automotive Logistics conference. It’s no wonder: high-quality, clean data is essential for making the most of the potential that new technologies, digitalisation and automation can offer. We spoke with Otakar Horák about how to use these to the fullest. He works with data every day as the head of a division specialising in advanced planning systems.

Otakar, a phrase that comes up quite often in the context of data is “garbage in – garbage out”. I don’t really need it explained; I’m more interested in how true it is.

I use that expression fondly. It’s concise and perfectly accurate. When working with data, the rule is that data quality at the input affects the quality of the information I get from processing it. After all, why are we digitalising? It’s not just to make our work easier, faster, more effective, more automated and so on. Above all, digitalisation gives us an overview of what’s happening under our hands. Data is generated at every step of the digitalisation process, but its individual pieces are worthless on their own. We need to collect, interpret and compare them – in other words, to get information from them. Tools collectively known as Business Intelligence, for example, help us with that. But if you don’t have good data, the information you get from it will be misleading, and you can then make wrong decisions based on that false information. That’s the main idea behind the phrase – and it’s entirely true.

Is it true even today, in the age of the rise of AI – or rather, will it stay true? Won’t AI handle this problem?

Yes, but only partially. And from a certain standpoint, the rise of AI will escalate this problem even further. Let me explain. Naturally, AI is a great tool for quickly processing large amounts of data. But watch out: every artificial intelligence must first be “trained”, and that training requires an enormous quantity of very clean data. Remember how AI language models sometimes answer questions nonsensically and write gibberish? You notice the mistake at first sight. But could you also tell during an analysis of logistics processes? Probably not, because the mistake won’t read as nonsense. If you viewed the source data with an experienced eye, you might catch something off in it. But here you’re working so far removed from the source data that you have almost no chance of catching errors. That’s why, in the context of work with data, we’re increasingly hearing about Data Intelligence, which covers all work with data – from collecting it to cleansing it to overall data-handling management – with the goal of getting the best possible foundation for further data processing.

When working with data, the rule is that data quality at the input affects the quality of the information I get from processing it.

Otakar Horák, APS Solutions Director, Aimtec

That’s also one of the areas covered by the division that you lead. How did you move over from planning systems to overall data management?   

Very naturally. Advanced Planning Systems (APS), which have long been our division’s focus, are very sensitive to the sources they work from. With APS, we’re talking about planning extremely complex processes that are influenced by a number of variables and constraints. A system this robust needs sufficiently reliable data as the foundation for its work. We’ve learned that even companies that are advanced in their digitalisation can’t avoid situations where, say, some data is missing or mis-recorded simply due to human error. So the most complicated part of deploying the whole APS was catching these imperfections, which in the vast majority of cases led back to the source data. And so we created our own solution that helps us prepare master data for integration with both the APS and other systems.

Is in-house development worth it? Why didn’t you turn to one of the tools out on the market?

There are many data management support tools, and they’re surely high in quality. But they’re often very robust, so deploying them can be expensive and time-consuming. We have almost thirty years of experience in manufacturing. We know exactly what to look for and where errors arise most often. So we created a very lean, quickly usable solution that runs a series of tests revealing these errors. And it has proven itself. We’ve therefore decided to continue developing it and evolve it based on our customers’ needs, with the vision of offering a smart and useful tool that can be easily deployed and configured to fit each customer’s specifics and sector. The customer can then leave this work entirely up to us. We also guide them through all the follow-on processes, such as integration and the interconnecting of various systems. We also have experience with the SAP system, which in the automotive sector is almost a standard, I’d say – yet not everyone understands it, and we do.
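
To give a concrete feel for the kind of tests such a lean solution might run – purely as an illustrative sketch, with invented field names and rules rather than Aimtec’s actual implementation – a handful of targeted checks on master data can already surface the most common errors before the data ever reaches an APS:

```python
# Hypothetical master-data checks; field names, rules and sample records are illustrative only.
from dataclasses import dataclass


@dataclass
class MaterialMaster:
    material_id: str
    lead_time_days: int | None
    lot_size: int
    work_center: str | None


def validate(records: list[MaterialMaster]) -> list[str]:
    """Run a series of simple tests and return human-readable findings."""
    findings = []
    seen_ids = set()
    for r in records:
        if r.material_id in seen_ids:
            findings.append(f"{r.material_id}: duplicate material record")
        seen_ids.add(r.material_id)
        if r.lead_time_days is None:
            findings.append(f"{r.material_id}: missing lead time")
        elif r.lead_time_days < 0 or r.lead_time_days > 365:
            findings.append(f"{r.material_id}: implausible lead time ({r.lead_time_days} days)")
        if r.lot_size <= 0:
            findings.append(f"{r.material_id}: non-positive lot size")
        if not r.work_center:
            findings.append(f"{r.material_id}: no work centre assigned")
    return findings


if __name__ == "__main__":
    sample = [
        MaterialMaster("M-100", 14, 50, "WC-01"),
        MaterialMaster("M-101", None, 0, None),  # triggers three findings
    ]
    for finding in validate(sample):
        print(finding)
```

In practice, the value lies less in any single rule than in running a curated set of such tests automatically on every data load.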

But you said yourself that this topic reaches beyond just data cleansing and quality. Is that true for your solution too?

Certainly. Aimtec Data Intelligence, as we call it, has a much broader scope and even greater ambitions for future development. Think of our system as collecting all the data we define, cleansing it and then storing it in a data warehouse. From there, the possibility of harnessing such a valuable data source suggests itself naturally. We support Business Intelligence functionalities such as the creation of dashboards, reports, KPIs and visuals. A standard set of such outputs is provided, but it’s no problem to adjust them or prepare entirely new ones. We also run “data pre-processing”, where we prepare the data for downstream systems and give it added value. This can involve some very complicated mathematical functions; a solver is available for these, allowing complex mathematical optimisations to be calculated.
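
As a rough, hypothetical illustration of what a pre-processing step followed by a solver call might look like – the scenario, figures and column names below are invented for the example and are not part of Aimtec Data Intelligence – cleansed records can first be aggregated and then handed to an off-the-shelf optimisation routine:

```python
# Illustrative sketch only: aggregate cleansed demand data, then solve a
# small allocation problem with an off-the-shelf LP solver (SciPy's linprog).
import pandas as pd
from scipy.optimize import linprog

# Pre-processing: turn cleansed order lines into weekly demand per product.
orders = pd.DataFrame({
    "product": ["A", "A", "B", "B"],
    "qty": [120, 80, 60, 90],
})
weekly_demand = orders.groupby("product")["qty"].sum()  # A: 200, B: 150

# Optimisation: split total demand across two production lines so that cost
# is minimised while respecting each line's weekly capacity.
total_demand = float(weekly_demand.sum())         # 350 units
cost_per_unit = [1.0, 1.4]                        # line 1 is cheaper than line 2
capacity = [250.0, 200.0]                         # units per week and line

result = linprog(
    c=cost_per_unit,                              # minimise total production cost
    A_eq=[[1.0, 1.0]], b_eq=[total_demand],       # all demand must be produced
    bounds=[(0, capacity[0]), (0, capacity[1])],  # respect line capacities
    method="highs",
)

print("Units on line 1 / line 2:", result.x)
print("Minimum cost:", result.fun)
```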

[Screenshot: error detail report in the Aimtec Data Intelligence application]

All the data and all reports are available through a web application, where clients can configure access to information, and different views of it, for different user groups. The whole system is modular; individual functions can be turned on and off. This flexibility is also supported by the fact that Aimtec Data Intelligence is a cloud solution, provided to customers as a service. That means they’re freed of infrastructure and licence worries and largely avoid burdening their internal IT departments.
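
A minimal sketch of what such modular, per-group configuration could look like – the module names, user groups and structure here are assumptions made for illustration, not the product’s actual configuration format:

```python
# Hypothetical configuration sketch: which modules are enabled and which
# views each user group may see. Names and structure are illustrative only.
from dataclasses import dataclass, field


@dataclass
class TenantConfig:
    # Individual functions can be switched on and off per customer.
    modules: dict[str, bool] = field(default_factory=lambda: {
        "data_cleansing": True,
        "business_intelligence": True,
        "pre_processing": False,
        "solver": False,
    })
    # Different user groups see different views of the data.
    group_views: dict[str, list[str]] = field(default_factory=lambda: {
        "logistics": ["error_detail_report", "delivery_kpis"],
        "management": ["kpi_overview"],
    })

    def can_view(self, group: str, view: str) -> bool:
        return view in self.group_views.get(group, [])


config = TenantConfig()
print(config.can_view("logistics", "error_detail_report"))   # True
print(config.can_view("management", "error_detail_report"))  # False
```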

You mentioned you’re planning further evolution for Aimtec Data Intelligence. Do you know its direction?

We’re preparing to plug in AI. From the data-interpretation standpoint, the use of Natural Language Processing will be interesting – that means enabling the application to accept and answer data-analysis questions in “human speech”, just like the well-known language models can do now. But as usual for Aimtec, with AI deployment too we want to bring customers a tried-and-true solution that has been tested on practical use cases. The system will also keep developing in line with the field’s best practices; at the same time, we’ll retain its ability to adapt flexibly and, as much as possible, enable independent configuration of the solution so that it still fulfils its original function as a quickly usable tool.
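
As a simplified sketch of the principle only – the keyword mapping below merely stands in for a real NLP or language-model component, and the data and phrasing are invented – a question in “human speech” would be translated into a structured query against the data warehouse and answered from it:

```python
# Naive illustration of a natural-language question answered from warehouse
# data. A real implementation would use an NLP/LLM component; here a simple
# keyword mapping stands in for it. Table, data and phrasing are invented.
import sqlite3


def question_to_sql(question: str) -> str:
    """Map a question in 'human speech' to a query; a placeholder for NLP."""
    q = question.lower()
    if "missing lead time" in q:
        return "SELECT COUNT(*) FROM materials WHERE lead_time_days IS NULL"
    if "how many materials" in q:
        return "SELECT COUNT(*) FROM materials"
    raise ValueError("Question not understood")


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE materials (material_id TEXT, lead_time_days INT)")
conn.executemany("INSERT INTO materials VALUES (?, ?)",
                 [("M-100", 14), ("M-101", None), ("M-102", 7)])

sql = question_to_sql("How many materials are missing lead time values?")
print(conn.execute(sql).fetchone()[0])  # -> 1
```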
