
Continuous Casting in Industry 4.0

Episode 11

Espresso 4.0 by Wizata

The continuous casting process is one of the key processes within the steel manufacturing industry. How does continuous casting look in Industry 4.0? What benefits can we look forward to in terms of optimizing production?

As mentioned, continuous casting is a key step in the steel production process. But it can also be a bottleneck. There are several points in the process that could be optimized through advanced analytics.

For example, there is predictive maintenance on specific equipment such as cutting torches, and there is predictive quality. There is also the domain of energy optimization, which can be addressed through advanced analytics, machine learning, and everything else that falls under digitalization and innovation.

Predictive Maintenance in the Casting Process

Speaking of predictive maintenance, it's one of the buzzwords of Industry 4.0. Can you make the idea more concrete with specific examples from continuous casting?

If we consider predictive maintenance, there are clear opportunities in continuous casting: avoiding stoppages and downtime, avoiding failures, and planning maintenance efficiently so that it impacts production as little as possible.

Continuous casting, as you mentioned, is part of a bigger process. We have the blast furnace or the electric furnace, and everything that comes after casting. So casting can become a bottleneck: if we stop casting, we will likely have to stop the furnaces.

So it is not an isolated problem but rather one that can extend further, generating additional costs and losses in output.

Continuous Casting Optimization Tools

If we consider a process engineer in a steel manufacturing company or a steel mill who is responsible for the continuous casting process, how does their day-to-day look in a smart factory? What tools are they using to help optimize the process, and how do those tools help?

Advanced analytics in the steel industry could provide alerts for specific deviations or recommendations to take specific actions. For example, if we take the cutting torches, there may be a small piece of equipment, such as a potentiometer, that needs to be replaced. It degrades little by little, and the intelligence, the logic, whatever we want to call it, can track its degradation status and predict the time to the equipment's failure.
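
As a rough illustration of that idea, the sketch below fits a linear trend to a degrading sensor signal and extrapolates when it would cross a failure threshold. The signal, units, and threshold are invented placeholders, and a production model would be far more sophisticated, but it shows the shape of a time-to-failure prediction.

```python
# Minimal sketch: extrapolating a degradation trend to estimate time to
# failure. Signal, units, and threshold are hypothetical placeholders.
import numpy as np

def estimate_hours_to_failure(hours: np.ndarray,
                              readings: np.ndarray,
                              failure_threshold: float) -> float:
    """Fit a linear trend and project when it crosses the threshold."""
    slope, intercept = np.polyfit(hours, readings, deg=1)
    if slope >= 0:
        return float("inf")  # no downward degradation trend detected
    crossing = (failure_threshold - intercept) / slope
    return max(crossing - hours[-1], 0.0)

# Synthetic example: a reading drifting down by 0.5 units per 100 hours.
hours = np.arange(0.0, 1000.0, 10.0)
readings = 100.0 - 0.005 * hours + np.random.normal(0.0, 0.2, hours.size)
remaining = estimate_hours_to_failure(hours, readings, failure_threshold=90.0)
print(f"Estimated hours to failure: {remaining:.0f}")
```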

Another aspect is, for example, cleaning and lubricating the equipment. The intelligence can recommend or highlight to the operator when these operations are due. Those are just some of the examples we can integrate into the continuous casting process in order to optimize it.

More than one type of stakeholder is involved. We have an operator getting real-time or near real-time recommendations. We have a process engineer who is planning the bigger picture. We have a head of maintenance and a maintenance team who know exactly how and when to approach maintenance issues.

And on top of all that, we have the head of the plant, who sees reduced costs and optimized production thanks to predictive maintenance and the other use cases.

IT is another big stakeholder, especially at the beginning of the project. IT has to grant access to the data and the infrastructure, and to define the infrastructure and the requirements based on internal policy and security procedures.

How to Start With Continuous Casting Optimization?

Say I'm a steel mill and I want my continuous casting process to be digitalized. I want tools that help optimize it by leveraging the data, using machine learning, AI, you name it. Where do I start? How do I go about it?

The first step would be to discuss with the IT stakeholders and define the requirements based on their internal policies and procedures. Whether to use the cloud or not, or a hybrid approach through the edge, and how to address cybersecurity concerns - all these factors are the biggest part to discuss and define.

We also need to discuss some practical terms for establishing connections, whether through OPC UA or another protocol. But the key point is defining what we want to achieve and how we want to achieve it - what kind of infrastructure we want to set up.
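
To make that connectivity step concrete, here is a minimal sketch of reading a few process tags over OPC UA with the python-opcua package. The endpoint URL and node ids are invented placeholders; the real addresses and security settings would come out of the discussion with IT.

```python
# Minimal sketch: reading casting tags over OPC UA (python-opcua package).
# The endpoint and node ids below are hypothetical placeholders.
from opcua import Client

ENDPOINT = "opc.tcp://caster-plc.example.local:4840"  # placeholder address
TAGS = {
    "tundish_temperature": "ns=2;s=Caster.Tundish.Temp",   # placeholder ids
    "casting_speed":       "ns=2;s=Caster.Strand1.Speed",
}

client = Client(ENDPOINT)
try:
    client.connect()
    for name, node_id in TAGS.items():
        value = client.get_node(node_id).get_value()
        print(f"{name}: {value}")
finally:
    client.disconnect()
```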

You want an open architecture that can be reused in the future, where other components can be added with simplicity. Not a closed ecosystem, but something flexible that facilitates the integration of these new technologies rather than becoming a burden and something overly complex.

Now, let's say we've done the architecture. We've connected, and we're consuming the data. We've even created the necessary algorithms that help us process all the parameters, and built a predictive maintenance model.

We have people who have been working in the factory for ten-plus years. They have experience; they've dealt with these assets based on what they know. How do they move from something they know works 80% of the time to something we claim works 90%+ of the time? Something coming up on a screen might not intuitively make sense to them.

These changes are made gradually, so we don't go from zero to 100 in one day or three, but it is a change. So, the business experts evaluate everything that the intelligence does, everything we develop together. It has to make sense from a business point of view.

The results need to bring not only an ROI, which is the very first thing everybody is interested in, but they also have to make sense for the daily work. They have to ease the daily routine of the people in the plant. Everything goes through evaluation, testing, and trials.

Of course, there is also the case of false positives. Let's say we develop the intelligence and deploy it in real-time operation. The models will certainly need fine-tuning, so we will go through a period with some false positives. Still, with the help of the business and data experts, the model is fine-tuned until it no longer generates false positives and is integrated into the daily routine of the operators and engineers.
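
To illustrate that fine-tuning loop, the sketch below sweeps an alert threshold over historical anomaly scores that the business experts have labeled, and picks the lowest threshold that keeps the false-positive rate under an agreed budget. The scores, labels, and budget are synthetic assumptions for the example.

```python
# Minimal sketch: tuning an alert threshold against labeled history so the
# deployed model stays under an agreed false-positive budget. All data here
# is synthetic.
import numpy as np

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.2, 0.10, 950),   # normal operation
                         rng.normal(0.7, 0.15, 50)])   # confirmed incidents
labels = np.concatenate([np.zeros(950, bool), np.ones(50, bool)])

def tune_threshold(scores, labels, max_fp_rate=0.01):
    """Return the lowest threshold whose false-positive rate fits the budget."""
    for threshold in np.linspace(scores.min(), scores.max(), 200):
        alerts = scores >= threshold
        fp_rate = (alerts & ~labels).sum() / (~labels).sum()
        if fp_rate <= max_fp_rate:
            return threshold, fp_rate
    return scores.max(), 0.0

threshold, fp_rate = tune_threshold(scores, labels)
print(f"Chosen threshold {threshold:.2f}, false-positive rate {fp_rate:.3%}")
```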

Adopting the Change

One of the concerns raised by a steel manufacturer I had the pleasure of speaking to recently was that the business experts who need to be part of this ecosystem creation are not available for these kinds of projects. How important is it that they dedicate time to creating these models, so that the company can remain competitive in the market?

This question includes two points. The first is that people need to adopt this change. It is going to happen whether they want it or not. They need to adapt to this new environment, to this new system where human checks work alongside automation and data analytics.

Considering the other point: let's say we want to start the project. We define the project and the target. We have a roadmap with different milestones and phases. It depends on the project's complexity, but in general terms, the kickoff could take one or two days with all the stakeholders involved.

Next, consider three, maybe four hours per week from a project manager, who acts as the point of contact for the various stakeholders and collects any missing information. This time is used to check that all the necessary data is collected, expectations are met, and communication flows smoothly.

Ready-to-use or a Custom Solution?

A certain part of the market is out looking for ready-to-use, plug-and-play solutions that do everything, as opposed to custom solutions. How realistic is that, and are there benefits to one approach or the other?

In our experience, both approaches work, depending on the issue. We can't expect the automation of a complex issue or asset, such as a furnace, to come in plug-and-play form. Furnaces may be of different generations and collect different types or frequencies of data. In such cases, some customization or tailoring must be involved.

Some algorithms are ready to use and autonomous, for example, anomaly detection or predictive maintenance, whatever we want to call it. You simply connect the data or import the historical data, and the algorithm will transform and handle it to give you a meaningful output. It all depends on the complexity of the problem you want to tackle.
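
As one example of such a ready-to-use building block, the sketch below trains scikit-learn's IsolationForest on historical process data and flags anomalous new readings. The feature names and values are invented placeholders; any tabular export of sensor history would work the same way.

```python
# Minimal sketch: off-the-shelf anomaly detection with scikit-learn's
# IsolationForest. The features and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic stand-in for historical data: [mold_level_mm, casting_speed_mpm]
historical = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(5000, 2))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical)

# Score fresh readings: predict() returns 1 for normal, -1 for anomalous.
fresh = np.array([[50.5, 1.21],   # typical operation
                  [62.0, 0.70]])  # unusual combination of level and speed
print(model.predict(fresh))
```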