Autonomous Agents for Boosting Minerals Industry KPIs
Tuning, Re-tuning and Squeezing Process Setpoints
My career started on the floor of a cement factory, supporting engineers and technical personnel with the configuration and installation of VFDs and PLC systems. The advantage of starting from the factory floor, bottom-up, was that it eventually led to a better understanding of production downtime, of sudden and unexplained machinery problems and, most importantly, of the continuous attempts to fix a valve here or re-calibrate a PID there. All of this was for the sake of better and sustained production, forgetting that over-squeezing machinery somehow leads to more problems than solutions.
Tricks in Engineering Are Fraud
But in the minerals industry, challenges arise when kiln burning temperatures reach levels of 1200°C, when ash clogs turbo separators, or when the milling compartment is under constant threat of being clogged by moist material. Case in point: I eventually moved on to providing some smart (stupid) software in order to have a chance of improving things, based on soft sensors such as logistic regression and on predictive controllers imposing model-based actions automatically on production. Unfortunately, problems again arose in unexplained ways, and I had to take the blame. Tricking industrial processes doesn't work in the long run; it demands continuous and exhausting tuning of software controllers.
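As a rough illustration of the kind of soft sensor mentioned above, here is a minimal sketch of a logistic regression flagging a clogging risk from process measurements. The features, data and threshold are invented for illustration; this is not the software that was actually deployed.

```python
# Illustrative soft sensor: logistic regression predicting a clogging event.
# All features and numbers below are made up for the sake of the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: [separator differential pressure, material moisture %]
X = rng.normal(loc=[50.0, 3.0], scale=[10.0, 1.0], size=(500, 2))
y = (X[:, 0] + 5 * X[:, 1] + rng.normal(0, 5, 500) > 70).astype(int)  # 1 = clog risk

soft_sensor = LogisticRegression().fit(X, y)
p_clog = soft_sensor.predict_proba([[65.0, 4.0]])[0, 1]
if p_clog > 0.8:
    # The kind of automated, model-based action that kept needing re-tuning
    print("High clog risk: reduce feed / adjust separator speed")
```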
Short-term solutions for simple domains; long-term solutions for complex domains.
Autonomous Agents for the Minerals Industry
Then I moved to what I am doing right now. For the minerals industry, the plan starts by providing decision-making with our autonomous agents on kiln burning as a first step; after that, the focus shifts to coolers, milling and ash separation. Kiln burning exhibits a lot of volatility, and each piece of data has to be examined, if not every minute then at least every five. The complexity in kilns and coolers cannot be handled through transfer functions or classical optimization for one simple reason: they cannot handle fat tails, in other words, rare events on the left side of the distribution. Further, on a personal level, I liked the idea that, thanks to super-computing, we are now able to build a probabilistic agent that proposes advanced actions and short/long-term strategies for any industrial process inside a cement plant without interfering much with control systems or monitoring rooms, i.e. there is no need to build any transfer function or finite impulse response model, and no need to fight with operators.
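To make the fat-tail point concrete, here is a small illustrative comparison, with made-up numbers, of how much a Gaussian assumption underestimates a rare low-temperature excursion compared with a heavy-tailed model:

```python
# Minimal illustration (not the production agent): a Gaussian/linear view
# severely underestimates the rare events that matter in kiln data.
# The mean, spread and threshold below are invented for illustration.
from scipy import stats

mu, sigma = 1200.0, 15.0           # hypothetical burning-zone mean and std (degC)
threshold = mu - 5 * sigma         # a rare low-temperature excursion

# Probability of dropping below the threshold under a normal assumption...
p_normal = stats.norm(mu, sigma).cdf(threshold)
# ...versus a heavy-tailed Student-t with the same location and scale (df=3).
p_heavy = stats.t(df=3, loc=mu, scale=sigma).cdf(threshold)

print(f"P(drop), normal assumption : {p_normal:.2e}")
print(f"P(drop), heavy-tailed model: {p_heavy:.2e}")   # orders of magnitude larger
```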
Building the Feedback/Action vector
So the idea, again, is simply to state the main or independent measurement parameters that will form our feedback/action vector, e.g. temperature, pressure, rotation speed of the kiln or mills, megawatt power, etc., and then to highlight the action variables within that same vector. Those variables are usually adjusted dynamically by either human intervention or automatic control; the only difference here is that they won't be subject to a linear model but to a probabilistic distribution, due to the complexity exhibited. Again, probabilities over actions are essential here because of the multiple-decision nature of complex industrial processes.
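As a minimal sketch, assuming a simple tag-based configuration (the names and the `is_action` flag below are illustrative, not the actual product format), the feedback/action vector could be declared like this:

```python
# Hypothetical declaration of a feedback/action vector.
from dataclasses import dataclass

@dataclass
class Variable:
    name: str        # DCS tag or measurement name
    unit: str
    is_action: bool  # True if the variable can be actuated (setpoint), False if feedback only

feedback_action_vector = [
    Variable("kiln_burning_zone_temp", "degC", is_action=False),
    Variable("preheater_pressure",     "mbar", is_action=False),
    Variable("kiln_rotation_speed",    "rpm",  is_action=True),
    Variable("mill_motor_power",       "MW",   is_action=False),
    Variable("fuel_feed_rate",         "t/h",  is_action=True),
]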
Configuring the Target
Next, engineers proceed to configure their target: the same vector structure as the feedback/action vector, except that some variables, the ones subject to a specific target, are highlighted as such. If an engineering team decides to increase production to a specific level, it suffices to enter the value of the expected production (under reasonable judgment, of course) and then move on to the other variables. The remaining variables, unless they are directly affected by the targeted ones, can stay arbitrary or unchanged; RPM, for example, probably doesn't need a specific target but rather minimum and maximum domain limits.
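Continuing the same sketch, a hypothetical target configuration could look like the following, where a variable either carries an explicit target value or just its domain limits:

```python
# Hypothetical target configuration, mirroring the feedback/action vector.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetSpec:
    name: str
    target: Optional[float] = None   # desired value, if this variable is targeted
    low: Optional[float] = None      # domain limits for non-targeted variables
    high: Optional[float] = None

target_vector = [
    TargetSpec("clinker_production", target=210.0),        # t/h, the variable being pushed
    TargetSpec("kiln_rotation_speed", low=2.8, high=4.5),   # rpm, only bounded
    TargetSpec("kiln_burning_zone_temp", low=1150.0, high=1250.0),
]
```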
Initiating the Training Process
In the next step, once the configured vectors are ready and the data has been collected, either from the historian database or through an open connection to the DCS, the dynamic training and shuffling of vectors starts in a super-computing environment, until evaluation shows a tangible autonomous agent ready to be pushed to the production lines.
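A very rough sketch of that loop is shown below, with stubbed-out placeholders for data loading, training and evaluation; none of these stand-ins represent the actual pipeline.

```python
# Illustrative training loop with placeholder helpers.
import random

def load_history(variables):
    # Placeholder: in reality the rows come from the historian DB or the open DCS connection.
    return [{v: random.gauss(0, 1) for v in variables} for _ in range(1000)]

def train_agent(rows, variables, targets):
    # Placeholder "agent": just per-variable means over the shuffled history.
    return {v: sum(r[v] for r in rows) / len(rows) for v in variables}

def evaluate_agent(agent, rows, targets):
    # Placeholder score in [0, 1]; a real evaluation would check target attainment.
    return random.random()

def train_until_ready(variables, targets, score_threshold=0.9, max_rounds=50):
    history = load_history(variables)
    for _ in range(max_rounds):
        random.shuffle(history)                        # dynamic shuffling of the collected vectors
        agent = train_agent(history, variables, targets)
        if evaluate_agent(agent, history, targets) >= score_threshold:
            return agent                               # tangible agent, ready for the production line
    raise RuntimeError("Evaluation never passed; re-assess the feedback/action vector.")

agent = train_until_ready(["kiln_rotation_speed", "fuel_feed_rate"], targets={})
```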
Training, correction & adjustments
Now, in some cases, training does not produce a satisfactory build of the expected agent, usually because some important variables are missing. This is not much of a problem, given the speed at which agents can be configured and trained: the fix is simply to re-assess the feedback/action vector, include the new variables, and repeat the training process. In most cases, as a service company, we provide guidance by suggesting the crucial modifications without needing to interfere on-site.
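Continuing the earlier sketches, the correction step amounts to extending the vector and retraining; the added variable name below is illustrative only.

```python
# Hypothetical correction step: add the suspected missing variable and retrain.
feedback_action_vector.append(Variable("secondary_air_temp", "degC", is_action=False))
agent = train_until_ready([v.name for v in feedback_action_vector], targets=target_vector)
```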
Conclusion
Autonomous decision-making agents can offer a lot to the minerals industry because of its inherent complexity, whether in kiln burning or, especially, in the high uncertainty present in ash separation. They can also be applied to waste heat recovery and energy management processes.