Dead Time Definition in Control Systems

Implications for control

The dead time, θp, is only large or small relative to Tp, the time constant ("the clock") of the process. Tight control becomes more difficult when θp > Tp. As the dead time grows much larger than Tp, a dead-time compensator such as a Smith predictor offers advantages. A Smith predictor incorporates a dynamic process model (such as a first-order-plus-dead-time, FOPDT, model) directly into the controller architecture. It takes more engineering time to design, implement and maintain, so be sure the loop is important to safety or profitability before undertaking such a project.

Finally, the digitization, readout and storage of an event, especially in detection systems with a large number of channels such as those used in modern high-energy physics experiments, also contribute to the total dead time. To mitigate the problem, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce readout rates. [3]

About the author

Gregory K. McMillan, CAP, is a retired Senior Fellow of Solutia/Monsanto, where he worked in engineering technology on improving process control. Greg was also an affiliate professor at Washington University in St. Louis. Greg is a member of the ISA; he received the ISA Kermit Fischer Environmental Award for pH Control in 1991 and Control Magazine's Engineer of the Year Award for the Process Industry in 1994, was inducted into Control Magazine's Process Automation Hall of Fame in 2001, and was recognized by InTech Magazine in 2003 as one of the most influential innovators in automation.
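The Smith predictor structure described above can be sketched in a few lines. This is a minimal discrete-time sketch, not the article's implementation: the FOPDT process numbers, the PI tuning, and the perfect-model assumption are all illustrative. The key idea is that the feedback signal is the measured PV plus the difference between the undelayed and the delayed model outputs, so the PI controller effectively sees the process without its dead time.

```python
from collections import deque

def simulate_smith(kp=2.0, tau=10.0, theta=20.0, dt=1.0,
                   kc=0.5, ti=10.0, setpoint=1.0, steps=400):
    """Setpoint response of a PI controller wrapped in a Smith predictor.

    Process: first-order lag (gain kp, time constant tau) plus dead time
    theta. The internal model is assumed perfect, so the controller
    regulates the undelayed model output. Numbers are illustrative.
    """
    nd = int(round(theta / dt))                # dead time in samples
    co_pipe = deque([0.0] * nd, maxlen=nd)     # dead-time buffer for CO
    model_pipe = deque([0.0] * nd, maxlen=nd)  # dead-time buffer for model PV
    pv = pv_model = integral = 0.0
    trace = []
    for _ in range(steps):
        delayed_co = co_pipe[0]
        pv_model_delayed = model_pipe[0]
        # Smith feedback: measured PV + (undelayed - delayed) model output
        feedback = pv + (pv_model - pv_model_delayed)
        error = setpoint - feedback
        integral += error * dt
        co = kc * (error + integral / ti)      # PI control law
        # actual plant: first-order lag driven by the delayed CO
        pv += (dt / tau) * (kp * delayed_co - pv)
        # internal model: same lag driven by the current (undelayed) CO
        pv_model += (dt / tau) * (kp * co - pv_model)
        co_pipe.append(co)
        model_pipe.append(pv_model)
        trace.append(pv)
    return trace

trace = simulate_smith()
```

The PV stays flat for one full dead time, then rises like the dead-time-free closed loop would; with a perfect model the PI tuning can be chosen as if the dead time were not there, which is the whole attraction of the scheme.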

He received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been a monthly Control Talk columnist for Control Magazine since 2002. Currently, Greg is a part-time modeling and control consultant in process simulation technology at Emerson Automation Solutions, specializing in using the virtual plant to explore new opportunities. He spends most of his time writing, teaching and leading the ISA mentoring program he founded in 2011.

Shown in red is the response of our model-based controller alongside the PI controller. The response shows that model-based controllers can achieve a slightly better result than our PI controller, but this comes at the expense of robustness. A detailed discussion of robustness plots is the subject of another presentation, "Robustness Plots – The Other Side of the PID Tuning Story", which can be found on the articles page. Let's see what the sacrifice is. In a dead-time process, the controller makes a change, then waits, waits, waits until the dead time has passed. Only then does the controller discover how its change affected the process variable.

It's like trying to drive a car blindfolded, with a passenger telling you what to do. You have to go very slowly. The first approach to gaining better control of a dead-time process is to reduce the dead time itself. Simply moving a probe closer to the valve can sometimes do this. But sometimes there is nothing you can do to reduce the dead time. So what can you do? The advanced controller that has proven very effective on dead-time processes is a proportional-integral (PI) controller with a special combination of tuning parameters. It is important not to use derivative action unless there is a first-order lag or response in the process: using derivative on a pure dead-time process would make the control loop unstable. A system element in which dead time occurs is called a dead-time element. When significant dead time is present, a poorer control response can be expected, since the control parameters are more difficult to tune. Dead time is not the same as lag time.

To test our PI controller with this special tuning, we applied it to a process with 4 hours of dead time and a gain of 1. Here is the response to a load disturbance. The lower trace shows how the controller output moves to counteract the disturbance; the upper trace shows how the process variable responds. Where: Kc = controller gain, a tuning parameter; Ti = reset time, a tuning parameter. The load disturbance enters at time zero, but because of the dead time, the process variable does not see the upset until 240 minutes, or 4 hours, later. At that point the controller responds, but another 240 minutes of dead time must pass before the process variable reflects that controller action. Given the dead time, this is the fastest possible response for ANY feedback controller. Another dead time elapses before the process has settled. Note also that the minimum dead time, θp, in any real control implementation is the loop sample time, T. The dead time can certainly be greater than T (and it usually is), but it cannot be smaller. So if our model fit gives θp < T (a dead time less than the controller's sample time), we must recognize that this is an impossible result.
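The load-disturbance test above can be reproduced with a short simulation. The process numbers (gain 1, dead time 240 minutes, load entering at time zero) are the article's; the PI control law is the standard CO = Kc·e + (Kc/Ti)·∫e dt, but the specific Kc and Ti values below are illustrative stand-ins, since the article's tuning correlations are not given here.

```python
from collections import deque

def simulate_pi_deadtime(kp=1.0, theta=240.0, dt=1.0, kc=0.2, ti=120.0,
                         setpoint=0.0, steps=4000):
    """Load-disturbance response of a PI loop on a pure dead-time process.

    kp=1 and theta=240 min match the article's test; kc and ti are
    illustrative tuning values, not the article's correlations.
    """
    nd = int(round(theta / dt))             # dead time in samples
    co_pipe = deque([0.0] * nd, maxlen=nd)  # dead-time buffer for CO
    integral = 0.0
    trace = []
    for k in range(steps):
        load = 1.0 if k >= nd else 0.0      # unit load step at t=0, seen after theta
        pv = kp * (co_pipe[0] + load)       # pure dead-time process
        error = setpoint - pv
        integral += error * dt
        co = kc * error + (kc / ti) * integral  # PI law: Kc*e + (Kc/Ti)*integral(e)
        co_pipe.append(co)
        trace.append(pv)
    return trace

trace = simulate_pi_deadtime()
```

The trace shows exactly the behavior the article describes: the PV is undisturbed for 240 minutes, jumps when the load arrives, sits there for another full dead time while the controller's correction is in transit, and only then begins to return to setpoint.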

The best course of action in such a situation is to substitute θp = T everywhere when using our controller tuning correlations and other design rules. To make our advanced dead-time controller adaptive, you need to be able to measure or infer the dead time from a variable such as the speed of a machine or a stream. With the new dead time, recalculate the new integral action and insert it into the controller. The key is being able to measure or infer the dead time. This type of adaptive controller is much more robust than one relying on an automatic identification technique. Attempting to identify dead time with a least-squares fit, or another modeling method that examines normal operating data, is prone to gross modeling errors. Identification techniques based on setpoint changes, however, are accurate. Still, the best way to get better control of a dead-time process is to reduce the dead time.
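The adaptive scheme above — measure or infer the dead time, then recompute the integral action — can be sketched as a small retuning function. The clamp to the sample time is the article's rule (θp can never be identified as smaller than T); the tuning rule itself is a hypothetical lambda-style rule for a dead-time-dominant process, used here only as a placeholder because the article's own correlations are not given.

```python
def retune_pi(theta_measured, kp=1.0, sample_time=1.0, lam_ratio=1.0):
    """Recompute PI tuning from a measured or inferred dead time.

    Article's rule: an identified dead time can never be below the loop
    sample time T, so clamp it. The tuning rule below (reset time tied
    to the dead time, integral gain Kc/Ti = 1/(Kp*(lam + theta))) is a
    hypothetical lambda-style stand-in, not the article's correlations.
    """
    theta = max(theta_measured, sample_time)  # enforce theta_p >= T
    lam = lam_ratio * theta                   # target closed-loop time constant
    ti = 0.5 * theta                          # reset time follows the dead time
    kc = ti / (kp * (lam + theta))            # keeps Kc/Ti = 1/(Kp*(lam+theta))
    return kc, ti
```

In use, each time the machine or stream speed changes, the inferred dead time is passed through this function and the new Kc and Ti are written to the controller.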

A properly tuned PI controller provides a fast, stable response and can be made adaptive. There are other tricks that can help the response. For example, applying a small filter to the process variable can smooth the response. If the process also has a small lag, you CAN use a small amount of derivative, very carefully. In a process with a longer lag, derivative action can often help the response. "Derivative: The Good, the Bad and the Ugly" is the subject of another presentation available on the articles page. ExperTune's PID tuner software makes all this easy and automatically adjusts the PI or PID tuning to your process, whether it is dominated by dead time, dead time with lag, second-order dead time, integrating dead time, or integrating dead time with lag.

Dead time is also called transport delay or transport lag. In control systems, it is the time that elapses between the moment an input signal is applied and the moment that signal, having propagated through the controlled system, appears at the output. Until the dead time has elapsed, no change materializes in the output signal.

Causes of dead time. Dead time can occur in a control loop for a number of reasons. We analyze step-test data here to simplify the calculation, but note that the dead time describes "how much delay" occurs between a change in CO and the first response of the PV to that change.
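The small PV filter mentioned above is typically a first-order (exponential) filter. This is a generic sketch, not the article's code; the parameter names are mine. The design trade-off is that a small filter time constant tau_f smooths measurement noise cheaply, while too large a tau_f behaves like extra lag and slows the loop.

```python
def filter_pv(samples, tau_f, dt):
    """First-order (exponential) filter on the process variable.

    tau_f: filter time constant; dt: sample time. Illustrative helper,
    not from the article.
    """
    alpha = dt / (tau_f + dt)   # discrete filter coefficient, 0 < alpha <= 1
    y = samples[0]              # initialize at the first measurement
    out = []
    for x in samples:
        y += alpha * (x - y)    # move a fraction alpha toward the new sample
        out.append(y)
    return out
```

Applied to the raw PV before the controller, this attenuates high-frequency noise that would otherwise be amplified by proportional (and especially derivative) action.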

Sometimes dead-time problems can be solved by a simple design change: it may be possible to place a sensor closer to the action, or to switch to a faster-responding device. In other cases, the dead time is a fixed feature of the control loop and can only be addressed by detuning the controller or by implementing a dead-time compensator (e.g., a Smith predictor). A detector or detection system can be characterized as having paralyzable or non-paralyzable behavior. [1] In a non-paralyzable detector, an event that occurs during the dead time is simply lost, so with increasing event rate the detector reaches a saturation rate equal to the inverse of the dead time. In a paralyzable detector, an event occurring during the dead time is not merely missed: it restarts the dead time, so with increasing rate the detector reaches a saturation point where it is unable to record any events at all.
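The two detector behaviors above correspond to the standard observed-rate formulas: for true event rate n and dead time τ, a non-paralyzable detector records m = n / (1 + nτ), saturating at 1/τ, while a paralyzable detector records m = n·e^(−nτ), which peaks at 1/(e·τ) and then falls toward zero. A minimal sketch (function names are mine):

```python
import math

def rate_nonparalyzable(true_rate, dead_time):
    """Non-paralyzable detector: events arriving during the dead time are
    lost; the observed rate saturates at 1/dead_time as true_rate grows."""
    return true_rate / (1.0 + true_rate * dead_time)

def rate_paralyzable(true_rate, dead_time):
    """Paralyzable detector: each event during the dead time restarts it;
    the observed rate peaks at 1/(e*dead_time), then falls toward zero."""
    return true_rate * math.exp(-true_rate * dead_time)
```

With a 1 µs dead time, for example, a non-paralyzable detector can never record more than about 10^6 counts per second, whereas a paralyzable one actually records fewer and fewer counts once the true rate exceeds 10^6 per second — the saturation point described above.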
