When optimizing operating costs at a production facility, the bias is toward what can be measured. For example, the cost of chemical additives for a process is easily measured based upon the amount consumed. For components such as the energy to produce steam, the costs may not be so clear.
Emerson’s Barbara Hamilton shared a couple of stories with me about how cost optimizations changed once unknown costs could be determined.
The first example, from a pulp mill, was one where consistent bleaching of the pulp stock was accomplished by keeping the inlet temperature to the oxygen delignification tower constant. Oxygen was the bleaching chemical in this case.
Barbara noted that the temperature operating target is determined by the process designer, but small fluctuations in setpoint are up to the operators’ discretion. Setting the temperature up a few degrees can save bleaching chemical and setting it down a few degrees can save steam.
The Pulp Mill Operations Manager knows exactly how much the bleaching chemicals cost, but the impact of steam is not as “real”. Even if there are internal charges from the Powerhouse, these are typically billed monthly and do not address incremental costs. The impact of operating temporarily away from the design inlet temperature is not readily apparent.
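The trade-off Barbara describes can be made concrete with a small calculation. The sketch below is purely illustrative (the steam and chemical sensitivities are made-up placeholder numbers, not mill data), but it shows why the comparison cannot be made until the incremental cost of steam is actually known:

```python
# Illustrative only: the $/day-per-degree sensitivities below are
# made-up placeholders, not real mill data.

def net_daily_cost(offset_deg_c, steam_cost_per_deg=120.0, chem_cost_per_deg=150.0):
    """Net daily cost ($/day) of running offset_deg_c away from the
    design inlet temperature of the oxygen delignification tower.

    A positive offset (hotter) spends more on steam but saves bleaching
    chemical; a negative offset (cooler) saves steam but spends more on
    chemical. A negative return value is a net saving.
    """
    steam_cost = offset_deg_c * steam_cost_per_deg       # > 0 when hotter
    chemical_cost = -offset_deg_c * chem_cost_per_deg    # > 0 when cooler
    return steam_cost + chemical_cost

print(net_daily_cost(2.0))  # -60.0 with these placeholder prices: a net
                            # saving, but only computable once the
                            # incremental steam cost is known
```

Until the Powerhouse can put a number on `steam_cost_per_deg`, the operator has only half of this equation.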
If you subscribe to or follow many of the process instrumentation and automation publications, web sites, and/or social channels, you know that the Internet of Things (IoT), also known as the Industrial Internet of Things (IIoT), is a frequent topic of conversation. Microprocessor-embedded sensors and final control elements have a long history in our industry, and the Internet opens up even greater possibilities to take advantage of the data they collect and process. Emerson’s Pervasive Sensing strategies embody IoT by combining innovative sensors with analytical software built on human-centered design (HCD) principles, coupled with subject-matter expertise.
I came across a great description of IoT and its application in Pervasive Sensing strategies by Emerson’s Bob Karschnia, in terms of what these advancing technologies mean for process manufacturers and producers.
Bob noted that the additional information provided by these pervasive sensing devices provides ways to automatically improve performance, safety, reliability, and energy efficiency in production facilities.
These improvements occur as a result of:
- Collecting data from sensors (things), much more cost-effectively than ever before because they are battery-powered and wireless
- Interpreting this data strategically, using subject matter expertise to effectively analyze the data, either locally or remotely
- Presenting actionable information, built on task-oriented HCD principles, to the right person—either plant personnel or supplier-provided experts, and at the right time
- Leading to performance improvements, when personnel take corrective action
IoT starts at the sensor level, where pressure, level, flow, temperature, vibration, acoustic, position, analytical and other sensors collect data and send this collected information to control and monitoring systems via wired and wireless networks.
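As a rough sketch of that collect, interpret, present flow (not Emerson's actual software; the sensor names and alert limits below are made-up assumptions), the logic might look like:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    kind: str       # e.g. "vibration", "temperature"
    value: float

def interpret(readings, limits):
    """Apply subject-matter-expert limits to raw sensor data and return
    only the actionable alerts worth presenting to a person."""
    alerts = []
    for r in readings:
        limit = limits.get(r.kind)
        if limit is not None and r.value > limit:
            alerts.append(f"{r.sensor_id}: {r.kind} {r.value} exceeds limit {limit}")
    return alerts

# Collect (here, hard-coded in place of wireless battery-powered sensors):
readings = [Reading("P-101", "vibration", 7.2),
            Reading("P-101", "temperature", 64.0)]
limits = {"vibration": 6.0, "temperature": 80.0}
print(interpret(readings, limits))  # only the vibration excursion is flagged
```

The point of the last step is the same as in the bullet list above: raw data is filtered down to the few items that call for corrective action.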
Process measurement devices are installed where required to monitor, control, and safely shut down the process. But often, additional measurements combined with the existing ones can help to improve several areas of process operations performance.
I saw a chart from Emerson’s Jonas Berge which highlighted four areas for potential improvement—reliability and maintenance, energy efficiency and loss control, health, safety & environmental (HS&E), and process operations productivity.
Some applications to consider for improvement in reliability and maintenance include pumps, blowers, air-cooled exchangers (fin-fans), non-process compressors, cooling towers, corrosion monitoring on pipes and vessels, valves, instrumentation, vibration, temperature, and acoustic testing rounds.
Energy losses can happen in many areas including water, compressed air, gas & other fuels, electricity, and steam. Additional measurements can help detect steam trap failures, heat exchanger fouling, cooling tower fan issues, relief valve seat leakage, and unit-wide energy consumption abnormalities.
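For example, a steam trap check often combines an acoustic reading with a surface temperature from wireless sensors. The thresholds in this sketch are illustrative assumptions, not field calibrations:

```python
def trap_status(acoustic_db, surface_temp_c):
    """Rough steam trap health check from two wireless measurements.

    A cold trap suggests it has failed closed (or the line is out of
    service); a hot trap with sustained high acoustic energy suggests
    live steam blowing through (failed open). All thresholds here are
    illustrative placeholders.
    """
    if surface_temp_c < 100.0:
        return "cold - possibly failed closed"
    if acoustic_db > 80.0:
        return "hot and loud - possibly failed open (blowing steam)"
    return "normal cycling"

print(trap_status(85.0, 150.0))  # hot and loud: likely losing live steam
```

A failed-open trap wastes steam continuously, which is exactly the kind of incremental energy loss that goes unnoticed without these extra measurements.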
Although health, safety, and environmental concerns extend to people and work processes, additional measurements can assist in operating safely and in regulatory compliance. These measurement applications include emergency safety shower and eyewash station monitoring, manual and bypass valve position monitoring, relief valve and rupture disk release monitoring, shutdown valve position confirmation, hydrocarbon leak detection, and effluent discharge.
If you’re like me, you may have wondered why, with all the U.S. shale oil production, there are calls to change federal law to allow exports even though the U.S. is a large oil importer.
The answer comes to me in an excellent Chemical Engineering Progress (CEP) article, Working with Tight Oil, by Emerson’s Tim Olsen. The article looks at the impact of tight oil, also known as light tight oil, from shale on refineries.
These refineries have been updated and modernized over the years to process heavier, more sour grades of crude oil. Tim notes that refineries are:
…designed to process crude oil of a particular composition and produce products with specified properties, with some flexibility based on the capabilities of equipment such as pumps, heat exchangers, and the particular catalysts within the reactors.
Crude oil blending of two or more sources is performed:
…if a single crude oil with the required composition is not available or economical.
The classification of sweet or sour crude oil is determined by its sulfur content, with sweet crudes containing less than 0.5% sulfur and sour crudes containing greater than 0.5%. The classification of light or heavy crude is measured by its specific gravity.
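The sulfur threshold above comes straight from the article; the specific-gravity side is usually expressed as API gravity. In this sketch, the API gravity formula is the standard industry conversion, while the example crude values are typical published figures rather than anything from Tim's article:

```python
def classify_sulfur(sulfur_pct):
    """Sweet/sour split at 0.5% sulfur, per the article."""
    return "sweet" if sulfur_pct < 0.5 else "sour"

def api_gravity(specific_gravity):
    """Standard conversion from specific gravity (at 60 degF) to
    degrees API; higher API means lighter crude."""
    return 141.5 / specific_gravity - 131.5

# West Texas Intermediate is commonly quoted around 0.24% sulfur
# and a specific gravity near 0.827:
print(classify_sulfur(0.24))           # sweet
print(round(api_gravity(0.827), 1))    # about 39.6 degrees API
```

Light tight oils sit at the high-API, low-sulfur end of this scale, which is why they are a mismatch for refineries tuned to heavier, more sour grades.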
Author: Dennis Tkacs
Why do computer-driven systems fail? It’s a subject I want to develop over the next few blog posts. And by fail I don’t mean technically, but rather: why do they fail to meet the business cases under which their procurement was initially justified? There are multiple reasons, and none have to do with technology. Rather, they have to do with governance, and to fully understand the implications we have to start with the evolution of computer and control systems.
Place a DCS [distributed control system] and an IT system at a distance and it’s difficult to tell the two apart. Both have made extensive use of commercial off-the-shelf technology that includes computing platforms, operating systems, displays, and networks.
True, the DCS incorporates specialized components such as controllers and I/O that are unique to its specific mission of controlling a process plant but, by and large, both leverage common technologies. There are differences, though: unseen differences that are embedded in the governance policies under which each is procured, implemented, and maintained; policies that are deeply rooted in a past when control and IT systems had little in common.
Control systems are descended from hardware-centric loop controllers that were densely mounted on panels and desks in a control room. Given this hardware centricity, inflexibility, and high installation cost, such systems were expected to last 15-20 years, and well they could. But then things changed.
With the advent of the DCS in the late 1970s, single-loop controllers morphed into faceplate displays on a monitor and algorithms executing in a remote (distributed) multi-loop rack, linked together via a proprietary data highway. Functions that had previously been accomplished in hardware were implemented in software. Signals that had moved over dedicated wires now used serial communication.