CHEMICAL & ENGINEERING NEWS, MAY 18, 1964
Direct Digital Control Nears Readiness

A decade from now, the transition of the process industries to a new control philosophy could seem, in retrospect, unusually orderly. If so, a good deal of credit can go to the guidelines for manufacturers set down by the Users' Workshop on Direct Digital Computer Control, which just ended its second meeting at Princeton, N.J. The importance of the guidelines to manufacturers, and an idea of the potential impact of direct digital control (see box below), can be seen in one user's statement, which is echoed by the rest: If DDC proves out technically and economically in its final testing, as expected, it will automatically be considered for all future plants.

The workshop first met a year ago at Princeton, where it mapped out the basic guidelines. Since then, a lot has happened behind the scenes. Manufacturers have generally adopted the guidelines as a basis for their developments. Several systems have been announced, and several more are in the offing. Users have put a year of development and testing behind them, and two full-scale, final-test installations (by Monsanto and Esso) will be in operation before the year is out.

The users group, 35 representatives from 25 companies, thus met this year with greater background knowledge and a firmer idea of what is needed. The result is that although the basic guidelines remain the same, they have been modified in some cases and have generally been spelled out in greater detail. The common threads running throughout are reliability and economics.

From the users' standpoint, computers that can be used for various types of process control fall into four broad categories (a rough cost comparison follows the list):

• Type I, a fixed-program machine capable of handling the control equations for 20 to 150 control loops. It would be able to perform perhaps half again as many calculations to handle such operations as cascading, where one control-loop signal is used to influence a second loop. It would provide for alarms and graphic display, but no recording. It would be compatible with elements of Types II, III, and IV, for add-on capability. Cost of the machine would likely run about $600 to $700 per control loop, and up to $1,000 per loop for the smaller sizes.

• Type II, a general-purpose, stored-program machine. It could handle special control functions in addition to the basic control-loop equations. It would typically handle 14-bit words and could have up to 16,000 words of core memory. Cost would be $50,000 to $100,000 ($700 to $1,000 per loop).

• Type III, a general-purpose machine capable of simple optimization using algebraic equations for 15 to 20 variables. It would handle 18- to 24-bit words and have core and drum memories of up to 32,000 words. Cost would be upward of $150,000.

• Type IV, a general-purpose machine that could handle all optimization, including linear programing. It would handle 18- to 24-bit words, have core and drum memories for up to 100,000 words, and cost upward of $250,000.
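The per-loop and whole-machine prices imply how many loops a given machine is sized for. As a minimal sketch (our arithmetic; the article quotes only the 1964 prices, and the layout of the data is our assumption), dividing the Type II machine cost by its per-loop cost gives an implied size of roughly 50 to 140 loops:

```python
# Rough comparison using only the 1964 dollar figures quoted above.
# The dictionary layout and derived loop counts are illustrative
# assumptions, not anything specified by the workshop.

categories = {
    # name: (machine cost low, machine cost high, $/loop low, $/loop high)
    "Type II": (50_000, 100_000, 700, 1_000),
}

for name, (cost_lo, cost_hi, loop_lo, loop_hi) in categories.items():
    # Implied loop counts: machine cost divided by cost per loop.
    fewest = cost_lo // loop_hi   # cheapest machine, priciest loops
    most = cost_hi // loop_lo     # priciest machine, cheapest loops
    print(f"{name}: about {fewest} to {most} loops implied")
```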
Optimization computers that have been in use for several years are typically Type III. Scientific computers are typically Type IV. The computers just becoming available for DDC fall under Type II. Type I is not yet available.

Draw the Line. Users are unified in drawing the line for DDC between Type II and Type III. The original guidelines call for DDC computers with an on-line availability of 99.95% (about four hours of downtime per year, occurring about once a year). Optimizing computers have been giving an on-line availability of 99.5% (about 40 hours of downtime per year).
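Those downtime figures follow directly from the availability percentages over an 8,760-hour year; as a quick check (our arithmetic, with the article's rounding):

\[
(1 - 0.9995) \times 8760\ \text{h} \approx 4.4\ \text{h/yr}, \qquad
(1 - 0.995) \times 8760\ \text{h} \approx 43.8\ \text{h/yr}
\]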
[Photo: AVAILABLE. DDC computers, such as this Westinghouse Prodac 50, are now becoming available.]
Reliability, however, is hard to define. Although a Type III could theoretically be designed for 99.95% availability, the feeling is that the greater number of components involved makes the statistical possibility of failure that much greater. Thus users do not want Type III or Type IV machines designed for DDC.

Above the line, however, users are far from unified. About two thirds would go directly to a Type II computer for DDC in a new plant. The other third would take Type I, or a combination of Type I with a small general-purpose add-on computer. The add-on could be a Type II main frame (central processing portion and memory only).
Behind this split lie differences in individual DDC philosophies, coupled with opinions on how reliable is reliable. At one extreme, the aim with DDC is merely direct replacement of conventional analog loops at less cost. This means a relatively simple Type I computer. At the other extreme, DDC is justified on the basis of improved control, not on the basis of hardware economics alone. This means a Type II computer, which can handle control functions of greater complexity.

Reliability, or rather opinions on its relative importance, comes into play at this point. To a large extent, these opinions are colored by the types of processes a company runs. In all cases, users want manual backup for the computing system, should it fail. But emergency operation of valves and other control elements doesn't necessarily mean that a process can be controlled while the computer is down; often it just means an orderly shutdown. Some processes can go for a number of hours under complete manual control; others cannot last much more than five minutes. The more processes of the latter type a company has, the more likely it is to emphasize reliability.

Thus, some users want 100% availability, at least in some minimum backup form, though it may be less than the best operating condition.
They feel, therefore, that the best way to achieve this is to hold the number of computer elements involved in critical control functions to a minimum, thus also holding to a minimum the statistical possibility that critical elements will fail. Add-on computing equipment would be used for the less critical functions. This weights the balance toward Type I, or toward Type I with a small general-purpose computer added.

Others, however, feel that the reliability of a computer with Type II capability (99.95% availability) is good enough, and that they can possibly save money by getting the entire control job in one package. Since the capability of a Type II is needed to begin with, this is the route many will take.

Users again unite in rejecting redundancy as an answer to reliability. The idea here, suggested by manufacturers, is to use two computers with an automatic switchover to the second computer should the first fail. By eliminating manual backup, it might be possible to break even in cost at about 120 loops, even though two computers are involved. But users want manual backup in any case and aren't willing to pay the added cost for what is, in effect, three systems.
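The minimum-elements argument is the familiar series-reliability effect: if critical control depends on n elements that must all work, each available a fraction a of the time, overall availability falls as n grows. A rough illustration (the per-element figure is our assumption, chosen only to show the trend):

\[
A_{\text{system}} = a^{\,n}, \qquad 0.9995^{10} \approx 0.995, \qquad 0.9995^{100} \approx 0.951
\]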
Characterize Control. Besides specifying further the use of DDC computers, users also characterized various aspects of current control practice that will influence computer design. For instance, there is a trend toward greater use of cascade, with about 10 to 15% of control loops currently involved in this type of control (a sketch of the arrangement follows this article). Ratios of total process inputs to total outputs (indicating, recording, control signals, and the like) are drifting higher, moving typically toward 4:1. The ratios of inputs to outputs used directly in control tend toward 2:1.

Optimization practice will also affect DDC computer design, since in many cases optimizing computers will be resetting the DDC computers. The typical process optimized today has about 100 loops with 350 inputs. Of the inputs, at least 50% are used directly in optimizing. Optimizing cycles, which, under DDC, would involve resetting the DDC computer, are either short or long: three to 20 minutes on the one hand, or about eight hours on the other. The heaviest frequency falls at about five minutes.

The workshop group, a committee of the Instrument Society of America, considered a number of other aspects of DDC in detail, such as console design and the need for more accurate sensing devices. It also set up several subcommittees to formulate guidelines for specific parts of a DDC system: valves and valve actuators, input multiplexers, and electric transmitters.
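Cascading, as mentioned above, conventionally means letting the outer loop's controller output become the inner loop's set point. The sketch below, in modern terms, is a minimal illustration; the gains, signal names, and proportional-only controllers are our assumptions, not anything specified by the workshop.

```python
# Minimal cascade-control sketch: the outer (primary) loop's output
# becomes the set point of the inner (secondary) loop. Gains and
# variable names are illustrative assumptions only.

def proportional(gain, setpoint, measurement):
    """Proportional-only controller output (reset action omitted)."""
    return gain * (setpoint - measurement)

def cascade_step(temp_setpoint, temp_measured, flow_measured):
    # Outer loop: temperature controller computes a flow set point.
    flow_setpoint = proportional(gain=2.0, setpoint=temp_setpoint,
                                 measurement=temp_measured)
    # Inner loop: flow controller positions the valve to hold that flow.
    valve_signal = proportional(gain=0.5, setpoint=flow_setpoint,
                                measurement=flow_measured)
    return valve_signal

print(cascade_step(temp_setpoint=150.0, temp_measured=148.0,
                   flow_measured=3.5))
```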
Direct Digital Control: What It Is; Where It Stands

Conventional process control is based on control loops using analog signals, of continuously varying pressure (for pneumatic systems) or voltage (for electronic systems), proportional to the values they represent. A loop starts with a sensing element (a thermocouple, for example), which senses a process variable and emits a signal proportional to the variable. This signal feeds to a controller, which compares it with a predetermined value of what it should be (the set point) and sends an output signal to a final control element, such as a valve, which operates to keep the variable at the set point.

In its operation, the controller may add proportional action and reset action, depending on the characteristics of process response. The former specifies the range of output signal that will give a fully open to fully closed valve. The latter refines this by adding a correction to keep the process variable from assuming a new value offset in one direction or the other from the set point. The controller operates according to an equation in which predetermined constants (gains) specify the amounts of proportional and reset action.
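The box stops short of writing the equation out. In modern notation, a controller with proportional and reset (integral) action takes the standard form

\[
u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau, \qquad e(t) = \text{set point} - \text{measured variable},
\]

where the gains K_p and K_i are the predetermined constants referred to; this standard form is our addition, not quoted from the workshop.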
With direct digital control (DDC), a digital computer takes over the functions of all the analog controllers on a process. Signals from the sensing elements feed to an input multiplexer so that the computer can scan them one at a time. Before entering the computer, these are converted to digital signals having discrete values. Output signals from the computer may be converted back to analog or remain digital; these then go to the final control elements.

A DDC computer differs from an optimizing computer in that it operates a process on the basis of predetermined set points, just as analog controllers do. An optimizing computer, on the other hand, operates with a mathematical model of a process to determine the best set points for the process to operate at some optimum level of production or economics.

Though DDC is not yet a commercial reality, it is well along the way. So far, two upcoming installations have been announced (C&EN, May 11, page 58).
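To make the scanning idea concrete: the sketch below cycles through a list of loops, reads each digitized input, applies the proportional-plus-reset equation given earlier, and writes an output. All names, gains, and the stand-in I/O routines are our illustrative assumptions; no actual 1964 machine or instruction set is being described.

```python
# Illustrative DDC scan cycle: one digital computer services many loops
# in turn, replacing one analog controller per loop.

loops = [
    # (loop id, set point, proportional gain Kp, reset gain Ki)
    ("reactor_temp", 150.0, 2.0, 0.1),
    ("column_flow",   3.5, 0.5, 0.05),
]
integrals = {loop_id: 0.0 for loop_id, *_ in loops}

def read_input(loop_id):
    """Stand-in for the multiplexer plus analog-to-digital converter."""
    return {"reactor_temp": 148.0, "column_flow": 3.4}[loop_id]

def write_output(loop_id, value):
    """Stand-in for the digital-to-analog path to the final element."""
    print(f"{loop_id}: output {value:.3f}")

SCAN_PERIOD = 1.0  # seconds between visits to the same loop (assumed)

for loop_id, setpoint, kp, ki in loops:        # one pass of the scan
    error = setpoint - read_input(loop_id)     # compare with set point
    integrals[loop_id] += error * SCAN_PERIOD  # reset (integral) action
    write_output(loop_id, kp * error + ki * integrals[loop_id])
```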
[Photo: RETORTS. Acetylene black is being made on a commercial scale in the U.S. for the first time at Union Carbide's new plant in Ashtabula, Ohio. The battery of 24 retorts has a rated capacity of 8 million pounds a year.]
Carbide Begins Acetylene Black Production

First large-scale acetylene black plant in the U.S. has a capacity of 8 million pounds a year

Acetylene black is being made on a commercial scale in the U.S. for the first time. The domestic source is Union Carbide's 8 million-pound-a-year plant now in operation at Ashtabula, Ohio. Carbide is using the acetylene black captively in the production of its Eveready line of dry-cell batteries. It also plans to sell some of the product in the U.S. and abroad, under the trademark Ucet.

In the past, more than 90% of the acetylene black used in the U.S. has been imported from Canada. Shawinigan Chemicals, Shawinigan Falls, Que., has been the only producer in North America. Carbide says that domestic production will provide a safeguard against interruption of supplies from outside sources, such as the one that occurred in the latter part of 1962 and early 1963. Acetylene black was then in short supply because Canadian production was halted by a labor strike.

Acetylene black hasn't been made commercially in the U.S. before because of the lack of detailed process knowledge. Also, the known markets are rather limited.

The acetylene black plant is operated by Carbide's olefins division. It is adjacent to the company's calcium carbide and acetylene facilities. Acetylene, the feedstock for the process, is piped directly to the new plant.

Process. To make high-purity acetylene black, acetylene is burned with a controlled quantity of air in a battery of specially designed retorts. When the temperature reaches 1500° C., the air supply is shut off. The oxidation reaction stops, and an autodecomposition reaction, which is exothermic and self-sustaining, takes over.

The high-purity acetylene black produced in this way is collected and processed through compression rolls to increase the density of the product. It is packed as 50% and 100% compressed materials, which have bulk densities of 6.25 and 12.5 pounds per cubic foot, respectively.

A major outlet for acetylene black is dry-cell batteries. Acetylene black's physical form makes it preferable to almost all other forms of carbon black for this use, Carbide says. The configuration of the carbon black provides the essential electric contact between the particles of manganese dioxide in the depolarizing material of the cell and the carbon electrode.

The structure of acetylene black also makes it useful where nonconductive materials have to be made electrically conductive. For instance, in aircraft tires or in rubber and plastic tiles, the incorporation of acetylene black makes them sufficiently conductive to prevent build-up of static charge. Other uses for the product are in lubricants and polishing powders.
Kalium to Start Potash Shipments This Year

Kalium Chemicals, Ltd., expects to start commercial shipments of potash from its solution mining operation near Regina, Sask., later this year. The solution mines that supply the plant are already in operation. Completion of the refinery will permit initial shipments of potassium chloride by about October. Full operation is expected early in 1965, and production should be 600,000 tons a year of potash (K2O), according to Boyd R. Willett, Kalium's vice president and general manager.

Kalium, jointly owned by Armour & Co. and Pittsburgh Plate Glass, plans to make bulk shipments of the potash throughout Canada and the U.S. Later, it intends to sell to Japanese and perhaps to European customers. Sales will be made chiefly to formulators rather than to final users. About 95% of all potash produced is used as fertilizer. Demand for potash is growing at about 8% per year, and by 1970 world production should be about 17.5 million metric tons (K2O), compared with 10.3 million metric tons in 1963 (a growth check follows this item).

In Kalium's solution mining process, hot water is pumped down through bore holes into potash beds a mile below the surface. This dissolves the potash-bearing parts of the ores. On return to the surface, the solution is refined by a process of crystallization and drying. Kalium says it has succeeded in dissolving the maximum amount of potash ore with as little salt as possible, making the process economically feasible.

After evaporation, the material goes to a thickener tank for settling, and then to a crystallizer to remove iron and other impurities. Each crystallizer makes a material of a specific particle size. Kalium expects to ship standard, coarse, and granular material.
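As a consistency check on that projection (our arithmetic, not Kalium's): compounding the 10.3 million metric tons of 1963 at 8% per year for the seven years to 1970 gives

\[
10.3 \times 1.08^{7} \approx 17.7\ \text{million metric tons},
\]

in line with the roughly 17.5 million forecast.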