Johan Jansson2, Erik Dahlquist1, Tomas Lindberg2 and S. R. Subramani3


1Mälardalen University, Sweden
2ABB Process Industries AB, Wijkmansgaten Bnr. 339, SE-721 67 Västerås, Sweden
3ABB Singapore



Keywords: optimization, control, pulp, paper, modelling, model


By using a dynamic physical model that is adapted to real process data, robust mathematical process models can be created. In this way it is possible to incorporate process know-how from many different sources, and also to include factors that are not easy to measure. From the dynamic model, a model reduction can be performed to obtain an MPC (Model Predictive Controller). Data reconciliation is needed to keep control of measurements of all kinds. A decision support system keeps track of the process status to support the operators. The production is also optimized at several levels.


S. Bhartiya, P. Dufour and F. J. Doyle (2001) modelled a digester in one dimension, but with chemical reactions described in detail. E. L. Bibeau, P. He and M. Salcudean (2001) have made simulation models of continuous digesters in 3D, using CFD. This gives detailed hydraulic modelling, but the calculations take hours or days to perform. Our method is to use a reasonably sophisticated model, which is fitted to real process data on a more or less frequent basis.

By using a dynamic physical model that is fitted to real process data, robust mathematical process models can be created. In this way it is possible to incorporate process know-how from many different sources, and also to include factors that are not easy to measure on-line. Statistical models suffer from the fact that they are built on a specific equipment configuration; if this is changed, they are no longer valid. Their predictive power is also restricted to regions where the process has been operated before. For good physical models, the predictive power can be quite good also outside the normal operational area, if there is a fair understanding of the physical behavior in that region. This gives a good physical model significantly better robustness than pure statistical models, while the calculation speed can still be very fast compared to e.g. a CFD calculation of a complex structure. From the dynamic physical model, a model reduction can be performed to obtain an MPC (Model Predictive Controller), e.g. by the ABCD method.

Still, if the measured data are of poor quality, the model predictions will also be poor. Thus data reconciliation is needed in order to keep control of all measurements. Control loop diagnostics is another important factor to include. If a control loop is oscillating, a valve is sticking, or there is a bad interaction between control loops, this has to be detected. Otherwise we will also get poor plant performance.

It is also of interest to optimize at several levels. At the top level, we want to coordinate the different parts of the plant. This primarily means giving set points to the main sections: what production rate or quality to run, and how to ramp between different qualities or production rates. At the next level, the operation of each section needs to be optimized. This can be done using the same model, by putting constraints on it. By running different scenarios, the optimal way of operation can be found under the given conditions. If some equipment is subject to new limitations, like a pump not giving full capacity or a screen that is partially clogged, this is handled by the optimization procedure.

Another aspect of this optimization is to determine the performance of the process and the equipment, to find out if something is drifting in the wrong direction. This can be a valve that is starting to clog, or an eroded pump causing the flow rate to decrease slowly. It can also be a malfunctioning sensor or a heat exchanger suffering from fouling. By bringing in information on the sensor and equipment performance, potential risks for future problems can be determined. The information is sent to a root cause analysis system, where different possible root causes are identified, e.g. using a Bayesian network (Weidl et al 2002). The network is first developed from existing process know-how, and complemented continuously by feedback on how good the fault predictions have been. Maintenance staff or operators feed information about the real problem into the system after it has been resolved. This information is then used to update the Bayesian network automatically. The system design is presented in more detail in Dahlquist et al. (2001).


The most important factor with respect to the final fibre properties is the wood itself. Hardwood and softwood, as well as straw, have very different chemical and mechanical properties, but different parts of a tree also have different properties. Young wood produced during the first years of the tree's life will have different fibre lengths and chemical composition than "old wood", produced after the tree has passed its "infancy". Even the place where a tree has been growing is of importance. Eucalyptus, pine or oak grown at the top of a hill may have different properties compared to a tree of the same species grown at the bottom of a valley.

The size of the wood chips is another important factor. Svedman and Tikka (1996) have shown the effect for a batch process, while others, e.g. MacLeod and Johnson (1995), have shown the effect for continuous cooking.

The major factors influencing the extraction of lignin are temperature and the concentrations of hydroxide and sulphide (hydrogen sulphide) for kraft pulp, or sulphite for sulphite pulp. To this we also have to add the residence time in the cook zone.

In simple terms, the sulphide primarily breaks the bonds between fibres and lignin, while the hydroxide extracts the lignin into the water. If there is too little hydroxide or too low a temperature, lignin may precipitate back onto the fibres. A too high concentration of primarily high molecular weight lignin may also cause back precipitation. The kappa number will then go up, and this lignin seems to be more difficult to remove again in the bleach plant, according to some theories presented, and thus should be avoided.

The amount of lignin on the fibres is usually stated as the kappa number, where

% Lignin in dry pulp = kappa number * 0.15

The delignification is normally modelled as taking place in three phases: 1) an initial phase, 2) bulk delignification and 3) residual delignification (see e.g. M. Lindstrom, 1997). The first phase is quite rapid. Thereafter we enter the bulk delignification phase, where there is a good correlation between temperature, hydroxide/sulphide concentrations and residence time on the one hand, and the surface lignin remaining on the fibres on the other. This phase is also relatively rapid. The residual lignin normally amounts to only a few percent of the original lignin, but it significantly affects the bleaching properties and the yield. So the goal must be to let the bulk delignification proceed as far as possible.

What can be concluded is that a high hydroxide concentration gives less residual lignin for all wood and straw species, but high hydroxide, especially at the end of the cook, also has a negative effect on the fibre strength. A high concentration of sulphide has a lower impact, but becomes essential if the hydroxide concentration is low. Temperature does not affect the residual delignification phase as much as it does the bulk delignification.

A good strategy should be to add excess sulphide during impregnation, add plenty of hydroxide during the actual bulk delignification, and reduce the total amount of free alkali once the actual cook is over, to reduce the risk of further breakdown of cellulose and hemicellulose. It is risky to reduce temperature and hydroxide too fast in the washing, as back precipitation of lignin can be a problem. If the temperature is too high at the end of the cook, we may get degradation that gives viscosity losses, as the fibre molecular chains are broken down.

Poor distribution of liquor can be due to a non-homogeneous chip size distribution across or along the digester. If the distribution across the digester is poor, channeling can occur: the liquor passes much more easily where there is more free space between the chips. In this case there will be significant variations in kappa number, and it may be quite difficult to detect the reason for this phenomenon.

When the chips are of different sizes, the diffusion of chemicals into the chips, and of lignin and spent chemicals out of the chips, will vary. This also means that large chips may have a higher kappa number than small ones, although they have experienced the same cooking cycle. When there are sawmill fines in the feed, they may cause clogging of the screens. It is important to keep the screens clean, and to detect clogging and accompanying hang-ups. If the fines move into the screen, we risk both clogging of the screen and hang-ups of the chips at the wall, which will cause an increased kappa number variation of the digested fibres.

In our model we include the mechanical aspects, like transfer of hydroxide and sulphide ions into the chips and diffusion of lignin out of the chips. Different chemical reactions are also considered for different types of lignin and hemicellulose, and for the action of hydroxide, sulphide and temperature. The digester column is separated into a number of segments in both the vertical and the horizontal direction (2D). The number of segments can be varied. Inflows, outflows, chemical composition and chemical reactions are included. The flow distribution is handled by assuming a chip size distribution, which can be varied in time in the feed. Compaction of the chips as lignin is dissolved is also considered.
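
The segmented-column idea can be illustrated with a deliberately simplified sketch. It collapses the 2D grid to a single vertical plug-flow column with one lumped first-order delignification reaction; the rate constant RATE, the Arrhenius coefficients and all operating numbers below are illustrative assumptions, not values from the actual model:

```python
import math

RATE = 4e-5  # illustrative lumped rate constant, not taken from the paper

def cook_column(n_seg, residence_h, temp_K, oh_conc, lignin_in=0.28):
    """Plug-flow sketch: the digester column is split into n_seg vertical
    segments; in each one a first-order bulk-delignification step removes
    lignin at a rate proportional to k(T) and the hydroxide concentration."""
    dt = residence_h / n_seg                   # residence time per segment
    k = math.exp(43.2 - 16115.0 / temp_K)      # Arrhenius-type relative rate
    lignin = lignin_in                         # lignin mass fraction in chips
    profile = [lignin]
    for _ in range(n_seg):
        lignin *= max(0.0, 1.0 - RATE * k * oh_conc * dt)
        profile.append(lignin)
    return profile

prof = cook_column(n_seg=20, residence_h=3.0, temp_K=440.0, oh_conc=25.0)
kappa_out = prof[-1] * 100.0 / 0.15            # % lignin = 0.15 * kappa number
print(round(kappa_out, 1))
```

Increasing the number of segments refines the lignin profile along the column, mirroring the 20- versus 50-element comparisons shown in the figures.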


The most common factor to control is the H-factor (Vroom, 1957):

H = ∫ exp(43.2 − 16115/T) dt    (1)

where the integral is taken over the cooking time t (hours) and T is the absolute temperature (K).

The H-factor correlates the dissolution of lignin to the residence time (the integral over dt) and the temperature T. A given set of constants is valid for a certain wood type. A specified H-factor will give a certain kappa number and yield. If the feed flow increases, we can use the formula to determine what temperature is needed, assuming that the concentrations of hydroxide and sulphide are kept constant in relation to the wood feed. The H-factor then gives the dissolution of lignin. For a different hydroxide or sulphide concentration, a different chemical composition, or a different average chip size, a new H-factor has to be established.
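
For a given temperature trajectory, the H-factor can be evaluated numerically. The sketch below assumes the commonly quoted Vroom constants (43.2 and 16115, with T in Kelvin and time in hours) and applies trapezoidal integration to an example ramp-and-hold cooking profile:

```python
import math

def h_factor(times_h, temps_K):
    """Vroom H-factor: integral of the relative delignification rate
    k(T) = exp(43.2 - 16115/T) over the cooking time (hours, Kelvin)."""
    H = 0.0
    for i in range(1, len(times_h)):
        k0 = math.exp(43.2 - 16115.0 / temps_K[i - 1])
        k1 = math.exp(43.2 - 16115.0 / temps_K[i])
        H += 0.5 * (k0 + k1) * (times_h[i] - times_h[i - 1])  # trapezoid rule
    return H

# Example: ramp from 80 C to 170 C over 1.5 h, then hold at 170 C for 2 h.
times = [0.0, 0.5, 1.0, 1.5, 2.5, 3.5]
temps = [353.15, 383.15, 413.15, 443.15, 443.15, 443.15]
print(round(h_factor(times, temps)))
```

Note that the relative rate is roughly 1 per hour at 100 C, which is the classical normalisation of the H-factor.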

There is also another factor, called the G-factor, giving the degradation of cellulose and hemicellulose as a function of the same variables, time and temperature, but with different constants. This reflects the "viscosity" of the pulp. In reality it is the polymer chain length of the cellulose fibres that is determined. A high polymer chain length gives a higher viscosity number and normally also a better fibre strength.

To get the right H-factor set point when the wood is varying, we would also need a feed-forward signal for the wood fed to the digester. This can in principle be achieved by measuring the NIR (Near Infra-Red) spectrum of the incoming wood, which can give both the chemical composition and the moisture content of the chips. In Sweden, tests with on-line measurements of this have been done at the conveyor feeding the chip bin (Axrup, 2001). As mentioned earlier, the alkali concentration in the white liquor is also very important.

Actually, we can state the lignin dissolution rate with an expression similar to the H-factor expression:

equation 2

where a, b, c, d, f are constants specific to each wood type.

Another important factor to control is the impregnation of the chips. This is done by steaming the chips to drive out air, followed by impregnation with white liquor under pressure. It is best to keep the sulphide concentration as high as possible. Sometimes black liquor is used to enhance the wetting. It is very important to keep the chip feed as constant as possible, in both mass flow and composition. These are often difficult to keep constant, which makes it hard to keep the level constant in the digester. With the model predictions, we can see how the composition profile through the digester varies for the important chemical components of the wood chips.


So far we have mainly talked about what is included in the model. The next concern is how to tune the model to the process, and how to make use of it for process control. Typically we tune the constants a-f in the expression in the previous paragraph. First, we need something to measure, preferably on-line, although frequent lab measurements may be sufficient. If we do not have the right measurements, we cannot update the model. Important variables to measure frequently are free alkali, dissolved lignin and total dissolved solids in the white liquor and the extraction lines. Sulphide is also interesting to follow. In the future it will also be interesting to add wood properties measured e.g. by NIR spectra, to determine the lignins (fast/slow), hemicellulose, cellulose and water content of the incoming wood at the conveyor. With these measurements, we can tune the model for a specific wood mix. In principle, we use the prediction model to determine the dissolution of lignin and the consumption of chemicals, and then adjust the parameters until the model-predicted values and the measured values are as close as possible. This is done for the cooking liquors.
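
The parameter adjustment described above can be sketched as a least-squares fit. Since the full rate expression and the constants a-f are not reproduced here, the example fits a single illustrative rate constant of a one-parameter first-order model to synthetic dissolved-lignin "measurements" with a simple grid search:

```python
import math

def predict_dissolved(k, times_h, oh=20.0, lignin0=0.28):
    """Dissolved lignin (fraction of wood) predicted at each sampling time
    by a one-parameter first-order model (purely illustrative)."""
    return [lignin0 * (1.0 - math.exp(-k * oh * t)) for t in times_h]

def fit_rate(times_h, measured, k_grid):
    """Choose the rate constant that minimises the sum of squared residuals
    between predictions and lab / on-line measurements."""
    def sse(k):
        return sum((p - m) ** 2
                   for p, m in zip(predict_dissolved(k, times_h), measured))
    return min(k_grid, key=sse)

times = [0.5, 1.0, 2.0, 3.0]
# Synthetic "lab measurements", generated with k = 0.02 plus small noise:
measured = [0.0508, 0.0925, 0.1550, 0.1954]
best_k = fit_rate(times, measured, [i * 0.001 for i in range(1, 51)])
print(best_k)
```

In practice a proper nonlinear least-squares solver would be used over all the constants simultaneously, but the principle, predict, compare, adjust, is the same.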

Figure 2

Figure 2. Sodium hydroxide and sodium sulphide concentrations in the wood chips and the surrounding media from top to bottom of a digester

We then look at the final kappa number of the pulp after the cook, measuring the kappa either manually or preferably with the SPP (Smart Pulp Platform) or a similar instrument.

The technique for doing this parameter estimation has been automated in the system. If data reconciliation is applied to the digester instrumentation, like flow meters, thermocouples and pressure indicators, the operator or process engineer will get information about possible problems that can affect the balance calculations. Sometimes these values can be used to compensate the measurements. Otherwise the information is just used to discard the measurements during a certain time period when adjusting the model.

Figure 3

Figure 3. Chip and free liquor temperature from top to bottom of the digester, calculated with models using 20 and 50 volume elements, respectively, for the whole digester

Figure 4

Figure 4. Kappa number and lignin concentration of the chips/fibres as they pass from top to bottom of the digester, for digester models with 20 and 50 volume steps, respectively


When a dynamic and steady-state process model has been made, optimization constraints have to be added. For the digester, these are maximum yield at a given production rate and kappa number at the lowest possible cost. In the paper machine area, it may be to produce the right quality at the lowest possible cost with respect to fibres, chemicals and energy, in the shortest possible time. This includes following a production schedule, which normally includes a number of grade changes. To do this we need fast grade changes, but we also have to limit the number of paper breaks.

We also need to include known production limitations and planned stops. To this comes the risk of unplanned stops and production limitations due to different types of failures, like eroding pumps, clogging valves, wires or screens, and fouling heat exchanger surfaces. This gives a new set of constraints to include in the optimization. An example for the continuous digester can be seen in Table 1 below. In the first column we see the normal set points for the circulation loop flows, chemical dosage and temperature for a certain production capacity. The second column shows the optimal set point values to achieve the same kappa number at an increased capacity, through higher yield and with reduced costs for chemicals and energy.

Table 1

Table 1 Optimization example - digester

Figure 5

Figure 5 Optimized temperature and alkali profiles compared to the original recipe

As can be seen, the economic profit is significantly higher with the optimized parameter settings: 0.074 US$/s compared to 0.048 US$/s. The difference is that the temperature is lower at the beginning of the cook, but higher at the end of the cook and during washing in the optimized case.
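
A minimal sketch of this kind of set-point optimization: a toy kappa response and a toy cost function (both invented for illustration, not the models of this paper) are searched over a grid of cook temperatures and alkali charges for the cheapest operating point that still meets a kappa target:

```python
import math

def kappa_model(T_K, alkali, time_h):
    """Toy kappa response: kappa falls with H-factor and alkali charge.
    The coefficients are invented for illustration."""
    H = time_h * math.exp(43.2 - 16115.0 / T_K)
    return 90.0 * math.exp(-1.2e-5 * H * alkali)

def cost(T_K, alkali):
    """Illustrative operating cost: steam cost rises with temperature,
    chemical cost with alkali charge."""
    return 0.002 * (T_K - 373.0) + 0.01 * alkali

# Grid search for the cheapest operating point meeting kappa <= 30.
best = None
for T in range(430, 451):                  # cook temperature, K
    for a in range(10, 31):                # alkali charge, arbitrary units
        if kappa_model(T, a, 3.0) <= 30.0:
            c = cost(T, a)
            if best is None or c < best[0]:
                best = (c, T, a)
print(best)
```

A real implementation would run scenarios against the full digester model under equipment constraints, but the structure, constrained search over candidate set points, is the same.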

The optimization is first made for the whole plant, to get the overall balance for the next 24 hours. This gives a schedule for how to operate the different parts of the plant. The set points for capacity etc. at each section are then used as major constraints for the optimization of the specific sections, like the digester above. At an even lower level, the major variables are also controlled together using MPC (Model Predictive Control). This is a multivariable control including an on-line or real-time optimization algorithm. It keeps the operation as good as possible, even if some limitation occurs in between the calculations of the "optimization scheduler" above. The MPC computes set points and maintains them directly at the DCS level. If something happens to the optimization software or major sensors, a backup strategy handles this at the DCS level, so that operation can proceed even without "the sophistication".
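
The MPC layer can be illustrated with a toy receding-horizon controller. The first-order process model, the candidate-move grid and the quadratic cost are all illustrative assumptions; at each step the controller simulates every candidate move over the horizon and applies the best first move, mimicking the optimize-apply-repeat cycle described above:

```python
def mpc_step(x, setpoint, horizon=5):
    """One receding-horizon step for the toy process x[t+1] = 0.8*x + 0.2*u:
    simulate each candidate control move over the horizon and return the
    move with the lowest quadratic tracking cost."""
    candidates = [i * 0.5 for i in range(21)]      # u in 0.0 .. 10.0
    def cost(u):
        xs, total = x, 0.0
        for _ in range(horizon):
            xs = 0.8 * xs + 0.2 * u                # illustrative process model
            total += (xs - setpoint) ** 2
        return total
    return min(candidates, key=cost)

# Closed loop: apply the first move, re-measure, re-optimize each step.
x = 0.0
for _ in range(20):
    u = mpc_step(x, 5.0)
    x = 0.8 * x + 0.2 * u
print(round(x, 2))
```

An industrial MPC solves a multivariable constrained problem instead of a scalar grid search, but the receding-horizon structure is the same.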

In this way we have built an optimization hierarchy that gives both advanced optimization and good robustness.


Several different data reconciliation methods have been tested. These can be purely statistical or based on physical models. Right now we are implementing both types in several applications in the area of power plants and stock preparation/paper machines, and soon also in the digester area. For the physical-model-based approach, we principally use the measured values from the DCS/information system in the model algorithms. If the balance does not close well enough, the values are adjusted one at a time to fit the balance as far as possible. We also get an estimate of the fault size by taking the difference between the measured value and the one calculated from the balance.
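
For a single linear balance, this kind of reconciliation has a closed form. The sketch below adjusts three hypothetical flow measurements around a splitter so that the balance F1 = F2 + F3 closes exactly, weighting the adjustments by the measurement variances (the classical weighted least-squares solution; the numbers are invented):

```python
def reconcile(meas, variances):
    """Adjust measurements (F1, F2, F3) so that the mass balance
    F1 = F2 + F3 closes exactly, using weighted least squares.
    For a single linear constraint a.x = 0 the closed-form solution is
    x = m - V a (a.V a)^-1 (a.m), with V the measurement variances."""
    a = [1.0, -1.0, -1.0]                              # balance coefficients
    r = sum(ai * mi for ai, mi in zip(a, meas))        # imbalance a.m
    s = sum(ai * ai * vi for ai, vi in zip(a, variances))  # a.V a
    return [mi - vi * ai * r / s
            for mi, vi, ai in zip(meas, variances, a)]

# Flow meter readings (t/h) that do not close: 100.0 != 58.0 + 45.0
adjusted = reconcile([100.0, 58.0, 45.0], [4.0, 1.0, 1.0])
print(adjusted)
```

The least reliable meter (largest variance, here F1) absorbs most of the correction, and the size of each adjustment is exactly the fault estimate mentioned above.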


The functions described are being implemented at a couple of mills during spring 2002; others were in use earlier. The importance of checking sensors and loop tuning is illustrated by the following example from East Asia.

There were quite a lot of paper breaks at a paper machine. The mill suspected that there could be problems with instrumentation and loop tuning in the wet end/stock preparation area, and we were contacted to investigate. First, it was noticed that the filtering of the instrument signals for the consistency and flow measurements was very heavy. This had hidden the fact that there were serious sticking problems in some valves. When the heavy filtering was removed, the real variation of these signals could be seen (Figure 6) and the valves fixed.

There was also a three-level cascade control loop in the wet end of the machine, in combination with bad flow and consistency measurements. Three-level cascade control loops are nearly impossible to tune when trial-and-error methods are used. What happens is that the inner loops are not tuned fast enough to keep up with the changes from the controllers above them. The result is nearly always control loops that oscillate.

Figure 6

Figure 6. Filtered and unfiltered consistency. In the unfiltered signal the true variation can be seen, while it is hidden in the filtered signal

Figure 7a

Figure 7b

Figure 7. In the power spectrum to the right we can see that there is a cyclic action, and in the trend plot to the left we can see how the unstable recovered stock consistency also causes a strong variation in the mixing chest outlet consistency

A three-level cascade control should be tuned by starting with the innermost control loops (the inlet stock flows). These were all retuned so that they had the same closed-loop dynamics. In Figure 7 above we can see the power spectrum of the control loop cascade and how the poor consistency control of the recovered fibres influences the mixing chest outlet consistency. By retuning the controls and moving sensors to better positions, the number of paper sheet breaks was reduced by 25% and the duration of sheet breaks by 29%.
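
Oscillation detection of the kind shown in the power spectrum of Figure 7 can be sketched with a plain discrete Fourier transform: remove the mean, compute the half-spectrum, and report the strongest peak. The synthetic consistency trend and sampling rate below are illustrative:

```python
import cmath
import math

def dominant_frequency(signal, sample_period_s):
    """Locate the strongest non-zero peak in the power spectrum of a
    mean-removed signal - a simple way to flag an oscillating loop."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2):                     # skip DC, use half-spectrum
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p = abs(X) ** 2                            # power at bin k
        if p > best_p:
            best_k, best_p = k, p
    return best_k / (n * sample_period_s)          # frequency in Hz

# Synthetic consistency trend: a 0.05 Hz oscillation plus a slow drift.
dt = 1.0                                           # one sample per second
sig = [3.0 + 0.2 * math.sin(2 * math.pi * 0.05 * t * dt) + 0.001 * t
       for t in range(200)]
print(dominant_frequency(sig, dt))
```

The same idea, applied to the unfiltered consistency signals, makes a cyclic valve or tuning problem visible even when heavy filtering has smoothed the raw trend.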

At a Stora Enso paper mill and a power plant in Sweden (ENA Power in Enköping), as well as at a mill in Australia, data reconciliation is being implemented to monitor sensor, equipment and loop performance. The 3d-MPC application for the digester, as well as digester diagnostics using the model, will be implemented at a mill in Australia (Visy Pulp and Paper) in the summer of 2002. At a Swedish pulp mill, overall mass balance optimization is also being implemented at present (summer 2002). Plant optimization using the tools mentioned has been performed at RAPP in Indonesia and at several other mills in South East Asia.


As can be seen from this presentation, it is possible to use physical models for advanced control, diagnostics and optimization after tuning the models by parameter estimation. This gives both fast calculations and robustness. Our belief is that this will be the future trend in advanced control and optimization in all process industries, and especially in the pulp and paper industry.


S. Bhartiya, P. Dufour and F. J. Doyle : Thermal-Hydraulic Modeling of Continuous Pulp Digester, Pulp Digester Modeling and Control Workshop, Annapolis, US, June 2001.

E. L. Bibeau, P. He and M. Salcudean: Modeling of Wood Chips Shear Forces in Digesters, Pulp Digester Modeling and Control Workshop, Annapolis, US, June 2001.

C. Lindgren: Kraft Pulping Kinetics and Modeling, the Influence of HS-, OH- and Ionic Strength, PhD thesis, KTH, Stockholm, 1997.

M. Lindstrom: Some factors affecting the amount of residual phase lignin during kraft pulping, PhD thesis, KTH, Stockholm, 1997.

E. Dahlquist, L. Shuman, R. Horton and L. Hagelqvist: Economic Benefits of Advanced Digester Control, Pulp Digester Modeling and Control Workshop, Annapolis, US, June 2001.

Svedman M. and P. Tikka: Effect of softwood morphology and chip thickness on pulping with displacement kraft batch cooking, 1996 Pulping Conference, pp. 767-777.

MacLeod M. and T. Johnson: Kraft Pulping Variables, TAPPI Short Course in Savannah, 1995.

Vroom K. E.: The "H" factor: A means of expressing cooking times and temperature as a single variable, Pulp and Paper Magazine of Canada (1957) 58, 228.

Axrup L.: Determination of wood properties using NIR, Journal of Chemometrics (2001).

E. Dahlquist, T. Lindberg, C. Karlsson, G. Weidl, C. Bigaran and A. Davey: Integrated process control, fault diagnostics, process optimization and production planning - Industrial IT, Paper for the 4th IFAC Workshop on On-line Fault Detection and Supervision in the Chemical Process Industries, June 8-9, 2001, Seoul, Korea.

Weidl G., Dahlquist E. (2002): Root Cause Analysis for Pulp and Paper Applications, in Proceedings of the 10th Control Systems Conference, pp. 343-347, Stockholm, Sweden, June 3-5, 2002.

