Building towards Industry 4.0: a practical approach

Industry 4.0 is a phrase we hear often, but what does it mean? And how do we practically plan for, and implement, it? Moreover, does it really take a 4th industrial revolution, or can we take a modular, evolutionary path to realize the promise of Industry 4.0 in tissue manufacturing?

One view of Industry 4.0 is the connection of sensors and other process information to real-time data management systems and the internet, which can inform better decision-making through modelling software and machine learning. In this session BTG explain how they can build a network of value creation from a bottom-up approach, based on their subject matter expertise, sensor platforms and data modelling, to address specific tissue makers’ needs for product quality and asset performance.


Data is generated when a process is measured and quantified. In tissue production, for example, the primary goal is to create a sheet with the target characteristics or properties set by the customer. To know if those targets are met, measurements must be taken and data recorded. When targets are met, it is most important to know what process conditions led to that success, i.e. what “recipe” or “formula” was successful? Again, it is data, process data in this case, that supplies the answer.

To build a successful business on tissue production, one must continually innovate, looking for a less expensive or slightly different recipe or production technique that produces the same or even an improved product. Clearly, process and product improvement require a structured approach to the management of production data.

The management of process data can be broken down into three categories:

1. Data acquisition – The generation of data by a sensor or a laboratory technician

2. Data storage – The archiving of data in some form for future use

3. Data analysis – The recovery and use of past data to make decisions regarding future actions

Much of the emphasis for enterprises wishing to move to data-based decision-making and process control is on the first two points: the acquisition and storage of data.

Figure 1 Typical process data system

For big data, start small

Perhaps there is another way for papermakers and their always-busy process engineers to use data to solve real-world, real-time problems. By introducing an element of domain or subject matter expertise into their data systems, focussed, modular but scalable solutions can be developed with far less investment in time and resources. For example, suppose we want to refine the performance of our Yankee dryer and crepe operations on the tissue machine. Firstly, we define some objectives for the solution. In what ways do we think the Yankee and crepe performance have a cost or quality impact? We might come up with a list like this:

• Quality parameters
     o Bulk
     o Stretch
     o Handfeel
• Productivity
     o Crepe ratio
• Energy usage
     o Steam per tonne
     o Gas supply per tonne
• Cost avoidance
     o Short regrind interval
     o Chattermark

These would be some of the outputs which we’d want to model in a data system. So what would be our data inputs? What process parameters either measure the performance or are potentially influencing factors in it? Again, we make a list, which might include:

Performance: How we measure against our objectives

• Yankee and reel speed
• Tissue lab data
• QCS scanner: basis weight and moisture
• Motive Steam supply
• Hood burner gas flow

Factors: what parts of the process influence performance?

• Incoming fibre blend
• Coating chemicals
• Type of crepe blade
• Age of crepe blade
• Wet-end pH
• Press roll solids
• Felt cleanliness
• Coating layer contamination
• Doctor blade vibration
• Coating spray overlap and evaporative load onto the Yankee

There may be others, but the above list will cover many of the parameters an engineer would look at if troubleshooting a Yankee. But some of these are not obvious ‘tags’ which a general data system would pick up and, in some cases, we need to add ‘fixed’ data to calculate some KPIs from variable process data.

An example might be the spray overlap and evaporative load of the coating spray bar onto the Yankee. A bad overlap value might make a streaky coating that is not uniform in the CD. A low evaporative load could make a hard coating, which gives poor bulk and softness, excessive blade vibration and eventually chatter damage (reference 1). But these are not process tags a general data system would pick up. What we need is some fixed data: spraybar width, distance from the Yankee, distance from the press roll, number of nozzles, and spray angle and volume per nozzle as a function of pressure; from these we create new calculated tags. The variable process data tag is in fact the spraybar pressure. If we track this (and it can change due to filter blockage or variable supply pressure) then we can see how the calculated tags change. So, we have used our expertise to build a model. This can apply to much of the above data, and thus a data model for Yankee performance, containing only the important data, can be built.
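To make the idea of calculated tags concrete, here is a minimal Python sketch. All fixed values (nozzle spacing, spray angle, standoff distance, flow coefficient) and the function names are hypothetical placeholders, not data from any real spraybar; only the geometry-from-pressure logic is the point.

```python
import math

# Hypothetical fixed data for an illustrative spraybar (not real values).
NOZZLE_COUNT = 20
NOZZLE_SPACING_MM = 150.0        # distance between nozzle centres
SPRAY_ANGLE_DEG = 110.0          # full cone angle of each nozzle
DISTANCE_TO_YANKEE_MM = 200.0    # spraybar-to-surface distance
FLOW_COEFF = 0.8                 # l/min per sqrt(bar), assumed nozzle constant

def spray_footprint_mm(distance_mm, angle_deg):
    """Width of one nozzle's spray pattern at the Yankee surface."""
    return 2.0 * distance_mm * math.tan(math.radians(angle_deg / 2.0))

def overlap_percent(distance_mm, spacing_mm, angle_deg):
    """Calculated tag: how much adjacent spray cones overlap (negative = gaps)."""
    footprint = spray_footprint_mm(distance_mm, angle_deg)
    return 100.0 * (footprint - spacing_mm) / footprint

def coating_flow_lpm(pressure_bar, nozzles=NOZZLE_COUNT, k=FLOW_COEFF):
    """Calculated tag: total coating flow derived from the one live tag, spraybar pressure."""
    return nozzles * k * math.sqrt(pressure_bar)

# The only variable process tag is the spraybar pressure; everything else is fixed data.
overlap = overlap_percent(DISTANCE_TO_YANKEE_MM, NOZZLE_SPACING_MM, SPRAY_ANGLE_DEG)
flow = coating_flow_lpm(pressure_bar=2.5)
```

Tracking the live pressure tag through functions like these turns a handful of fixed numbers into continuously updated overlap and load indicators.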

This brings up another point: using our expertise, we can identify missing data. To continue with our above example, another critical factor for Yankee and crepe performance is coating contamination. There is currently no objective, direct way to measure this, but from other measurements we can build a model which predicts it. However, a key component of this model is the consistency of the tissue machine WW1 loop (references 2, 3). Thus our modular, expertise-driven approach identifies a new data input without which the model would be weak, and this approach would specify the appropriate instrument investment to complete the model. Once again, a general top-down data-driven approach would not identify this.

So here we have a module which is quick to install, easy to understand, and gives great insight into a critical operation for less than 100 tags of data.

Figure 2 A Yankee performance data visualization system

If we wanted to expand our data reach, we could think of another performance-related objective, such as the delivered machine-direction wet tensile strength for a towel grade. What data would we choose to model that behaviour? Fibre choice and mix, and refining data, for sure. Perhaps something formation-related, such as the jet-to-wire ratio. Chemical additions and the wet-end chemistry.

Using our expertise-driven approach, we can also think about the missing data, and maybe run some lab tests to see if it would be significant in the model and worth the investment in the necessary sensor technology. Examples of extra data we might want could include fibre morphology, post-refining freeness or the wet-end charge demand of the tissue machine.

Managing and analysing the data

As we start to grow our data system, we need to consider exactly how we do the analysis.


The starting point for a simple module, such as our Yankee performance module, could be graphical analysis, or trends. Trends are the most basic form of process understanding, and improvement starts with simply being able to present real-time data to operators and engineers. In a paper-based world, if a variable is drifting away from a target, and is only being observed as a written number every hour or every other hour, it might take several observations or several hours before a problem is detected, and perhaps even longer before an action is taken to correct it. If that same variable is shown as a trend line updated by the minute or second, the drift away from a target value can be noticed sooner and corrected in a much timelier manner.
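The drift-detection idea above can be sketched in a few lines of Python. This is a minimal illustration, assuming a minute-by-minute feed of one variable with a hypothetical target and deadband; a real trending system would do far more.

```python
from collections import deque

class DriftWatch:
    """Flag when the recent average of a variable wanders outside a deadband around target."""

    def __init__(self, target, deadband, window=10):
        self.target = target
        self.deadband = deadband
        self.recent = deque(maxlen=window)  # rolling window of latest readings

    def update(self, value):
        """Add one reading; return True if the rolling average is off-target (alarm)."""
        self.recent.append(value)
        avg = sum(self.recent) / len(self.recent)
        return abs(avg - self.target) > self.deadband

# Hypothetical basis-weight readings drifting upward, minute by minute.
watch = DriftWatch(target=19.0, deadband=0.3)
alarms = [watch.update(r) for r in [19.0, 19.2, 19.4, 19.5, 19.6, 19.7]]
```

With hourly written logs, the same drift could run for hours before anyone noticed; a rolling check on streamed data raises the flag within minutes.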

Figure 3 A multibox trend

Cost-based operating

As we grow our machine data access with more modules or additional tags, a good data management system should have access to values beyond those from a single process area. One approach to improving machine operations is to take cost data and integrate it with process data to create meaningful, dollar-based targets. Displaying a pseudo-variable of lost dollars per hour determined by the difference between a grade-based target and an actual steam flow is more useful and meaningful to an operator than simply saying “try not to exceed a flow of X kg/hr”.
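A minimal sketch of such a cost pseudo-variable follows. The steam price and target flow are hypothetical placeholder values; the point is only the conversion of an excess flow into a dollars-per-hour figure an operator can act on.

```python
# Assumed steam cost; a real system would pull this from the mill's cost data.
STEAM_COST_PER_TONNE = 25.0   # $ per tonne of steam (illustrative)

def lost_dollars_per_hour(actual_steam_kg_hr, target_steam_kg_hr):
    """Pseudo-variable shown to operators instead of a raw flow limit."""
    excess_kg_hr = max(0.0, actual_steam_kg_hr - target_steam_kg_hr)
    return excess_kg_hr / 1000.0 * STEAM_COST_PER_TONNE

# 600 kg/hr over the grade-based target costs $15/hr at the assumed price.
loss = lost_dollars_per_hour(actual_steam_kg_hr=12600.0, target_steam_kg_hr=12000.0)
```

Framed this way, “you are losing $15 an hour” lands harder than “try not to exceed 12,600 kg/hr”.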

Figure 4 Cost based operations


Many facilities depend on grade-based centerlines to successfully produce a range of products. Modern data management supports this effort in two ways: not only does it enable the display of centerline limits on a per-variable and per-grade basis, but access to large quantities of historical data also makes it relatively easy to apply statistical methods to the process data to create meaningful and achievable limits.
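The statistical side of this is straightforward; here is a minimal Python sketch deriving per-grade limits as the historical mean plus or minus a chosen number of standard deviations. The basis-weight history is a fabricated illustrative series, not real mill data.

```python
import statistics

def centerline_limits(history, sigmas=3.0):
    """Derive a centerline target and control limits from archived readings for one grade."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return {"target": mean, "low": mean - sigmas * sd, "high": mean + sigmas * sd}

# Hypothetical archive: basis weight (gsm) readings for one grade.
history = [18.9, 19.1, 19.0, 19.2, 18.8, 19.0, 19.1, 18.9]
limits = centerline_limits(history)
```

Because the limits come from what the machine has actually achieved, they are meaningful and attainable rather than aspirational.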

Taking the next step

As more data is integrated, the user can take the next steps on the road to an Industry 4.0 system. The platform for this would be the previous modules, the additional process sensor data plus a more general suite of data from the entire process. This could be in the 1000-5000 tag range for a typical tissue machine line. At this point, we can begin to use more sophisticated techniques of data analysis.

Statistical Analysis

An exciting application of a modern data management system is to use it proactively to find ways to improve the economic performance of a process or machine. No matter how hard we try to hit centerlines and targets, there is always some fluctuation in process variables, leading to a range of outcomes in product qualities. Fortunately, there are statistical software packages which can analyze large amounts of interrelated data and determine the most significant relationships. Consider again the quality parameter of MD wet tensile. Where a given facility might believe their best “handle” on MD strength was an expensive wet end additive, statistical analysis of past data might indicate that an operational variable such as refining intensity offers almost the same level of control, at considerably less cost.
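As a first-pass screen for such relationships, a simple Pearson correlation across historical tags already tells a story. The series below are fabricated illustrative numbers, and a real analysis would use a proper statistics package with far more data, but the sketch shows the shape of the comparison.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical history: MD wet tensile vs two candidate "handles".
wet_tensile   = [105, 108, 112, 110, 118, 121, 125]   # N/m (illustrative)
additive_dose = [2.0, 2.1, 2.3, 2.2, 2.6, 2.7, 2.9]  # kg/t (illustrative)
refining_kwh  = [60, 63, 68, 66, 75, 78, 82]          # kWh/t (illustrative)

handles = {
    "additive_dose": pearson(additive_dose, wet_tensile),
    "refining_kwh": pearson(refining_kwh, wet_tensile),
}
```

If the cheap operational variable correlates nearly as strongly as the expensive additive, that is exactly the kind of finding worth chasing.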

Statistical Modelling

The next step in the progression of sophistication of data management applications is the real-time use of statistical multivariate analysis. There are situations when a significant process variable which is either expensive or impossible to measure in real time (such as a lab-based quality test) can be correlated, with a high level of confidence, to a set of “standard” process variables (i.e. online pressure, temperature, flow, speed) which are already monitored. A multivariate analysis of historical data can yield a mathematical model which describes the influence of each of the measured variables on the output variable.

The model can then be run in real time and use constantly updated process variables to calculate the virtual output variable. In some circles this type of model is called a “soft sensor.”
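In its simplest linear form, a soft sensor is just a set of fitted coefficients applied to the live tags on every scan. The coefficient values and tag names below are hypothetical placeholders standing in for an offline multivariate fit; no real model is implied.

```python
# Hypothetical coefficients from an offline multivariate fit of historical data.
COEFFS = {
    "intercept": 40.0,
    "refining_kwh": 0.9,       # effect of refining energy (kWh/t)
    "jet_wire_ratio": -15.0,   # effect of jet-to-wire ratio
    "wet_strength_dose": 8.0,  # effect of wet-strength chemical (kg/t)
}

def soft_sensor(tags):
    """Predict MD wet tensile from standard online tags, recalculated each scan."""
    y = COEFFS["intercept"]
    for name, coeff in COEFFS.items():
        if name != "intercept":
            y += coeff * tags[name]
    return y

# Live tag snapshot (illustrative values) -> virtual wet tensile reading.
live = {"refining_kwh": 70.0, "jet_wire_ratio": 0.98, "wet_strength_dose": 2.4}
predicted_wet_tensile = soft_sensor(live)
```

The output behaves like any other tag: it can be trended, alarmed and centerlined, despite no physical instrument existing for it.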

Figure 5 Simplified model for GMT 

Soft Sensors

The use of soft sensors presents an opportunity to show tangible economic returns. To return to our example, a soft sensor for MD wet tensile would alert operators to a potential problem with time to rectify it before too much down grade material was manufactured.

The delay incurred by the traditional method of completing the reel, taking a sample, preparing the wet tensile test (which includes 10 minutes of oven cure time) and getting results back from the lab is considerable, so by building a soft sensor, several rolls of paper may be saved.
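A rough back-of-envelope estimate shows what that delay costs. Every number here (delay, machine speed, trim, basis weight) is a hypothetical illustration, not data from any real machine.

```python
def tonnes_at_risk(delay_min, speed_m_min, trim_m, basis_gsm):
    """Potentially off-spec production made during the lab-test delay."""
    area_m2 = delay_min * speed_m_min * trim_m   # sheet area produced in the delay
    return area_m2 * basis_gsm / 1e6             # g/m2 * m2 = grams; /1e6 -> tonnes

# Illustrative: 45 min total delay, 1800 m/min, 5.5 m trim, 19 gsm towel grade.
at_risk = tonnes_at_risk(delay_min=45, speed_m_min=1800, trim_m=5.5, basis_gsm=19)
```

Under these assumed numbers, roughly eight and a half tonnes of product are made before the lab result arrives; a soft sensor that flags the excursion within minutes shrinks that exposure dramatically.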

The Industry 4.0 Future

The final goal might be to take these techniques of statistical modelling and soft sensor models and move to a model predictive control solution. For this solution, we match the data analytics with regulatory control software, such that the corrective actions for process drift in quite complex situations become progressively automated. For our wet tensile example, we could foresee that furnish and broke mix, softwood refining, wet-strength chemical addition and other parameters are constantly adjusted according to a predictive model based on a soft sensor for wet tensile, with novel process inputs from smart sensor platforms such as wet-end charge demand and fibre morphology.
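The core of the predictive-control idea can be hinted at with a deliberately tiny one-step sketch: use the model gain to size a corrective move that brings the soft-sensor prediction back to target, clamped to a safe step. The gain, limits and variable names are hypothetical; a real model predictive controller would optimise over a horizon with many constrained variables.

```python
# Hypothetical model gain: predicted wet-tensile change per kWh/t of refining.
MODEL_GAIN = 0.9
MAX_STEP = 2.0   # largest refining move (kWh/t) allowed per control cycle

def next_refining_move(predicted, target):
    """Size a corrective refining move from the model, clamped to a safe step."""
    raw = (target - predicted) / MODEL_GAIN
    return max(-MAX_STEP, min(MAX_STEP, raw))

# Soft sensor predicts 106 N/m against a 110 N/m target: the 4.4 kWh/t
# move the model asks for is clamped to the 2.0 kWh/t per-cycle limit.
move = next_refining_move(predicted=106.0, target=110.0)
```

Repeated every control cycle, across several manipulated variables and with constraints handled properly, this is the skeleton the full model predictive control solution builds on.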


In this article we have looked at how the tissue machine owner might evolve to a future data-based Industry 4.0 solution for the most efficient operation of their asset. We discussed how a modular approach, utilising appropriate sensor technology and domain expertise, can create value out of data for even a single unit operation. We saw how this could be scaled to a larger solution with the application of more sophisticated data analytics, how we could create soft sensors from this, and finally the step to model predictive control solutions. No doubt ‘big data’ has a bright future in our industry, but maybe it’s OK to start small.
