IIoT and Smart Manufacturing

What Are IIoT and Smart Manufacturing?

IIoT refers to industrial IoT, or the Industrial Internet of Things. Standard IoT describes a network of interconnected devices that send and receive data to and from each other through the internet.

IIoT and Smart Manufacturing refer to the use of connected devices for industrial applications, such as manufacturing and other industrial processes. They involve using technologies such as machine learning and real-time data to optimize industrial processes through a connected network of sensors, actuators, and software. The implementation of IIoT is referred to as Industry 4.0, or the Fourth Industrial Revolution.

Currently, most conventional industrial processes are still using Industry 3.0 practices. However, with the ongoing development and implementation of IIoT across industries, we are trending towards Industry 4.0 – with manufacturing plants being one of the major recipients of this change.

Manufacturing Plant Operational Structure

To understand the impact that Industry 4.0, IIoT, and Smart Manufacturing have on manufacturing plants, it is necessary to understand the existing structure that allows a manufacturing plant to operate.

A manufacturing plant has an operational structure of several levels; each level has a certain function and consists of equipment, software, or a mixture of both. This structure is known as the automation pyramid.

Level 0 is the field level, containing field devices and instruments such as sensors and actuators.

Level 1 is the direct control level, containing PLCs (programmable logic controllers) and HMIs (human-machine interfaces). HMIs display parameter values and allow remote control of devices through stop and start instructions, as well as set point adjustment. HMIs are connected to the PLCs, which are then connected to the field devices.

Level 2 is the supervisory control level, containing the SCADA (supervisory control and data acquisition) system. SCADA is a combination of software and hardware used for real-time data collection and processing, as well as automatic process control. It collects its data from PLCs and HMIs over communication protocols such as OPC UA and Modbus.
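As a brief illustration of how a supervisory application might read a single value from a PLC over OPC UA, here is a minimal Python sketch using the open-source python-opcua client library; the endpoint URL and node ID are hypothetical placeholders.

from opcua import Client  # python-opcua client library

# Hypothetical OPC UA endpoint exposed by a PLC or gateway
client = Client("opc.tcp://192.168.1.10:4840")
client.connect()
try:
    # Hypothetical node ID for a reactor temperature tag
    temperature_node = client.get_node("ns=2;s=Reactor01.Temperature")
    print("Reactor temperature:", temperature_node.get_value())
finally:
    client.disconnect()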

Level 3 is the planning level, containing the MES (manufacturing execution system). The MES is responsible for monitoring and recording the entire production process from raw materials to finished products.  

Level 4 is the management level, containing the ERP (enterprise resource planning) system. ERP is responsible for centralizing all of the information within the organization. It is used to manage accounting, procurement, and the supply chain, among other functions, and is focused more on the business aspect than the manufacturing aspect.

With an IIoT and Smart Manufacturing system in place, there is an additional layer: the cloud, which sits above all the other layers and implements analytics such as machine learning. The field devices are referred to as edge devices. An edge device has no physical connection to the PLC – it is instead connected through Wi-Fi. These devices communicate with the PLC over its native protocol, where all the process control is done.

Scenario 1: Optimizing Production and Quality

Conventional Manufacturing – No IIoT (Industry 3.0)

During production, human operators observe the MES system to monitor parameters such as availability, performance, and quality – which are multiplied to give the OEE (overall equipment effectiveness). An OEE of 100% shows perfect production – the goods are manufactured as fast as possible and at the highest quality possible.
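As a quick worked example of the OEE calculation described above (the numbers are invented for illustration):

# OEE = Availability x Performance x Quality (illustrative values)
availability = 0.90   # ran for 90% of the planned production time
performance = 0.85    # produced at 85% of the ideal rate while running
quality = 0.95        # 95% of the produced goods met specification

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")   # OEE = 72.7%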

If one of the parameters is low, such as the performance (production speed), the operator can instruct the SCADA system to increase the machine speed; this will result in goods being manufactured faster – and a higher performance value.

However, while goods are being produced faster, there also tends to be more waste – so the quality will drop. The operator has to decide exactly where to set the machine speed to find a good compromise between quality and output. Finding the exact balance that maximizes profitability is a difficult task – one that is almost impossible for a human to accomplish.

Smart Manufacturing – Using IIoT (Industry 4.0)

IIoT and Smart Manufacturing enable all of the devices and systems to send and receive information to and from the same place, in real time, without human intervention. This allows machine learning to make optimal decisions regarding equipment and parameter set points, making the manufacturing process as efficient as possible.

With this system in place, humans are no longer required to make these complex decisions. Optimized decisions are made as quickly as possible, producing the conditions that yield the greatest profitability for the manufacturing plant.

Scenario 2: Equipment Maintenance

Conventional Manufacturing – No IIoT (Industry 3.0)

The primary method of maintenance is condition monitoring, also known as condition-based maintenance (CbM).

Condition-based maintenance relies on real-time parameters measured by a piece of equipment's sensors, such as temperature, pressure, speed, and vibration. Each of these parameters is given a range of acceptable values for a given piece of equipment. The parameters are actively monitored, and once a value falls outside the acceptable range, maintenance is scheduled.
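A condition-based maintenance check is essentially a set of range comparisons. The following minimal Python sketch illustrates the idea; the parameter names and limits are hypothetical:

# Hypothetical acceptable ranges for a pump (degC, bar, mm/s RMS)
acceptable_ranges = {"temperature": (10, 80), "pressure": (1.0, 6.0), "vibration": (0.0, 4.5)}

def out_of_range(readings):
    """Return the parameters whose values fall outside their acceptable range."""
    return [name for name, value in readings.items()
            if not acceptable_ranges[name][0] <= value <= acceptable_ranges[name][1]]

alarms = out_of_range({"temperature": 85.2, "pressure": 4.1, "vibration": 3.0})
if alarms:
    print("Schedule maintenance; out-of-range parameters:", alarms)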

The issue with condition-based maintenance is that a fault is detected only after a certain amount of degradation has already taken place. Depending on the rate of degradation, this may not leave enough time for maintenance to be carried out. The degradation may also have caused damage that is more costly to repair than if it had been addressed earlier. The reverse can also be true: a parameter exceeds a boundary and maintenance is performed immediately, even though there may have been a more convenient time, or the machine could have kept running for a considerable period before maintenance became necessary – leading to excessive, unnecessary costs.

Smart Manufacturing – Using IIoT (Industry 4.0)

With IIoT, the method of maintenance can evolve to predictive maintenance (PdM).

Like condition-based maintenance, predictive maintenance uses sensors to continuously monitor parameters. However, predictive maintenance also continuously collects and analyzes both historical and real-time data using statistical methods and machine learning. Because data trends are analyzed instead of absolute values, problems can be detected much earlier and a likely failure time can be estimated – allowing maintenance to be scheduled at the most convenient, effective time.
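As a highly simplified sketch of the idea (real predictive maintenance uses far richer statistics and machine learning), a linear trend fitted to recent readings can be extrapolated to estimate when an alarm limit will be crossed; the data and limit below are invented:

import numpy as np

# Hypothetical daily vibration readings (mm/s RMS) and an alarm limit
days = np.arange(30)
vibration = 2.0 + 0.04 * days + np.random.normal(0, 0.05, 30)
alarm_limit = 4.5

# Fit a linear trend and extrapolate to the alarm limit
slope, intercept = np.polyfit(days, vibration, 1)
if slope > 0:
    days_to_limit = (alarm_limit - intercept) / slope
    print(f"Estimated alarm-limit crossing around day {days_to_limit:.0f}")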

Scenario 3: Adding a New Device

Conventional Manufacturing – No IIoT (Industry 3.0)

Without IIoT, every time a new field device is installed in the plant – such as a pressure transmitter, flowmeter, or control valve – it needs to be manually wired into a PLC. Then its tag needs to be added to the PLC, HMI, OPC server, SCADA, and MES. This is a costly and time-consuming process.

Smart Manufacturing – Using IIoT (Industry 4.0)

When a new device is installed, no complex engineering is required to connect it to the cloud and the existing devices.

The edge devices, PLCs, HMIs, SCADA, MES, ERP, and machine learning all publish their tags and data into the unified namespace – a centralized data repository.

The machine learning layer continuously collects real-time data from all of the devices. It can then use this data to run algorithms and publish additional tags back into the namespace.
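Unified namespaces are commonly built on an MQTT broker. Purely as an illustration (the broker address, topic structure, and payload are hypothetical, and the paho-mqtt 1.x client API is assumed), publishing a tag into the namespace can look like this:

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)

# Publish a temperature tag into a hypothetical ISA-95-style topic hierarchy
payload = {"value": 72.4, "units": "degC", "timestamp": "2023-01-01T12:00:00Z"}
client.publish("site/area1/line3/filler/temperature", json.dumps(payload))
client.disconnect()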

Summary

In essence, IIoT and Industry 4.0 allow manufacturing plants to address many of the inefficiencies and solve a lot of the challenges that they face. The use of interconnected sensors and machines, along with free-flowing data enables smarter decisions to be made regarding all aspects of production and operations – leading to reduced downtime, faster production, higher-quality production, and increased profitability.

TQS Integration

TQS Integration is a global technology consulting and digital systems integrator. We provide you with expertise for the digitization of your systems and the digital transformation of your enterprise.

With clients across the pharmaceutical, process manufacturing, oil and gas, and food and beverage industries, we make your data work for you – so you can maximize its potential to make smarter business decisions.

Please contact us for more information.

Checking and reviewing the suite of validation documentation can be very time-consuming. TQS Integration can provide the resources needed to ensure GMP and all regulatory compliance requirements are met. Why not let us do the heavy lifting? We provide faster end-to-end quality review processes to ensure speed to value and implementation of your processes and, more importantly, to free up your capacity and resources. TQS Integration helps alleviate these pressures by placing people with the right skills, at the right time, in the right place.

Data Integrity

The TQS PI documentation set provides evidence that the PI System has been validated.

The expected system lifecycle steps include, but are not limited to:

"I want to thank you all for all the great work and all your efforts to expedite this important validation activity and everything you are doing in general. Data integrity is important to us ensuring accuracy and compliance. I really appreciate it. Not only can we put our validation work in your trusted hands, but your work has also freed up so much time for us to focus on other areas of the operation."-Top 10 Pharma Company

How we can help you

At TQS, the validation strategy has been developed to systematically test the PI System at different levels for data integrity. The Installation Qualification (IQ) covers the minimum set of verifications to assure proper installation of the software components of the PI System. The Operational Qualification (OQ) verifies the correct operation of the PI System against the user requirements and design specifications, including those around data acquisition and storage, startup and shutdown, and high availability.

Dedicated Quality Assurance Engineers monitor, review, and approve every phase of the process to ensure the implementation and design of the PI System adhere to company standards and regulations. QA will conduct and participate in every phase of the SDLC, including requirements reviews, design reviews, and test case reviews with test evidence. Having empirical evidence that the PI System works as expected ensures a successful outcome during inspections by regulatory organisations, ensuring data integrity is intact.

For information, please contact us.

TQS Pharma Batch

Businesses within the pharmaceutical and life sciences sector must continuously ensure batch quality is maintained at the highest standards. After all, quality is the most critical metric in pharmaceutical manufacturing; nothing is more important than protecting patient health. However, the impact also reaches bottom lines and profitability. The numbers speak for themselves: the cost of a single batch deviation can range from $20,000 to $1M per batch, depending on the product.

Tight control of processes, inputs, and other variables is a necessity for successful pharmaceutical manufacturing. Traditionally, there have not been effective ways of looking at historical and time-series data to investigate deviations and variability besides spending painfully tedious hours of subject matter expert (SME) time in spreadsheets. Engineers look to create process parameter profiles to serve as guides for reducing process variability and increasing yield for all future batch development—also known as the “golden profile”.

But there are two problems with this. First, creating golden batch profiles repeatedly requires many hours spent manually sifting through years of data or delayed lab results that make it difficult to optimize process inputs to control the batch yield. And second, out-of-tolerance events will still occur, regardless of applying diligence in controlling the Critical Process Parameters (CPPs) of a recipe, as measured by a group of Critical Quality Attributes (CQAs). Often, it becomes clear the number of variables and the cause-and-effect relationships connecting these two aspects are more complex than originally assumed.

Find Your “Golden Batch”—Efficiently

The data is there. But it’s time to efficiently analyze it. The method of manually extracting production data from historians and various repositories within an industrial control system and creating graphs in Excel is outdated and doesn’t solve the whole puzzle of accurately finding the relationships mentioned above. There are many limitations on how a spreadsheet can actually be applied to understand complex process variability and provide actionable insights. Leading pharmaceutical companies have made the transition to advanced analytics to find their perfect batch parameters.

Applying Advanced Analytics to Make Data-Backed Decisions

The most efficient and intuitive way to lead your team to golden batch discovery and application is through advanced analytics. Applying the technology eliminates all manual work in spreadsheets and automatically cleanses, contextualizes, aggregates, and analyzes your process data in near real time. It makes the connections that your engineers would otherwise have to make manually – freeing up their time to apply the analysis to your process parameters and production methods and see improvements in quality and performance.

Seeq, the leading provider of advanced analytics, can be scaled and applied across your entire organization, running on standard office computers and communicating directly with historians to quickly extract data and present results.

A Behind-the-Scenes Look

To visualize the application in action and for this specific business issue, assume you’re examining a production process with six CPPs connected to a single unit procedure. Using historical data from ideal batches with acceptable specifications on all CQAs, advanced analytics enables you to simply and easily graph these six variables from all the previous unit procedures. Curves representing performance from historical CPPs can then be superimposed on top of each other using identical scales to reveal new insights within the application.

It is immediately apparent whether the curves form a tight group or are spread out, showing different values at various times. Seeq can easily aggregate these curves, without the need for complex formulas or macros, to establish an ideal profile for each CPP. Engineers can replicate this procedure, resulting in an updated reference profile and boundary for every variable. In the end, this process reveals new opportunities for process optimization.
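Conceptually, the aggregation step works like the sketch below, where each column of a (hypothetical) DataFrame holds one historical batch's CPP values aligned on batch-relative time; Seeq performs this without any manual code:

import pandas as pd

# Hypothetical: one column per historical "good" batch, indexed by batch-relative time
good_batches = pd.DataFrame({"batch_A": [30.1, 35.2, 40.3, 41.0],
                             "batch_B": [29.8, 34.9, 40.1, 41.2],
                             "batch_C": [30.4, 35.6, 40.6, 40.8]},
                            index=pd.timedelta_range("0h", periods=4, freq="1h"))

# Golden profile = mean trajectory; boundaries = mean +/- 3 standard deviations
golden_profile = good_batches.mean(axis=1)
upper_bound = golden_profile + 3 * good_batches.std(axis=1)
lower_bound = golden_profile - 3 * good_batches.std(axis=1)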

In the screenshot below, Seeq's advanced analytics is used to analyze the cell culture process at an upstream biopharmaceutical manufacturer producing penicillin. The technology is used to create a model for penicillin concentration based on historical batches to find the CPPs that will produce the ideal batch. This model can then be deployed on future batches, with golden profiles for CPPs, to effectively track deviations and prevent them from occurring.

Batch Quality

In another example, a leading pharmaceutical manufacturer saved millions of dollars by gaining the ability to rapidly identify and analyze the root causes of abnormal batches via similar modeling techniques in Seeq. The team reduced the number of out-of-specification batches by adjusting process parameters during the batch and saved further through reduced waste of energy and materials.

Additionally, Bristol-Myers Squibb utilizes modern technologies, including advanced analytics, to capture the specialized knowledge needed to test the uniformity of their column packing processes. Seeq is deployed to rapidly identify the data of interest for conductivity testing to calculate asymmetry, summarize data, and plot the curves for verification by their SMEs. The entire team is empowered to operationalize their analytics by calculating a CPP and distributing it across the entire enterprise, providing reliable and fast insight into when a column was packed correctly. In turn, this prevents product losses, product quality issues, and even complete losses of a batch.

Developing and deploying an online predictive model of pharmaceutical product quality and yield can additionally aid in fault detection and enable rapid root cause analysis, helping to ensure quality standards are maintained with every batch.

Across multiple use cases, one thing is clear – advanced analytics is the future of ensuring batch quality to the highest extent in pharmaceutical and life sciences manufacturing. Combining the latest initiatives in digital transformation, machine learning, and Industry 4.0, it is the technology that empowers your engineers to their fullest potential in making data-driven decisions that tremendously improve operations.

Applying Advanced Analytics to Your Operation

Are you ready to increase your batch quality and yield by incorporating seamless golden batch development cycles and application with advanced analytics? Make sure to watch this webinar from Seeq for insight on additional ways that advanced analytics can be used to capture knowledge from all parts of the product evolution cycle—from laboratory process design and development through scale-up and commercial manufacturing.

If you’re looking to see the technology live and in action, schedule a demo of the technology here.

Striving for improved sustainability goals with advanced analytics can have multiple benefits to your operation.

In today's world, virtually no industry operates without considering its impact on the environment. It's no secret that process manufacturers have been scrutinized for contributing to greenhouse gas emissions and excessive energy consumption, so striving for sustainability can sometimes be seen as a forced hassle. In reality, however, achieving enhanced sustainability at a process manufacturing organization can actually result in better operational performance and efficiency, saving time and money for the manufacturer. Two birds, one stone.

Defining our Goals: United Nations SDGs

In recent years, the UN created a set of seventeen Sustainable Development Goals (SDGs) to reach by 2030 in an effort to universally protect the planet. Four of these pertain particularly to process manufacturing companies and how they can contribute to the global effort:

These four goals have become a guideline for the industry as a whole to craft corporate sustainability goals, and it’s evident that these have become a top priority. Besides the obligations that require companies to invest in sustainable processes, it’s also become known that changing operations to better align with these goals results in many other positive impacts.

Sustainability Doesn’t Happen in Spreadsheets

There are many challenges that process manufacturing teams face today in terms of meeting these sustainability goals. The first stems from the lack of tangible and actionable direction provided to team members at the plant, with broad guidelines set at a corporate level. Subject matter experts (SMEs) often do not have the resources within traditional data technology and methods to analyze their process data and make insight-based decisions to push their operation towards KPIs for improved sustainability.

The reality is that spreadsheets don’t provide any tools for efficiently contextualizing, cleansing, and analyzing data. Many teams spend numerous hours inside of spreadsheets trying to organize data for insight, leaving no time for actually making connections between the data that can lead to a reduction in waste, materials, or money spent.

Additionally, this method does not empower process manufacturers to make reliable predictions based on rapid and historical data. If an environmental violation happens, actions taken to correct it can only happen after it has occurred, and the opportunity to see what caused the problem can be missed.

Enter: Advanced Analytics

With advanced analytics applications, process manufacturing operations can generate compliance reports automatically, with up-to-date data from disparate sources, freeing up time to focus on environmental impact.

Beyond monitoring of data as it’s happening and opening a world of insight during incident investigation, the accessibility, presentation, and correlation of data contributes to effective predictive analytics. This can give teams insight into when unproductive downtime may occur and lead to wasted resources.

Advanced analytics can also provide SMEs with a better understanding of how process changes will affect the environment by reporting on KPIs geared towards specific sustainability measures and creating models to compare process performance and operating conditions to ideal levels.

Beyond this, advanced analytics makes it easy to share insight across an entire team—leaving the days of spreadsheet-sharing in the past. Results are error-free and accessible for the whole organization to maintain the same mindset, so regardless of level, everyone knows the company’s progress towards improved sustainability.

Applying Specific SDG Measures

Here are a few examples of ways that process manufacturers are utilizing advanced analytics to strive toward better sustainability goals and a lower impact on the environment, while also improving their operational performance.

The term "sustainability" can mean many things, such as monitoring and controlling greenhouse gas emissions, optimizing energy efficiency, implementing alternative energy sources, reducing waste, and so on. These examples highlight the flexibility of Seeq: wherever you have environmental process data and would like to optimize your environmental performance, Seeq can be used.

SDG 6: Clean Water and Sanitation

Operations can avoid over-cleaning in clean-in-place (CIP) processes where sanitation materials can be unnecessarily used.

SDG 7: Affordable and Clean Energy

Process manufacturers are currently using advanced analytics to develop energy models and decrease total energy consumption, with minimal required capital expense.

SDG 12: Responsible Consumption and Production

Mass balance equations can be run continuously to track historical changes, providing an opportunity to find points where material is wasted.

SDG 13: Climate Action

Many organizations are increasing generation of renewables and adopting smart grid technologies to mitigate carbon emissions through advanced analytics. Aggregation of methane emissions from various data sources through the use of advanced analytics can identify or predict places where methane is leaked, down to detailed micro-levels within the operation.

A Sustainable Future

It's simple: investing in a sustainability goals strategy is good for business. The efficient use of raw materials, reduced waste, and lower energy consumption directly benefit both the environment and your bottom line. In addition, sustainable practices such as these can boost your reputation above the competitors in your industry. See how advanced analytics can work for your operation today.

Advanced data analytics is empowering process manufacturing teams across all verticals.

Enhanced accessibility to operational and equipment data has spurred a transformation in the process manufacturing industry. Engineers can now see both historical and time-series data from their operation as it's happening, and at remote locations, so entire teams can stay up to speed continuously and reliably. The only problem? Many teams find they are "DRIP" – data rich, information poor.

With tremendous amounts of data, a lack of proper organization, cleansing, and contextualizing only puts process engineers at a standstill. Some chemical environments have 20,000 to 70,000 signals (or sensors), oil refineries can have 100,000, and enterprise sensor data signals can reach millions.

These amounts of data can be overwhelming, but tactfully refining them can lead to greatly advantageous insights. Much of SMEs' and process engineers' valuable time is spent sorting through spreadsheets trying to wrangle the data, rather than visualizing and analyzing the patterns and models that lead to effective insight. With advanced analytics, process manufacturers can easily see all up-to-date data from disparate sources and make decisions based on the analysis to immediately improve operations.

Moving Up from “Data Janitors”

Moving data from "raw" to ready for analysis should not take up the majority of your subject matter experts' time. Some organizations still report that over 70 percent of the time they spend on operational analytics is dedicated solely to cleansing their data.

But your team members are not "data janitors." Today's technology can take care of the monotonous and very time-consuming tasks of accessing, cleansing, and contextualizing data so your team can move straight to benefitting from the insights.

The Difference Between Spreadsheets and Advanced Analytics

For an entire generation, spreadsheets have been the method of choice for analyzing data in the process manufacturing industry. At the moment of analysis, the tool in use needs to enable user input to define critical time periods of interest and relevant context. Spreadsheets have been the way of putting the user in control of data investigation while offering a familiar, albeit cumbersome, path of analysis.

But the downfalls of spreadsheets have become increasingly apparent:

All of these pain points combine to make it difficult to reconcile and analyze data in the broader business context required by the profitability and efficiency use cases that improve operational performance.

With advanced analytics putting the experts in process manufacturing operations on the front lines of configuring data analytics, improvements to yield, quality, availability, and the bottom line are readily available.

How It’s Done

Advanced analytics leverages innovations in big data, machine learning, and web technologies to integrate and connect to all process manufacturing data sources and drive business improvement. Some of the capabilities include:

The Impact of Advanced Analytics

Simply put, advanced analytics gives you the whole picture. It draws the relationships and correlations between specific data that need to be made in order to improve performance based on accurate and reliable insight. Seeq's advanced analytics solution is specifically designed for process manufacturing data and has been empowering leading manufacturers, saving them time and money from the moment of implementation. Learn more about the application and how it eliminates the need for spreadsheet exhaustion here.

Machine Learning (ML) has seen exponential growth over the last five years, and many analytical platforms have adopted ML technologies to provide packaged solutions to their users. So, why has Machine Learning become mainstream?

Let's take a look at Multivariate Analysis (MVA). While its algorithms have been widely available for a long time, MVA is technically still considered a subset of ML. MVA typically refers to two algorithms:

As such, MVA has become a de facto standard in batch manufacturing and other processes. Some typical use cases are:

In principle, industrial datasets are not different from other supervised or unsupervised learning problems, and they can be evaluated using a wide range of algorithms. Multivariate Analysis has been preferred because it offers global and local explainability. MVA models are multivariate extensions of the well-understood linear regression that provide weights (slopes) for each variable. This enables critical understanding and optimization of the underlying process dynamics, which is a very important aspect in manufacturing.

New Changes in Industrial Machine Learning

In the past, many ML algorithms were considered black box models, because the inner mechanics of the model were not transparent to the user. These model types had limited utility in manufacturing since they could not answer the WHY and therefore lacked credibility.

This has very much changed. Today, model explainers in ML are a very active field of research and excellent libraries have become available to analyze the underlying model mechanics of highly complex architectures.

The following shows an example of applying ML technologies to a typical MVA project type. In the original publication (https://journals.sagepub.com/doi/10.1366/0003702021955358), several preprocessing steps were studied together with PLS to build a predictive model. All steps were performed manually using commercial off-the-shelf software.

Using ML pipelines, the same study can be structured as follows:

# SNV, MSC and SavitzkyGolay are custom chemometric preprocessing transformers
# (defined elsewhere in the original study); the remaining imports are standard.
import numpy as np
import xgboost as xgb
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression

# Two-step pipeline; both steps are filled in by the grid search below.
pipeline = Pipeline(steps=[('preprocess', None), ('regression', None)])
preprocessing_options = [{'preprocess': (SNV(),)},
                         {'preprocess': (MSC(),)},
                         {'preprocess': (SavitzkyGolay(9, 2, 1),)},
                         {'preprocess': (make_pipeline(SNV(), SavitzkyGolay(9, 2, 1)),)}]

regression_options = [{'regression': (PLSRegression(),), 'regression__n_components': np.arange(1, 10)},
                      {'regression': (LinearRegression(),)},
                      {'regression': (xgb.XGBRegressor(objective="reg:squarederror", random_state=42),)}]
# Every combination of a preprocessing step and a regression model becomes one grid entry.
param_grid = []
for preprocess in preprocessing_options:
    for regression in regression_options:
        param_grid.append({**preprocess, **regression})
# The scoring metric and 10-fold cross validation were not shown in the original; assumed here.
kf_10 = KFold(n_splits=10, shuffle=True, random_state=42)
score = 'explained_variance'
search = GridSearchCV(pipeline, param_grid=param_grid, scoring=score, n_jobs=2, cv=kf_10, refit=False)

This small code example tests every combination of preprocessing and regression steps, then automatically selects the best model. [A combination of SNV (Standard Normal Variate), 1st derivative and XGBoost showed the highest cross-validated explained variance of 0.958.]
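A hypothetical usage of the resulting search object, assuming X holds the preprocessed spectra and y the reference values from the study:

search.fit(X, y)
print(search.best_params_)   # winning preprocessing/regression combination
print(search.best_score_)    # cross-validated score of that combination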

The transformed spectra and the model weights can be overlaid to provide insights into the model mechanics:

Conclusion

Multivariate Analysis (MVA) has been successfully applied in manufacturing and is here to stay. But there is no doubt that Machine Learning (ML) data engineering concepts will be widely applied to this domain as well. Pipelines and autotuning libraries will ultimately replace the manual work of selecting data transformations, models, and hyperparameters. New ML algorithms and deep learners, in combination with local and global explainers, will expand manufacturing intelligence and provide key insights into process dynamics.

Special Thanks

Thanks to Dr. Salvador Garcia-Munoz for providing code examples and data sets.

For more information, please contact us.

Detailed equipment & batch data models set up by pharmaceutical and biotech companies have enabled the creation of equipment centric machine learning (ML) models for example, batch evolution monitoring. The next step is to extend the existing equipment centric models and create process or end-to-end models.

The challenge is that the current data models do not fully support the extension:

·        Equipment models are based on the ISA-95 structure and reflect only the physical layout of the manufacturing facilities.

·        Batch Execution Systems (BES) are integrated using ISA-88 and entail only equipment that is controlled by the batch execution system. Often BES systems are set up to execute single unit procedures and subsequent processing steps are executed separately.

·        Manufacturing Execution Systems (MES) typically map the entire process and material flow but, as Level 3+ systems, are difficult to integrate into a data modelling pipeline.

·        There are also facilities that use paper-based process tracking instead of MES/BES, which makes traceability even more challenging.

Batch-to-Batch traceability can quickly become very complex especially when many different assets are involved. The following shows an example of a reactor train in a biotech facility:

It shows all the different product pathways from reactor ‘01’ to the final processing step, as an example in red: 01, 11, 22, 33, 44. At any moment in time, the other reactors are either being cleaned or used for a parallel process.

Such a process is difficult to model in a BES or MES system, and real-time visibility or historical analysis is very challenging. This is especially true if subsequent processing steps are to be included (chromatography, fill and finish, etc.).

The missing link needed to model the different pathways is to integrate each transfer between reactors or pieces of equipment. OSIsoft AF offers the AF Transfer model, which is fully integrated into the AF system. An AF Transfer event can be defined with the following out-of-the-box properties:

·        Source Equipment

·        Destination Equipment

·        Start Time

·        End Time

The AF Transfer model has many of the same features that AF Event Frames offer. Transfers can be templated and, through the in- and outflow ports, defined at different granularities.

Once the transfer between equipment has been defined, batches can be traced back in real time with or without using the batch id. This is possible through the equipment and time context of the transfer model:

In this case, starting from the end reactor '44', all previous steps can be retraced by going backwards in time and using the source-destination equipment relationships.

The implementation requires a data reference to configure each transfer. The configuration user interface requires the following attributes:

·        Destination Element: Attribute of the destination Element

·        Name: Name of the transfer

·        Optional: Description, Batch Id and Total

The result is that transfer logs can be matched to the corresponding unit procedures by time and equipment context, as shown below:

As shown in this example, the end time of transfer log 'Transfer Id S7MZUDGK' matches the start time of unit procedure: "Batch Id WNJ6H99R". The entire pathway can now be reconstructed in one query.
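The backward-tracing logic itself is straightforward once the transfers are modelled. The following is an illustrative, simplified Python sketch of that query using plain in-memory records rather than the AF SDK; all identifiers and timestamps are hypothetical:

# Hypothetical transfer log entries: (source, destination, start_time, end_time)
transfers = [("01", "11", 0, 2), ("11", "22", 5, 6), ("22", "33", 9, 10), ("33", "44", 13, 14)]

def trace_back(end_equipment, end_time):
    """Walk backwards through the transfer log using source-destination and time context."""
    pathway = [end_equipment]
    current, time = end_equipment, end_time
    while True:
        inbound = [t for t in transfers if t[1] == current and t[3] <= time]
        if not inbound:
            return list(reversed(pathway))
        source, _, start, _ = max(inbound, key=lambda t: t[3])  # most recent inbound transfer
        pathway.append(source)
        current, time = source, start

print(trace_back("44", 15))   # ['01', '11', '22', '33', '44']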

Conclusion

The sequence of discrete processing events such as unit procedures can be modelled using the OSIsoft AF Transfer class. The resulting transfer logs allow retracing the process backwards in time by using the source-destination relationship of the transfer model. Modelling the process flow is key to expanding equipment centric ML models.

Please contact us for more information.

Digital Twin

Have you ever wondered if it were possible to predict process conditions in manufacturing? To know what is likely to happen before it actually happens in your business processes? A Digital Twin might just be your answer.

Benefits:

There are several different definitions of Digital Twins or Clones, and many use them interchangeably with terms such as Industry 4.0 or the Industrial Internet of Things (IIoT). Fundamentally, Digital Twins are digital representations of a physical asset, process, or product, and they behave similarly to the object they represent. The concept of Digital Clones has been around for some time. Earlier models were based on engineering principles and approximations; however, they required very deep domain expertise, were time-consuming, and were limited to a few use cases.

Today, Digital Clones are virtual models built entirely from massive historical datasets, using Machine Learning (ML) to extract the underlying dynamics. The data-driven approach makes Digital Clones accessible for a wide range of applications. The potential for Digital Twins is therefore enormous and includes process enhancement and optimization, equipment lifecycle management, energy reduction, and safety improvements, to name a few.

Building Digital Clones requires:

1.      A large historical data set or data historian

2.      High data quality and sufficient data granularity

3.      Very fast data access

4.      A large GPU for the model development and real time predictions

5.      A supporting data structure to manage the development, deployment, and maintenance of ML models

The following shows the application of a Digital Twin to a batch process example. The model is built with 30-second interpolated data, using a window of past data to predict future (5 min) data points:
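A minimal sketch of that windowing approach, using a plain linear model for brevity (the Digital Twin described here is built with deep learning, and the signal below merely stands in for a 30-second interpolated historian tag):

import numpy as np
from sklearn.linear_model import Ridge

# Stand-in for a 30-second interpolated process signal from the historian
signal = np.sin(np.linspace(0, 20, 1000)) + np.random.normal(0, 0.05, 1000)
window, horizon = 40, 10   # 40 past samples (20 min) predict the point 10 samples (5 min) ahead

# Build (past-window, future-value) training pairs
X = np.array([signal[i:i + window] for i in range(len(signal) - window - horizon)])
y = np.array([signal[i + window + horizon - 1] for i in range(len(signal) - window - horizon)])

model = Ridge().fit(X, y)
prediction = model.predict(signal[-window:].reshape(1, -1))   # forecast 5 minutes ahead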

So, what's all the hype about Digital Clones? Well, not only are they able to predict process conditions, they also provide explanatory power about what drives the process – the underlying dynamics. The following dashboard shows a replay of this analysis, including the estimate of the model weights:

Conclusion

In summary, the availability of enterprise-level data historians and deep learning libraries allows Digital Clones to be implemented at the equipment and process level throughout manufacturing. The technology enables a wide range of applications and offers insight into process dynamics that was not previously available, improving data integrity and data access while building trust and data transparency with your partners. This helps to digitalize data management and processes, lowering risk and improving efficient data sharing.

Please contact us for more information.

Multivariate Analysis (MVA) is a well-established technique for analyzing highly correlated process variables. It is best known in batch processing but has also been successfully applied in discrete and continuous processing. In comparison to single-variable applications, for example statistical process control, MVA has been shown to be superior at detecting process drifts and upsets. In practice, the implementation of MVA requires two different data structures or models:

Event Frames are usually autogenerated from the batch execution system (BES) and reflect the logical/automation sequences for recipe execution. Both AF Elements and Event Frames are used to create MVA models and calculate statistics. Below is an example of a multivariate model that combines the autogenerated Event Frame "Unit Procedure" and process variables in the Element "Bio Reactor 0":

This type of analysis is typically used for batch-to-batch comparison (T2 and speX statistics) and batch evolution monitoring in the pharmaceutical, biotech, and chemical industries.

Challenge

One of the shortcomings of using automation phases is that they seldom line up with the time frames that are critical for the underlying process evolution (process phases). Often there is a mismatch in granularity: process phases are either longer or shorter in duration than the automation phases. Also, start and end times might be based on specific process conditions, for example temperature, batch maturity, or online measurements. The mismatch between automation and process phases causes misalignment in the MVA model and a broadening of the process control envelopes. The resulting models are often not optimal.

Solution

Seeq has developed a platform that excels at creating time series segments as well as cleansing and conditioning time series data. The platform provides several different approaches to defining very precise start and end conditions. The following shows the definition of a new capsule based on a profile search that focuses solely on the process peak temperature:

These capsules can be utilized in other applications through an API and blended with other PI data models to create very precise multivariate models:

Benefits

Multivariate Analysis is a powerful method for analyzing highly correlated process data. It depends on equipment/process models and time series segments. OSIsoft PI provides data models for both, and typically the time segments are automatically populated from BES or MES systems. Seeq provides new capabilities to create highly precise time segments, called capsules, that refine the MVA analysis and create meaningful process envelopes. The integration is seamless, since both systems provide powerful APIs to their time series data and models. The resulting MVA models target specific process phases and can be used to create improved process control limits or regression analyses.

Please contact us for more information.

Integrate PAT Data

Most biotech and pharmaceutical companies are adopting Process Analytical Technology (PAT) in manufacturing to provide real-time operational insights that allow better control and lead to higher yields, higher purity, and/or shorter cycle times.

However, integrating PAT data into an existing data infrastructure has been challenging, because each PAT measurement – even a single one – is a spectrum consisting of a list or array of values (e.g., time, channel, value) that cannot be stored in a classical data historian.

This graph shows you several spectra forming a multi-dimensional time series:

Spectra are often stored in SQL type databases as plain tables, separate from the other manufacturing data stored in the historian. The main problem with this approach is the loss of equipment and batch context. This is problematic to any subsequent analysis.

So why are spectra stored separately? Because most industrial data historians store values as simple time series in a handful of data types, typically bool, int, float, and string. Each time series point is a tuple of a timestamp and a single value (a scalar). For PAT and other use cases, the existing data shapes would need to be extended to accommodate vectors, matrices, and tensors:

There are many use cases for these data structures:

In the OSIsoft Asset Framework, extending the Historian is accomplished by deploying a new source and linking it to a time series database that supports time-based vectors, matrices, and tensors:

The Raman spectra are attributes on the unit/vessel or located on the Raman equipment. Therefore, extending the existing OSIsoft AF data model allows the measurements to be analyzed in the current batch or event frame context.

The following shows PAT spectral data (Raman) for a running batch:

TQS developed an add-in for OSIsoft PI Vision to display raw PAT spectra and also perform peak height calculations on them. This enables process monitoring of sensor data (temperature, pressure, pH, etc.) and PAT data side by side. This unique capability will lead to a tighter integration of PAT spectra in the manufacturing environment and the ability to easily include PAT spectra in multivariate analytics.
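As a simplified, hypothetical illustration of a baseline-corrected peak-height calculation on a single spectrum (the PI Vision add-in's actual implementation is not shown here):

import numpy as np

def peak_height(wavenumbers, intensities, band=(990, 1020)):
    """Height of the highest point in a band above a straight baseline drawn between the band edges."""
    mask = (wavenumbers >= band[0]) & (wavenumbers <= band[1])
    x, y = wavenumbers[mask], intensities[mask]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])   # linear baseline across the band
    return float(np.max(y - baseline))

# Hypothetical spectrum: a Gaussian peak near 1005 cm-1 on a sloping background
wn = np.linspace(900, 1100, 500)
spectrum = 0.002 * wn + np.exp(-((wn - 1005) ** 2) / 30)
print(peak_height(wn, spectrum))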

Watch the video on Combining Spectra & Process Data in OSIsoft PI here.

Conclusion

Classical historians have been developed for scalar time series information. This has worked for most sensor data types, but they cannot accommodate higher-dimensional time series information. The solution is to extend the existing historian with databases that allow a more flexible schema. This results in better utilization of the existing equipment and batch context, enabling context-specific analysis.

Please contact us for more information.
