Sponsored content brought to you by Securecell

Ensuring data integrity—The first step of a long journey

Due to regulatory constraints, and a certain inherent conservatism, the biopharmaceutical industry has lagged behind other manufacturing sectors in process automation and digitalization. Many companies in the field admit that their primary digitalization efforts are still devoted to collecting and storing data, rather than using them in a “smart” way to improve productivity or optimize quality. Nevertheless, the promising potential of IoT and Industry 4.0 workflows has long made them a pivotal topic, even in smaller biotech R&D departments, research laboratories, and startup companies. For the most part, this potential is yet to be exploited. From the early development stages on, process engineers today want to take advantage of automated and “integrated” methods, such as model-predictive control, soft sensors, ANNs, or digital twins, to name just a few.1,2 The path to robust, operational methods with real added value for the operator, however, can be rugged, especially in dynamic environments such as R&D departments, where processing is complicated by frequent setup changes and a lower degree of standardization.

An essential requirement for implementing such concepts is to attain full data integrity, ideally across all relevant devices, sensors, analyzers, and other data sources in the respective labs or production units.3 The ALCOA+ principles, originally defined by the FDA and endorsed by other regulatory agencies, provide operators with guidelines on what to consider when generating data that can be used effectively and reliably. Or, in other words, how to ensure data integrity: data need to be attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available. Hence, the initial task set for state-of-the-art laboratory networks, data management, and process control systems is quite clearly defined.
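In practice, each of these attributes maps to concrete metadata that travels with every measurement. As a minimal, hypothetical sketch in Python (field names and device identifiers are illustrative, not the schema of any particular system), a raw data point might be stored as an immutable record along these lines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)  # frozen: the stored record cannot be altered afterwards ("original", "enduring")
class MeasurementRecord:
    value: float       # accurate: the raw reading exactly as delivered by the device
    unit: str
    parameter: str     # e.g., "dissolved_oxygen"
    device_id: str     # attributable: which sensor or analyzer produced the value
    operator_id: str   # attributable: which user or service acquired it
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
                       # contemporaneous: recorded at acquisition time, in UTC

    def checksum(self) -> str:
        """Hash over the record contents, useful for later consistency and integrity checks."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

rec = MeasurementRecord(value=42.1, unit="%", parameter="dissolved_oxygen",
                        device_id="DO-probe-R1", operator_id="bioreactor-service")
print(rec.timestamp, rec.checksum())
```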

Flexibility is key

Experience shows that the first major hurdle to achieving this goal in practice is to have the digital infrastructure keep pace with the complexity and inherent dynamics of modern bioprocessing setups. On the one hand, acquired data volumes are still increasing rapidly, owing to an ever-extending portfolio of online sensors and other PAT devices in use, and a trend toward process layouts for continuous operation with longer step-chains to be monitored. On the other hand, the paradigm of agile manufacturing, with smaller possible culture volumes in efficiency-optimized bioprocesses, and with flexible, on-demand use of culturing equipment and associated infrastructure, comes with the need for frequent adaptation and reconfiguration of the technical setup. A bioreactor in a development framework may be used in week 1 in perfusion mode, in week 2 for a fed-batch culture with in-line biomass monitoring, and in week 3 in an automated PAT platform for real-time substrate monitoring. In all cases, however, a seamless data flow with full data integrity should be ensured by the overarching control system.4

Most suppliers of cultivation systems, bioprocess analyzers, and peripheral equipment have by now switched from a closed-fleet strategy to an open strategy with variable interfacing options to third-party devices. This again places high demands on the flexibility of the central data management system, as the ease of implementation is a key determinant of the attainable size and functionality of the integrated system. The importance of platform-independent standards for device integration and communication (e.g., REST, OPC-UA) cannot be overemphasized, and a user-friendly software design that does not require detailed programming skills for routine configuration tasks is critical.4
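To illustrate what such platform-independent interfacing can look like, the following sketch reads a single process value from a hypothetical OPC-UA server using the open-source asyncua package; the endpoint URL and node ID are illustrative placeholders, not a reference to any specific vendor’s namespace:

```python
import asyncio
from asyncua import Client  # open-source OPC-UA client library (pip install asyncua)

# Hypothetical endpoint and node ID for a dissolved-oxygen reading on a bioreactor controller
SERVER_URL = "opc.tcp://bioreactor-01.local:4840"
NODE_ID = "ns=2;s=Reactor1.DO.Value"

async def read_process_value(url: str, node_id: str) -> float:
    # Connect, read one value, and disconnect; a production client would subscribe instead of polling
    async with Client(url=url) as client:
        node = client.get_node(node_id)
        return await node.read_value()

if __name__ == "__main__":
    do_value = asyncio.run(read_process_value(SERVER_URL, NODE_ID))
    print(f"Dissolved oxygen: {do_value:.1f} %")
```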

Implementing advanced data exploitation routines

From a process engineer’s perspective, the most interesting part of the journey begins only after an integrated system for data storage and processing has been established. “Advanced” control routines can be as diverse as the underlying data sources, and they often need to be specifically tailored to the application or process of interest. Once the raw data have been acquired, subsequent processing steps are usually performed in the overarching control system, such as time-stamp alignment, plausibility checks, and the extraction—sometimes computationally complex—of the relevant process status information needed to direct control functions or trigger events. A capable software platform should therefore offer flexible options for processing data from multiple input sources, perform basic or advanced pretreatment tasks such as interpolation and extrapolation, and ideally offer a gateway to dedicated programming tools for implementing more elaborate functionalities.2 An important aspect to consider is the real-time capability of coupled data platforms, e.g., how the result of a model-based parameter estimation in Python can be fed back into the process control system.
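As a minimal sketch of such pretreatment, assuming two hypothetical sensor streams logged at different, irregular rates (column names, limits, and values are invented for illustration), time-stamp alignment, interpolation, and a simple plausibility check could look like this:

```python
import numpy as np
import pandas as pd

# Two hypothetical raw streams with different, irregular sampling intervals
do = pd.DataFrame({"t": pd.to_datetime(["2024-05-01 10:00:05", "2024-05-01 10:00:35",
                                         "2024-05-01 10:01:10"]),
                   "do_pct": [41.8, 42.3, 120.0]})  # last value is physically implausible
biomass = pd.DataFrame({"t": pd.to_datetime(["2024-05-01 10:00:00", "2024-05-01 10:01:00"]),
                        "biomass_gl": [3.2, 3.4]})

# Plausibility check: mask values outside the sensor's physical range before further use
do.loc[~do["do_pct"].between(0, 100), "do_pct"] = np.nan

# Align both streams to a common 10-second grid and interpolate in time
grid = (pd.concat([do.set_index("t"), biomass.set_index("t")], axis=1)
          .resample("10s").mean()
          .interpolate(method="time"))

print(grid)
# The aligned frame can now feed soft sensors or control routines, and results can be
# written back to the process control system, e.g., via its REST or OPC-UA interface.
```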

Looking beyond horizons—Holistic methods spanning multiple unit operations

In the most classic sense, advanced data processing/exploitation/control concepts focus on one specific unit operation, most often in upstream bioprocessing. Typical examples include feed-on-demand strategies, where a feed pump is controlled based on one or multiple real-time data streams from online sensors, and automated event triggering, e.g., at phase transitions. In the future, we will likely see more data exploitation routines spanning the borders of individual unit operations. The increasing ease of working with data-intense modeling and statistical approaches, supported by dedicated software tools in off-the-shelf, ready-to-use configuration, has greatly facilitated the implementation of advanced data concepts, e.g., correlation-based information mining and control.
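To make the feed-on-demand idea concrete, the sketch below derives a pump setpoint from a real-time substrate estimate; the setpoint, gain, and the read/write helper functions are hypothetical placeholders for whatever interfaces the control system actually exposes:

```python
SUBSTRATE_SETPOINT = 2.0   # g/L, hypothetical target for residual substrate concentration
MAX_FEED_RATE = 30.0       # mL/h, pump limit
GAIN = 15.0                # mL/h per g/L of deficit; would be tuned for the actual process

def read_substrate_estimate() -> float:
    """Placeholder: in a real system this would query a soft sensor or at-line analyzer."""
    return 1.4  # dummy value for demonstration

def write_feed_rate(rate_ml_h: float) -> None:
    """Placeholder: in a real system this would send the setpoint to the feed pump."""
    print(f"New feed rate setpoint: {rate_ml_h:.1f} mL/h")

def feed_on_demand_step() -> float:
    """One control interval: raise the feed rate when substrate falls below the setpoint."""
    substrate = read_substrate_estimate()
    deficit = max(0.0, SUBSTRATE_SETPOINT - substrate)
    rate = min(MAX_FEED_RATE, GAIN * deficit)  # simple proportional rule, clamped to the pump limit
    write_feed_rate(rate)
    return rate

# The control layer would call feed_on_demand_step() at a fixed interval while the culture runs.
feed_on_demand_step()
```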

Since many bioprocess development projects have to start with relatively little prior knowledge of influencing factors, a holistic view of the entire process with mining for interdependencies can prove highly useful.3 Often, the key parameters for good process performance or adequate product quality can be revealed only by considering—in parallel—the impact of raw material, media preparation, seed train, bioprocess course, downstream purification, and final quality analysis procedures. For example, the time-dependent degradation of a specific media component may become relevant only if the cell concentration in preculture flasks is slightly higher, or interim storage time slightly longer, with the final impact being visible only in extended upstream lag phases and, ultimately, lower product quality. Cross-unit control operations in end-to-end integrated systems will likely become the future gold standard in continuous production settings, where DSP parameters, e.g., chromatography parameters, can be adjusted based on process status information from the respective upstream part, or real-time release concepts can be implemented.1
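A simple, hypothetical illustration of such cross-unit information mining: given a table of historical batches combining parameters from media preparation, seed train, and upstream with a final quality attribute, one can rank which factors correlate most strongly with the outcome (all column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical per-batch records spanning several unit operations (values are illustrative)
batches = pd.DataFrame({
    "media_storage_h":    [12, 36, 18, 48, 24, 60],                  # interim storage time of prepared media
    "preculture_density": [2.1, 2.8, 2.2, 3.0, 2.4, 3.1],            # cells/mL x 1e6 at inoculation
    "lag_phase_h":        [6, 11, 7, 13, 8, 14],                     # observed upstream lag phase
    "capture_yield_pct":  [91, 88, 90, 86, 89, 85],                  # downstream capture step yield
    "final_purity_pct":   [99.1, 97.8, 98.9, 97.2, 98.6, 96.9],      # final quality attribute
})

# Rank all parameters by the strength of their (linear) correlation with final purity
ranking = (batches.corr(numeric_only=True)["final_purity_pct"]
                  .drop("final_purity_pct")
                  .abs()
                  .sort_values(ascending=False))
print(ranking)
# Correlation is only a screening tool: flagged factors (e.g., media storage time) would
# then be confirmed in designed experiments or mechanistic models.
```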

Up to now, such strategies have mainly been reserved for larger-scale, rather static processing chains with standardized protocols for data acquisition. With the advent of fully integrated system architectures in R&D, it will become possible to apply cross-unit information mining and control routines much earlier, already during development, with a significant impact on accelerating optimization pipelines.6

The added value of automated workflows—Smart ways to informative data

When performing holistic bioprocess analysis, we may reach the conclusion that the data available to us are simply not sufficient to reveal the critical factors or gain control over them in a timely way. One option, then, is to modify our monitoring strategy by increasing sampling frequency or adding further analysis methods or sensors. It is often hard to decide upfront on the value of an enlarged data pool, and it is clear that a simple “the more, the better” principle will not apply. It will take some more time until the decision on what to measure, when, and with which accuracy can be handed over to intelligent control systems.

The added value of more data has to be seen in relation to the effort expended in their acquisition. This is why further automation in monitoring and analytics can act as a catalyst for improving process performance. The fundamentals of PAT call for critical parameters to be measured in a timely fashion, leaving enough room for corrections. Bioprocessing still makes ample use of manual analysis workflows, from sample storage to measurements in stand-alone devices. Even if an analytical device is embedded in an integrated data network, it is of limited use for PAT if timely—automated—sample transfer and processing is lacking. Many operators, especially in R&D facilities, know the problem of “night” or “weekend” gaps in process data.

Despite the advancements in sensor technology, including soft sensor concepts, some key variables in growing cultures will always remain inaccessible to a fully sensor-based assessment.7 Automation of liquid sampling deserves more attention as an important enabling technology in bioprocessing, but it entails challenges that are not as easy to tackle as they might appear at first glance.8 These include hygienic issues (maintaining the sterility barrier), a broad range of sample consistencies (from water-like to thick biomass slurry), volume restrictions in smaller cultivation systems, and subsequent processing steps that must happen immediately, e.g., separation of biomass and supernatant.

i2BPLab—Integrated development frameworks for streamlined bioprocess R&D

An ongoing, interdisciplinary research collaboration project of an industrial/academic consortium at the Zurich University of Applied Sciences, supported by the Innosuisse—Swiss Innovation Agency, has resulted in a dedicated lab infrastructure (i2BPLab). The backbone of the IoT network, as well as the overarching system for data acquisition, management, and processing routines, is nested in the LUCULLUS® suite of Securecell AG (Urdorf, Switzerland).

A special focus of the i2BPLab ecosystem is the implementation of monitoring and PAT strategies. This includes automated sample handover and processing via the NUMERA® platform (Securecell AG) for at-line analysis. Biotechnological production will be among the primary manufacturing sectors to profit from the faster-than-ever advance of digital technologies, with huge potential still to be gained. Importantly, however, this potential cannot be unlocked by software tools alone; it requires a comprehensive network of integrated devices combined with smart control and data exploitation concepts. In bioprocessing R&D, the era of real IoT or Industry 4.0 is just about to begin.

Lukas Neutsch, PhD ([email protected]), is head of the Bioprocess Technology Research Group at the Institute of Chemistry and Biotechnology, Zurich University of Applied Sciences. He teaches bioprocessing and pharmaceutical technology at different academic institutions in the D-A-CH region and acts as project lead in multiple translational research projects.

To learn more about Securecell, visit our website: www.securecell.ch.
