The quality and safety of a biopharmaceutical are determined by how well its manufacture is controlled. The challenge is that the modeling methods used to develop control strategies sometimes struggle to take variability and “random effects” into account.
Modeling is vital for the development of control strategies in biopharmaceutical manufacturing. The basic idea is to compare real-time production processes against a model representing the same operation running under optimal conditions. Any deviation from the optimized process is detected, allowing for adjustment or correction, either manual or automatic.
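To illustrate the idea, here is a minimal sketch in Python (all names, values, and limits are hypothetical, not from the article): an observed reading is compared against the model’s prediction, and anything outside the control limits is flagged for correction.

```python
# Hypothetical sketch: flag a real-time reading that deviates from the
# model's prediction by more than k residual standard deviations.
def check_deviation(observed, predicted, resid_sd, k=3.0):
    """Return True if the observation deviates beyond the control limit."""
    return abs(observed - predicted) > k * resid_sd

# Example: the model of the optimal process predicts a titer of 5.2 g/L
# with a residual SD of 0.1 g/L; a reading of 5.65 g/L is flagged.
print(check_deviation(observed=5.65, predicted=5.2, resid_sd=0.1))  # True
```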
The problem for process modelers is that biopharmaceutical production operations are complex and subject to variability, according to Thomas Oberleitner, PhD, a researcher at Chase, a digitized industrial development organization in Vienna, Austria.
“Different raw materials, different seed trains, different operators—all of those introduce random and uncontrollable variance. In practice, sometimes different effects get pooled together as, for example, ‘week-to-week’ batch variability, because it is not known what exactly changed. Usually this is good enough to get an estimate of the variance attributed to random,” says Oberleitner.
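A minimal sketch of that pooling idea, using simulated numbers (nothing here comes from a real process): in a balanced one-way random-effects layout, the variance attributed to the pooled “week-to-week” effect can be estimated by the method of moments.

```python
import numpy as np

# Simulated, balanced "week-to-week" layout: 8 weeks, 5 batches per week.
rng = np.random.default_rng(0)
weeks, n_per_week = 8, 5
week_effect = rng.normal(0.0, 0.30, size=(weeks, 1))  # random week effect
y = 10.0 + week_effect + rng.normal(0.0, 0.10, size=(weeks, n_per_week))

msb = n_per_week * y.mean(axis=1).var(ddof=1)  # mean square between weeks
msw = y.var(axis=1, ddof=1).mean()             # mean square within weeks
var_week = max((msb - msw) / n_per_week, 0.0)  # variance attributed to "week"
print(f"week-to-week variance ~ {var_week:.3f}, residual variance ~ {msw:.3f}")
```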
But sometimes pooling random effects does not produce accurate models, which has serious implications for manufacturers, Oberleitner adds.
“Random effects can affect whether the predicted, modeled distribution of critical quality attributes [CQAs] falls within specification limits or not, for example, whether a batch is safe or effective to use.
“Generally, modeling an effect as fixed or random does not change the predicted mean of the model. However, it can significantly increase the uncertainty around that prediction, shifting the predicted CQA distribution over the specification limits.”
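A small numerical sketch makes this concrete (the numbers are invented for illustration): with an identical predicted mean, folding the batch variance into the prediction widens the CQA distribution and increases the probability of falling outside an upper specification limit.

```python
from scipy.stats import norm

mean, spec_upper = 98.0, 100.0   # predicted CQA mean and upper spec limit
resid_sd, batch_sd = 0.6, 0.5    # residual SD and batch-to-batch SD

sd_fixed = resid_sd                              # batch effect treated as fixed
sd_random = (resid_sd**2 + batch_sd**2) ** 0.5   # batch effect treated as random

for label, sd in [("fixed", sd_fixed), ("random", sd_random)]:
    p_oos = norm.sf(spec_upper, loc=mean, scale=sd)  # P(CQA > upper limit)
    print(f"{label:6s}: SD = {sd:.2f}, P(out of spec) = {p_oos:.4f}")
```

The predicted mean stays at 98 in both cases, but the out-of-specification probability grows by roughly an order of magnitude once the batch variance is included.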
Predicting random effects
When modelers encounter difficulties with random effects, it is usually for one of two reasons: either they ignore the random effects completely or they model them as fixed parameters. Both approaches are problematic in different ways, says Oberleitner.
“For the former, the variance resulting from the ignored random effect moves into the residual variance of the model, which can result in a bad model fit that can’t be trusted for predicting CQA distributions. The latter—modeling the random effect as a fixed effect—results in an underestimation of a prediction’s uncertainty.”
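Both failure modes can be demonstrated on simulated data. The sketch below (hypothetical numbers; statsmodels is used here only as one convenient implementation, not the tooling of the paper) fits a pooled model that ignores the batch effect, a fixed-effect model, and a linear mixed model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated CQA driven by one process parameter x, a random batch
# effect (SD 0.5), and residual noise (SD 0.2).
rng = np.random.default_rng(1)
n_batches, n_per_batch = 10, 8
batch = np.repeat(np.arange(n_batches), n_per_batch)
x = rng.uniform(0.0, 1.0, size=batch.size)
y = (2.0 + 1.5 * x + rng.normal(0, 0.5, n_batches)[batch]
     + rng.normal(0, 0.2, batch.size))
df = pd.DataFrame({"y": y, "x": x, "batch": batch})

# (1) Ignore the batch effect: its variance is absorbed by the residual.
pooled = smf.ols("y ~ x", data=df).fit()
# (2) Model batch as fixed: small residual, but a new batch's variability
#     no longer appears anywhere in the prediction's uncertainty.
fixed = smf.ols("y ~ x + C(batch)", data=df).fit()
# (3) Linear mixed model: separates batch and residual variance.
lmm = smf.mixedlm("y ~ x", df, groups=df["batch"]).fit()

print(f"pooled residual var:       {pooled.scale:.3f}")  # inflated
print(f"fixed-effect residual var: {fixed.scale:.3f}")   # optimistic on its own
print(f"LMM batch var: {lmm.cov_re.iloc[0, 0]:.3f}, residual var: {lmm.scale:.3f}")
```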
To address these challenges, Oberleitner and colleagues outlined a new approach for incorporating random effects into models in a recently published research paper.
The basic idea, he says, is to use a statistical analysis technique known as “tolerance intervals” to handle potential variability in models used for process control.
“Our proposal boils down to two things: using models that can incorporate random effects and using conservative measures of uncertainty, like tolerance intervals, that also consider the random effect,” he explains. “Linear mixed models, for example, give an estimate of both the random and residual variance. The random variance estimator can then be used in the calculation of tolerance intervals, as proposed by Francq et al. One can then use this interval to define proven acceptable ranges for all manufacturing parameters.”
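As a rough illustration of that second step (a simplified normal-theory stand-in, not the exact procedure of Francq et al., with invented variance components), the sketch below combines the random and residual variance from a mixed-model fit into a two-sided tolerance interval, using a Satterthwaite-style effective degrees of freedom and the common chi-square k-factor:

```python
import numpy as np
from scipy.stats import norm, chi2

def tolerance_interval(mean, var_random, var_resid, df_random, df_resid,
                       coverage=0.99, confidence=0.95, n_eff=10):
    """Approximate two-sided (coverage, confidence) tolerance interval
    built from random + residual variance components. n_eff is a rough
    effective sample size for the estimated mean (an assumption here)."""
    s2 = var_random + var_resid  # total predictive variance
    # Satterthwaite approximation for the df of the combined variance
    nu = s2**2 / (var_random**2 / df_random + var_resid**2 / df_resid)
    z = norm.ppf((1.0 + coverage) / 2.0)
    k = z * np.sqrt(nu * (1.0 + 1.0 / n_eff) / chi2.ppf(1.0 - confidence, nu))
    return mean - k * np.sqrt(s2), mean + k * np.sqrt(s2)

# Illustrative variance components, e.g., taken from a linear mixed model fit:
lo, hi = tolerance_interval(mean=2.75, var_random=0.25, var_resid=0.04,
                            df_random=9, df_resid=69)
print(f"99%/95% tolerance interval: [{lo:.2f}, {hi:.2f}]")
```

In this framing, a parameter setting whose tolerance interval remains inside the specification limits would lie within the proven acceptable range.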