
Full Paper

Org. Process Res. Dev., Just Accepted Manuscript • DOI: 10.1021/acs.oprd.8b00293 • Publication Date (Web): 14 Jan 2019


Development and implementation of a quality control strategy for an atropisomer impurity grounded in a risk-based probabilistic design space

Federico Lora Gonzalez*, Jose E. Tabora, Eric C. Huang, Steven R. Wisniewski, Ronald Carrasquillo-Flores, Thomas M. Razler, Brendan Mack

Chemical and Synthetic Development, Bristol-Myers Squibb Company, One Squibb Drive, New Brunswick, New Jersey 08903, United States

Keywords: Atropisomer, Quality-by-design (QbD), API-Step, Bruton's Tyrosine Kinase, Chiral Axis, Bayesian Modeling

ABSTRACT: In the development of an active pharmaceutical ingredient (API), determination of the design space used for control of critical quality attributes (CQAs) is a key component of the quality by design (QbD) framework outlined by the regulatory agencies. Herein, we propose the use of probabilistic (Bayesian) methods to drive the development of the design space and control strategy for a process. Using probabilistic methods to quantify the risk of failure for different processing options allows for informed process design and control strategy decisions, enabling robust processes. We present a case study of a complex API reaction and crystallization: first, probabilistic models are built using lab and plant data. Next, these models are used to compare different processing options in terms of reliability with respect to a CQA. A process decision is outlined based on the reliability estimates from the models, and lastly, a control strategy is proposed for the CQA at a defined reliability specification. This case study highlights the use of probabilistic modeling as a tool for efficient and robust process design in the pharmaceutical industry.

INTRODUCTION

Across the pharmaceutical industry, faster timelines, pressure to minimize costs, and increasing competitiveness in the market have resulted in a need for a risk-based approach to drug development. The quality by design (QbD) initiative by the FDA sets a framework by which process understanding, risk assessment and management, and control of critical quality attributes (CQAs) are utilized to ensure pharmaceutical product quality.1 Specifically, the QbD guidelines aim to shift the approach to achieving process control from reproducibility and product testing to fundamental process understanding and risk control. The idea of risk analysis and, more generally, risk management is frequently invoked in the QbD regulatory documents and the International Conference on Harmonization (ICH) guidance documents, but there are few specific guidelines on the quantification or estimation of risk.2 Part of the QbD strategy is the determination and verification of the design space, which is an integral part of the control strategy for CQAs. The Design Space (DS), defined by the ICH guidelines as "the multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality," has been internalized by the pharmaceutical industry and has become a key goal of late-phase development. In many instances, the determination of the design space includes mechanistic understanding and modeling, multivariate evaluations of both material inputs and process parameters, their interactions, and fundamental process understanding: knowledge gained during the development process.3 Unfortunately, the guideline does not provide a quantifiable measure of "assurance", which impairs the application of the quantitative knowledge generated around the process. Specifically, the generally applied tools for multivariate analysis and mechanistic models only consider the mean response of a system and do not explicitly take into account the well-accepted fact that the measured process outputs are subject to variability.4

Typically, the mean-inference approach does not give a good estimate of the variability, which can be significant in some processes and may therefore have a considerable impact on the level of assurance of quality.5 A design space may be defined by a mathematical relationship between the quality attributes, the input variables, and the process parameters.6 In this case, the design space is given by

DS = {x ∈ E : Y = f(x) ∈ S}    (1)

where x is the vector of the input variables and process parameters from a possible set of values, E, and Y is the vector of the quality attributes that must meet a set of acceptable values, S. However, because the unit operation or process model, f, is generally constructed to provide the expected mean value of Y given a mean value of x, using such a model to guide the limit of the design space would result in only 50% reliability at that limit. Peterson et al. expanded the concept of the design space by introducing an elegant probabilistic mathematical representation whereby the design space is defined as7

DS = {x ∈ E : P(Y ∈ S | x, data) ≥ R}    (2)

where R represents a pre-specified value that provides an adequate measure of "assurance" in the definition of a design space. The probability may further be interpreted as a failure rate or a reliability against the target limit for the process output, typically a CQA. In this context, it is necessary to estimate and quantify process variability to predict (future) process capability during process development, which in turn enables the determination of the design space and control strategy for CQAs. Bayesian methodology has been applied to generate probabilistic models of inherently variable processes.8 Bayesian methods have the advantage that they provide a direct way to generate a predictive distribution for a process, which enables straightforward construction of the design space in (2). In principle, these models may be used to estimate process capability to build a design space that incorporates variability estimates.
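Equation (2) lends itself to a simple Monte Carlo evaluation: for a candidate operating point x, draw many samples of Y from the posterior predictive distribution and check whether the fraction falling inside the acceptance set S meets the reliability threshold R. The sketch below illustrates the idea in Python/NumPy; the predictive sampler, acceptance limit, and candidate points are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_samples(x, n=10_000):
    """Hypothetical posterior predictive sampler for the CQA at operating point x.
    Stands in for draws from a fitted Bayesian process model."""
    # Illustrative only: mean rises with x, constant predictive spread.
    return rng.normal(loc=0.4 + 0.3 * x, scale=0.15, size=n)

def in_design_space(x, limit=0.8, reliability=0.90):
    """Probabilistic design-space membership per eq 2:
    P(Y <= limit | x, data) >= R, estimated by Monte Carlo."""
    y = predictive_samples(x)
    prob_pass = np.mean(y <= limit)
    return prob_pass >= reliability, prob_pass

for x in (0.2, 0.8, 1.4):
    ok, p = in_design_space(x)
    print(f"x = {x:.1f}: P(pass) = {p:.3f}, inside design space: {ok}")
```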


Instead of generating a multi-dimensional response surface for the mean expected value of the CQA, the response surface is generated for the failure rate at a particular limit value of the CQA. Depending on the process, the reliability surface within the design space may depend on different aspects of the response. For example, consider a process with low variability of the response but high sensitivity to input factors (where the mean of Y varies significantly across the parameter space x). In this case, modeling the mean response is adequate to propose a practical design space, incorporating a sufficient margin near the design space edges. This is equivalent to assuming a constant variability that accounts for the anticipated error. Without incorporating a margin, the failure rate where the mean response is equal to the limit is still 50%. In contrast, for a process with high variability but low sensitivity to input factors, inferences on the mean response do not provide sufficient knowledge of the risk within the design space. Estimation or quantification of the variability by means of probabilistic models is necessary to understand the process. Furthermore, when multiple CQAs are controlled, joint probabilities, not overlapping means, must be considered to accurately predict process capability.5

Variability within a manufacturing process can be broken down into several groups: (1) process inputs (material quality attributes, charge amounts, temperature, age times, etc.), (2) inherent process variability, and (3) analytical/sampling variability. Each of these groups has an associated distribution and can be modeled separately. Philosophically, we take the view that these sources of variability effectively make the process a physical random number generator (RNG), with the observed value of the CQA as the random number. The Bayesian framework is implemented to generate a mathematical RNG that can subsequently be used as a more realistic model of the process, incorporating the effect of process parameters on the CQA as well as providing a quantifiable level of variability. The Bayesian framework provides an RNG given by the CQA posterior predictive distribution:

Y = Xβ + σε,  ε ~ N(0, 1)    (3)

where (β, σ) are sampled from their joint posterior distribution, given the data.
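As a concrete illustration of eq 3, the sketch below turns a set of joint posterior draws of (β, σ) into a mathematical RNG for the CQA. The posterior draws here are hypothetical stand-ins (in this work they come from the R-Stan fits described later); the point is only the mechanics of propagating parameter uncertainty and inherent process noise into predictive draws.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint posterior draws (e.g., exported from a fitted Bayesian
# regression): one row per MCMC sample.
n_draws, n_factors = 4000, 3
beta_draws = rng.normal([0.5, 0.10, -0.05], 0.02, size=(n_draws, n_factors))
sigma_draws = np.abs(rng.normal(0.08, 0.01, size=n_draws))

def cqa_rng(x, beta_draws, sigma_draws):
    """Posterior predictive RNG per eq 3: Y = x.beta + sigma*eps, eps ~ N(0, 1),
    with (beta, sigma) taken jointly from their posterior draws."""
    eps = rng.standard_normal(len(sigma_draws))
    return beta_draws @ x + sigma_draws * eps

x = np.array([1.0, 0.5, 2.0])   # intercept term plus two process factors (hypothetical values)
y_pred = cqa_rng(x, beta_draws, sigma_draws)
print(f"predicted CQA: mean {y_pred.mean():.3f}, 95% interval "
      f"({np.percentile(y_pred, 2.5):.3f}, {np.percentile(y_pred, 97.5):.3f})")
```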

Once generated, the posterior predictive distribution is sampled to establish process reliability predictions against potential limit values of the CQA (Y). In addition, the input process parameters (x) can be incorporated as additional RNGs to evaluate the impact of the control strategy on the reliability response surface. Herein, we present a case study of using a probabilistic approach to design space development for the production of an API, with the focus on selecting between two potential processes and controlling a specific CQA.

CASE STUDY: INTRODUCTION

The API-forming step in the synthesis of a novel Bruton's Tyrosine Kinase (BTK) inhibitor is a cyclization reaction forming the quinazolinone 2 (Scheme 1).9 The only observed impurities in this reaction are the atropisomers of 2, which are controlled as CQAs. The main atropisomer, 3, formed upon rotation about the C(aryl)–N bond, is present at relatively high levels (~4-6 mol %) at the end of the reaction and must be controlled to a limit of 0.8% in the solids by the end of the crystallization to meet the release specifications.

Scheme 1: API reaction forming the quinazolinone molecule 2 and the primary atropisomer, 3.

A high-level schematic of the process is presented in Figure 1. First, the API molecule 2 is formed in the reaction step. The process stream is then concentrated by distillation, followed by addition of a co-solvent and heating to 35 °C. A weak antisolvent is added at 35 °C, and the stream is subsequently cooled to 20 °C over one hour, inducing a spontaneous crystallization in which some of the atropisomer 3 is incorporated into the crystals at levels of 0.8-1.5%. The slurry is then aged, with or without wet milling, and heat cycled from 20 °C to 35 °C for approximately 3-4 cycles at a frequency of 0.3-0.6 cycles/h to promote purification of the solid phase. After the age, a sample of the solids is tested for the main atropisomer, 3, at the in-process control (IPC) point. Once the sample passes the IPC criterion of 0.8% 3 in the solids, the stream is processed further downstream to isolate the material. During development, it was found that aging the slurry after the initial nucleation lowered the fraction of atropisomer in the solids.

Figure 1: Schematic of the API process. Each box represents a process, and the arrows into the processes represent input variables used in the causal models.

Figure 2 shows the initial experimental data collected from three different scenarios aliquoted from the same reaction stream, starting from the point of nucleation and slurry formation (t = 0): the slurry held at 20 °C (black circles) under mild agitation purged the atropisomer from ~1% to 0.8% over the course of 24 h, while the slurry that was heat cycled (ΔT = 15 °C, 0.4 cycles/h; Figure 2, red x's) purged down to 0.6% over the same time. Furthermore, the use of a rotor-stator wet mill increased both the rate and the extent of atropisomer purge (Figure 2, blue stars). The change in the impurity level in the solids is attributed to ripening of the crystals, whereby impure material is dissolved and pure material is crystallized, improving the average purity of the solids; particle breakage from the wet mill therefore aids the ripening effect. We hypothesize that entrapment of the atropisomer is the cause of the impurity entering the solid phase, because the solubility of the atropisomer (134 mg/mL) is much higher than that of the API (27 mg/mL) at the initial solvent composition, and the initial nucleation is fast and uncontrolled. Studies regarding the initial process development and the understanding of solvent, temperature, and entrapment effects on the crystallization are outlined elsewhere.10

Figure 2: Initial crystallization experiments aliquoted from the same reaction stream: holding the stream (black circles) lowers the level of 3 in the solids, but heat cycling (red x's) and wet milling (blue stars) are more effective.

There are several aspects of this crystallization that make it challenging to meet the 0.8% IPC limit after the slurry age. First, there is significant variability in the initial nucleation event, leading to a very wide distribution of initial atropisomer levels in the slurry. Typically, the initial purge at the crystallization is ~80%, but it ranges from 65% to 90%. Seeding the stream was not considered because the form isolated in this step is an intermediate form with poor stability on storage; the wet cake is immediately dissolved off the filter cloth and carried into a final form-conversion step, so the intermediate form is never dried and stored. Second, there is significant variability in the purge during the slurry age, especially without the use of a wet mill. Numerous studies were conducted to determine the root cause of this variability, including the solvent composition after the distillation, but no factors were implicated in the large distribution of results. Therefore, a probabilistic modeling approach was used to quantify the variability and account for it in the determination of the design space.

The goal of using a probabilistic approach to modeling the process is twofold: first, to decide whether the use of the wet mill is warranted based on the improvement in predicted process capability, and second, to develop a control strategy for the CQA (percent 3 at the IPC) based on the chosen process and its parameter inputs.

CONSTRUCTION OF THE PROBABILISTIC MODELS

Construction of the probabilistic model was separated into three different sections: (1) the API reaction, (2) the non-milled crystallization, and (3) the wet-milled crystallization, each corresponding to an individual causal model (Figure 1). Mathematically, we treat each of the three stages separately. The causal models can be written as

Reaction: y0 = f(X; β)    (4)
Crystallization (non-milled): y = f(Xc, t; β)    (5)
Crystallization (wet milled): y = f(Xc, t, Z; β)

where y0 is the impurity level (%) in solution, y is the impurity level (%) in the solids, X and Xc are the vectors of input factors (e.g., charge amounts, temperature, etc.) for the reaction and the crystallizations, respectively, t is time in hours, Z is a wet-milling parameter analogous to time (see the API crystallization section), and β are the causal model parameters. After the causal models (linear regression models) were built, RNGs were generated for each of the sections. By sampling the RNGs in series, different scenarios (e.g., wet milled versus non-milled) could be compared by calculating failure rates against the IPC limit.

The input factors to the models, X and Xc, were considered as noise variables. That is, they are well controlled (or well measured) in the lab but are assumed not to be completely controlled in a manufacturing setting.11 As such, when X and Xc are used as inputs to the RNGs for predicting process capability, they are themselves drawn from RNGs, assumed to be normally distributed. A number of samples (10,000) are drawn from the parameter (X and Xc) RNGs, and the posterior predictive distribution for the API reaction RNG (y0) is calculated. Subsequently, those draws are passed along to the crystallization RNGs, and their posterior predictive distributions are calculated. The overall failure rate for a specific scenario, F, is then

F = P(y ≥ L | X, Xc, t, Z)    (6)

where L is the IPC limit (0.8%). The failure rate can be used to determine a suitable design space for a chosen level of reliability.12 In this work, we implemented the statistical package R-Stan13 to generate the RNG for each stage, specifying weak non-informative priors for the model parameters (β and σ).14,15 For the API reaction model, the sampling statement is

y0 ~ N(Xβ, σ²)    (7)

where y0 is the response variable (percent of 3), X is the vector of predictor variables, and σ² corresponds to the variance of the distribution. The following section describes how each of the three causal models was built and verified.
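To make the sampling-in-series idea concrete, the sketch below chains a reaction-stage RNG into a crystallization-stage RNG and computes the failure rate of eq 6 against the 0.8% IPC limit. This is a minimal Python/NumPy illustration: the posterior draws, factor set points, and noise-variable standard deviations are hypothetical placeholders, whereas the actual models in this work were fit in R-Stan with the factor sets described in the text.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000          # number of Monte Carlo scenarios, as in the text
L = 0.8             # IPC limit for atropisomer 3 in the solids (%)

# --- Hypothetical posterior draws for the two causal models -----------------
# Reaction model:        y0 = b0 + b1*x1 + b2*x2 + sigma_r*eps
# Crystallization model: y  = c0 + c1*y0 + c2*t  + sigma_c*eps
post_r = {"beta": rng.normal([2.0, 0.8, 0.5], 0.05, size=(N, 3)),
          "sigma": np.abs(rng.normal(0.20, 0.02, size=N))}
post_c = {"beta": rng.normal([0.45, 0.15, -0.02], 0.01, size=(N, 3)),
          "sigma": np.abs(rng.normal(0.05, 0.01, size=N))}

# --- Noise-variable RNGs for the input factors (X, Xc) ----------------------
# Inputs are well controlled in the lab but not completely controlled at scale,
# so they are drawn from normal distributions around their set points.
x1 = rng.normal(1.0, 0.10, size=N)    # e.g., residual DMF (hypothetical units)
x2 = rng.normal(0.5, 0.05, size=N)    # e.g., water content (hypothetical units)
t = rng.normal(12.0, 1.0, size=N)     # slurry age time (h), hypothetical spread

# --- Chain the RNGs in series ------------------------------------------------
# Reaction stage: impurity level in solution at the end of reaction (%)
Xr = np.column_stack([np.ones(N), x1, x2])
y0 = (Xr * post_r["beta"]).sum(axis=1) + post_r["sigma"] * rng.standard_normal(N)

# Crystallization stage: impurity level in the solids at the IPC point (%)
Xc = np.column_stack([np.ones(N), y0, t])
y = (Xc * post_c["beta"]).sum(axis=1) + post_c["sigma"] * rng.standard_normal(N)

# --- Failure rate against the IPC limit, eq 6 --------------------------------
F = np.mean(y >= L)
print(f"predicted failure rate vs. the {L}% IPC limit: {F:.3f}")
```

Comparing F for a wet-milled versus a non-milled crystallization model under the same input-factor RNGs is the type of calculation used here to choose between the two processing options.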


API-FORMING REACTION

The API reaction, a lithium base-catalyzed cyclization, was chosen based on the stereoselectivity of the catalyst.10 The lithium species coordinates with 1 in a transition state that promotes the formation of the desired product, 2. Although the selectivity of the reaction has been optimized extensively, a non-negligible level of the main atropisomer, 3, is formed at this stage. The selectivity of the reaction is impacted by the temperature, the concentration (volumes), residual water, and residual DMF from the previous reaction. Water reacts with the lithium catalyst to give LiOH, which can promote an unselective reaction.

A statistical model was developed based on data gathered from lab experiments. First, a simple fractional factorial design of experiments was used to determine main effects, screening eight factors. The experimental procedure for the reaction involved (a) dissolving 1 in the required MeTHF volume and spiking in the desired DMF and water amounts, (b) charging LiOt-Bu and MeTHF to the reactor and heating to the desired temperature, and then (c) slowly charging the solution of 1 into the pot of LiOt-Bu. Samples were drawn at the end of the reaction and analyzed by HPLC to measure the response. The factors tested were: DMF content of the input stream (mL/g SM), water content of the input stream (KF, wt %), input stream concentration (mL/g SM), base equivalents (mol % LiOt-Bu), base solution volumes (mL/g SM), addition times (h), water content of the base solution (KF, wt %), and mixing rate (RPM). After the five significant factors were selected from the initial screen, the design was augmented to provide better parameter estimates for those factors. The total number of experiments (n = 18) was sufficient to estimate the main effects and one interaction effect. The percent of 3 was modeled as a function of the DMF input, the KF of both the input stream and the pot, the dilution (volume), and the temperature (see Table 1). Other factors that were included in the design of experiments were found not to impact the atropisomer level at the end of the reaction within the ranges tested and were therefore excluded from the model.

Table 1: Factors included in the API reaction model and the corresponding linear regression and Stan regression fits. The Stan regression columns show the mean (β̄) and standard deviation (sβ) of the posterior β and σ distributions.

Factor | Linear regression: Estimate, p value | Stan regression: β̄, sβ
Intercept