Innovative trial approaches in immune-mediated inflammatory diseases: current use and future potential

Abstract

Background

Despite progress that has been made in the treatment of many immune-mediated inflammatory diseases (IMIDs), there remains a need for improved treatments. Randomised controlled trials (RCTs) provide the highest form of evidence on the effectiveness of a potential new treatment regimen, but they are extremely expensive and time consuming to conduct. Consequently, much focus has been given in recent years to innovative design and analysis methods that could improve the efficiency of RCTs. In this article, we review the current use and future potential of these methods within the context of IMID trials.

Methods

We provide a review of several innovative methods that would provide utility in IMID research. These include novel study designs (adaptive trials, Sequential Multi-Assignment Randomised Trials, basket, and umbrella trials) and data analysis methodologies (augmented analyses of composite responder endpoints, using high-dimensional biomarker information to stratify patients, and emulation of RCTs from routinely collected data). IMID trials are now well-placed to embrace innovative methods. For example, well-developed statistical frameworks for adaptive trial design are ready for implementation, whilst the growing availability of historical datasets makes the use of Bayesian methods particularly applicable.

To assess whether and how these innovative methods have been used in practice, we conducted a review via PubMed of clinical trials pertaining to any of 51 IMIDs that were published between 2018 and 2020 in five high impact factor clinical journals.

Results

Amongst 97 articles included in the review, 19 (19.6%) used an innovative design method, but most were relatively straightforward examples. Only two (2.1%) reported the use of evidence from routinely collected data, cohorts, or biobanks. Eight (8.2%) collected high-dimensional data.

Conclusions

Application of innovative statistical methodology to IMID trials has the potential to greatly improve efficiency, to generalise and extrapolate trial results, and to further personalise treatment strategies. Currently, such methods are infrequently utilised in practice. New research is required to ensure that IMID trials can benefit from the most suitable methods.

Peer Review reports

Background

Immune-mediated inflammatory diseases (IMIDs) consist of many distinct conditions that share common inflammatory pathways. They range in prevalence from more common conditions such as rheumatoid arthritis (0.5–1% prevalence in western populations [1]) and psoriasis (2% prevalence in North America [2]), to much rarer conditions such as Behçet’s disease (estimated 0.005% prevalence in the US [3]). Overall, around 5–7% of the population of western societies has at least one IMID [4], with co-occurrence of multiple IMIDs common [5]. IMIDs are associated with significant chronic morbidity, affecting quality of life and leading to premature death. As many IMIDs develop later in life, the prevalence is likely to increase as the world population ages.

Despite substantial progress in treatment of IMIDs with newly developed disease-modifying anti-rheumatic drugs and biologics, a substantial proportion of patients fail to respond to treatment or eventually relapse after successful treatment [6]. Consequently, a considerable number of new drugs are in the clinical development pipeline [7] that require demonstration of efficacy and safety. Additionally, with the number of treatments currently available, there is substantial scope for optimising present use through the development of ‘treat-to-target’ approaches [8] and the tailoring of treatment according to patient subgroups [9]. Any such optimised approach also requires demonstration of efficacy and safety, however.

The highest form of evidence is generated by randomised controlled trials (RCTs). For a new drug they provide the most compelling confirmation of benefit over standard therapies. For comparing different treatment optimization strategies, RCTs avoid biases that may occur in an evaluation via a retrospective or prospective observational study. Despite the benefits of RCTs, there are important drawbacks too. RCTs are very expensive to conduct, especially large phase III trials with longer-term follow-up [10]. Accordingly, there has been a strong focus on developing innovative methods for increasing the efficiency of clinical trials. These may have the aim of providing more information from the same number of patients (e.g., by increasing the power to find significant treatment effects), or to reduce the average number of patients recruited to trials without sacrificing power.

In this paper we provide an overview of several innovative methods for increasing the efficiency of clinical trials, framing our discussions within the context of potential benefits to IMID research. We also present a review of recently published IMID trials to investigate how often these approaches have been used in practice.

Overview of innovative methods for immune-mediated inflammatory disease trials

Emulating trials from observational data

Given the large costs associated with prospective RCTs, an important question to consider is whether one is needed to answer a research hypothesis. This question has received particular attention in recent years, given the increasing amount of routinely collected data available, from sources such as CALIBER [11]. Furthermore, there are now an array of patient cohorts and registries, with IMID-Bio-UK [12] an example of a UK initiative to bring these together for various IMIDs.

These data sources allow comparisons of different treatment strategies to be conducted through retrospective observational studies. Results from such analyses can be valuable, but are subject to confounding and other flaws such as selection bias and immortal-time bias [13]. This is especially true if inappropriate analyses are applied.

An example, from outside of IMIDs, of where inappropriate analyses gave a misleading answer is presented by Dickerman et al. [14]. The effect of statins on the risk of developing cancer was assessed from retrospective data by comparing individuals who had received multiple years of statin therapy against those who had not. Even after adjustment for potential confounders, this approach was severely biased: a consequence of the fact that individuals who received multiple years of statin therapy could not have done so if they had died from cancer before or during that time. Within IMIDs, a recent paper [15] reviewed retrospective comparative effectiveness evaluations in rheumatoid arthritis; it was found most analyses had some flaws that would potentially lead to biases.

Instead, an approach called emulation of a target trial [16] can address many biases and result in more reliable answers. This involves specifying the ‘target trial’ that one would have liked to have done (i.e., which patient population, intervention, comparator, and outcomes) and analysing the data in a way that emulates this as closely as possible. Each timepoint in the retrospective data is then examined to identify which patients would have been eligible for randomisation in the target trial. The probability that they could have received intervention or comparator is modelled in a way that emulates random assignment from a trial as closely as possible. Dickerman et al. [14] demonstrate how this approach, applied to data from CALIBER, yields the same conclusions as a large meta-analysis of RCTs for the (lack of) effect of statins on reducing risk of cancer.
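As a minimal illustration of the adjustment step of a target trial emulation, the sketch below applies the g-formula (standardisation) over a single binary confounder to a set of hypothetical eligible patient-timepoints. Real emulations adjust for many confounders, usually via regression or propensity-score modelling, but the principle is the same; all data and variable names here are invented for illustration.

```python
# Minimal sketch of the adjustment step in target-trial emulation:
# after identifying the patient-timepoints eligible for the emulated
# trial, adjust for one measured binary confounder ("severe") with the
# g-formula (standardisation). All records below are hypothetical.

def standardised_risk(records, arm):
    """Risk of the outcome under `arm`, standardised over the
    binary confounder `severe` (g-formula with one confounder)."""
    risks = {}
    for s in (0, 1):
        # stratum-specific risk among those actually receiving `arm`
        sub = [r for r in records if r["arm"] == arm and r["severe"] == s]
        risks[s] = sum(r["outcome"] for r in sub) / len(sub)
    # average over the marginal confounder distribution of the whole cohort
    p_severe = sum(r["severe"] for r in records) / len(records)
    return risks[1] * p_severe + risks[0] * (1 - p_severe)

def emulate(records):
    """Standardised risk difference (treated minus control)."""
    return (standardised_risk(records, "treated")
            - standardised_risk(records, "control"))

# hypothetical eligible patient-timepoints: severe patients are both
# more likely to be treated and more likely to have the outcome
records = (
    [{"arm": "treated", "severe": 1, "outcome": 1}] * 30
    + [{"arm": "treated", "severe": 1, "outcome": 0}] * 30
    + [{"arm": "treated", "severe": 0, "outcome": 1}] * 10
    + [{"arm": "treated", "severe": 0, "outcome": 0}] * 30
    + [{"arm": "control", "severe": 1, "outcome": 1}] * 12
    + [{"arm": "control", "severe": 1, "outcome": 0}] * 8
    + [{"arm": "control", "severe": 0, "outcome": 1}] * 15
    + [{"arm": "control", "severe": 0, "outcome": 0}] * 45
)

print(round(emulate(records), 3))  # standardised risk difference
```

In this toy cohort the crude comparison would be confounded by disease severity; standardisation removes the part of the difference attributable to the measured confounder.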

With many IMIDs being chronic conditions, RCTs are often used to compare different strategies for employing treatments known to be efficacious. Examples may include testing different ‘treat-to-target’ strategies [8] that may employ more aggressive treatment until a measure of disease activity is below a set threshold. When different strategies are already being employed in practice, and frequent measures of disease activity are recorded in routine data, emulation of target trials may be an efficient approach for evaluating different strategies.

It is important to note, however, that target trial emulation is still subject to bias. This is especially true if the routine dataset does not record sufficient information on potential confounding variables (or if there is a lot of missing data). Consequently, there may still be a need for prospective RCTs of treatment strategies. Nonetheless, target trial emulation could play an important role in prioritising which strategies should be tested and whether an RCT is likely to be successful in finding a significant effect.

Adaptive trial designs

An adaptive design is one “that offers pre-planned opportunities to use accumulating trial data to modify aspects of an ongoing trial while preserving the validity and integrity of that trial” [17]. Adaptive designs consist of a wide range of approaches that can improve efficiency in trials. Unlike the other innovative methodologies we discuss here, they have been discussed at length in other recent articles, both papers that provide an overview of adaptive designs in general [18] and those for specific clinical areas such as rheumatology [19]. We refer the reader to these articles for a comprehensive introduction to adaptive designs.

However, we do provide in Table 1 a brief summary of several available types of adaptation and their potential advantages. We also highlight one key factor that influences the added efficiency provided by an adaptive design: the ratio between the recruitment length of the trial and the time taken to observe the primary endpoint [20]. If it takes a long time to observe the primary endpoint, then at an interim analysis there will be a proportion of patients who do not contribute information and who do not benefit from an adaptation. As an example, if the primary outcome takes 1 year to observe and all patients are recruited in 6 months, then by the time the first patient’s one-year outcome has been observed, all patients have been recruited and the adaptive design cannot provide any utility. A more quickly observed ‘intermediate’ outcome can be used to make adaptations instead, but it must be sufficiently informative about the primary outcome to be useful.
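The impact of this ratio can be quantified with a back-of-the-envelope calculation. The sketch below, assuming uniform recruitment (an illustrative simplification), computes how many patients have already been recruited by the time a chosen number of primary outcomes becomes observable; patients recruited after that point are the only ones who can benefit from an adaptation. All numbers are illustrative.

```python
def recruited_at_interim(n_total, recruit_weeks, followup_weeks, n_interim):
    """Under uniform recruitment, the number of patients already
    recruited at the calendar time when the first `n_interim`
    primary outcomes become observable."""
    rate = n_total / recruit_weeks                  # patients per week
    t_interim = n_interim / rate + followup_weeks   # calendar time of interim
    return min(n_total, rate * t_interim)

# Interim after 100 of 200 outcomes; 96-week recruitment, 24-week endpoint:
# 150 patients already recruited, so 50 can still benefit from adaptation.
print(recruited_at_interim(200, 96, 24, 100))

# Endpoint (52 weeks) much longer than recruitment (26 weeks): even when
# the very first outcome is observed, all 200 patients are recruited.
print(recruited_at_interim(200, 26, 52, 1))
```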

Table 1 An overview of various types of adaptive design and their benefits

Given the amount of well-developed methodology now available for adaptive trial design, we believe that this consideration, the choice of primary outcome and its observation time relative to the anticipated recruitment rate, will principally determine whether an adaptive approach provides efficiency advantages for a given IMID trial.

Basket and umbrella trial designs

Because of rapid advancements in biological and genomic understanding during the past few decades, an increasing number of new therapies are being formulated to target specific molecular or immune aberrations. Given that many IMIDs share common mechanisms, these targeted therapies may perform equally well for multiple distinct IMIDs.

Originating in oncology settings, basket and umbrella trial designs have recently emerged as new types of efficient approaches for testing treatment efficacy in potentially heterogeneous subgroups [21]. These novel designs are administratively efficient as they investigate multiple treatments or diseases, sometimes both, in a single study under an overarching protocol. Figure 1 gives conceptual illustrations of basket and umbrella trial designs with components (sub-studies) defined by biomarkers or genetic mutations, to which the new treatment(s) for evaluation are matched.

Fig. 1
figure1

Illustrations of umbrella and basket trial designs, with the sub-studies evaluating the new treatment(s) that are matched by the pre-defined biomarker(s) or genetic mutation(s)

While traditional oncology trials focus on a single treatment for a specific cancer histology, basket trials can involve multiple histologies and enrol patients with a common mutation that the new therapy targets. As shown in Fig. 1, an oncology basket trial consists of a number of sub-studies, each specific to a histology or disease subtype. The principal aim is to test the treatment efficacy in various sub-studies simultaneously. As examples, Drilon et al. [22] evaluated the efficacy of larotrectinib, a tropomyosin receptor kinase inhibitor, in diverse TRK fusion-positive tumours. Hyman et al. [23] evaluated the BRAF inhibitor vemurafenib, finding significant activity in some tumours (e.g., non-small cell lung carcinoma (NSCLC) and Erdheim-Chester disease), yet inactivity in pancreatic cancer and multiple myeloma.

Efforts have been made to translate the idea of basket designs to disease areas outside of oncology. For example, patients can be stratified to enter a trial with multiple sub-studies by biological characteristics, such as disease stage, number of prior therapies, specific genetic/epigenetic changes, or demographic characteristics [24]. There is also precedent for a basket-type approach having been used in IMID research. Although not officially labelled a basket trial, TRANSREG [25] is a multicentre open-label trial involving 11 IMID patient subgroups evaluating the safety, biological and clinical effects of low-dose interleukin-2. The broad eligibility criteria allow patients with rare IMID diseases to participate in the trial.

Early strategies for analysing basket trials regard the sub-studies in isolation. Although this fully acknowledges the heterogeneity between responses to the same treatment observed in the various patient subgroups, this inevitably leads to low-powered tests due to small sample sizes. Several sophisticated approaches have been developed to enable sharing of information across sub-studies [26,27,28,29], among which the proposal by Zheng and Wason [26] can be readily applied to non-oncology basket trials with covariates. With necessary extension or modification, these approaches could lead to the efficient design and analysis of IMID basket trials.
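The simplest form of information borrowing can be illustrated with a fixed shrinkage of each sub-study's observed response rate towards the pooled rate. This generic sketch is not the method of Zheng and Wason [26], which determines the degree of borrowing from the data, but it shows the basic idea: small sub-studies gain stability from the other baskets at the cost of some bias when baskets genuinely differ.

```python
def shrunk_rates(counts, weight):
    """Generic shrinkage sketch for basket trials.
    counts: list of (responses, total) per basket.
    weight: fixed borrowing weight in [0, 1]; 0 = analyse baskets in
    isolation, 1 = complete pooling across baskets."""
    total_resp = sum(r for r, n in counts)
    total_n = sum(n for r, n in counts)
    pooled = total_resp / total_n
    # each basket's estimate is pulled towards the pooled response rate
    return [(1 - weight) * (r / n) + weight * pooled for r, n in counts]

# three hypothetical baskets with small sample sizes
print(shrunk_rates([(6, 10), (1, 10), (5, 20)], weight=0.5))
```

In practice the borrowing weight would be estimated from the observed heterogeneity (e.g., via a Bayesian hierarchical model) rather than fixed in advance.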

By contrast, umbrella designs, illustrated in Fig. 1, offer the possibility to efficiently test multiple targeted therapies in a single disease population [24]. To date, umbrella designs have only been implemented in oncology [30]: patients of the same tumour type, as screened by an array of biomarkers, receive the treatment specific to their genetic aberration. The ongoing ALCHEMIST trial [31] represents an early example of an umbrella trial. It enrols NSCLC patients and evaluates therapies targeting two types of genetic changes, EGFR mutations and ALK translocations, which are hypothesised to be key drivers of tumour growth and disease progression.

The increased understanding in pharmacogenomics and pharmacogenetics of IMIDs, especially rheumatoid arthritis [9, 32], makes umbrella designs a suitable approach to answering more treatment-related questions efficiently in a single trial. The identification of specific genes and epigenetic changes involved in the development of rheumatoid arthritis, which may be predictive of the response to treatment, could potentially lead to the initiation of an umbrella trial.

With the multi-biomarker approach of umbrella trials, more patients are likely to meet eligibility criteria for at least one of the biomarker-defined subgroups. This is particularly beneficial compared to an alternative ‘enrichment’ trial that tests one targeted treatment in a subgroup. However, there are unresolved issues in how best to allocate patients who test positive for more than one biomarker, or to no biomarker, in an umbrella trial. Allocating the most suitable treatment to such patients is not straightforward.

Umbrella designs are flexible and can possibly be integrated with various adaptive designs to make them more efficient. Biomarker adaptive randomization could be incorporated to assign patients to the most promising biomarker-linked treatments using accruing trial data (e.g., as in the recent BATTLE trials [33]); a MAMS type approach could be used when a number of treatments are available for evaluation within a cohort; and if promising treatments unavailable at the start of the trial become available, protocol amendments could be made to allow addition of trial arms.
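The core of biomarker-adaptive randomisation is to update randomisation probabilities from accruing response data. The sketch below is a heavily simplified illustration of the idea, not the actual BATTLE algorithm: within one biomarker group, probabilities are made proportional to smoothed observed response rates, with a floor to keep all arms open.

```python
def adaptive_probs(counts, floor=0.1):
    """Response-adaptive randomisation sketch within one biomarker group.
    counts: {arm: (responses, total)} observed so far.
    Returns randomisation probabilities proportional to smoothed
    response rates, with a minimum probability `floor` per arm."""
    # add-one smoothing so arms with no data still get a sensible rate
    rates = {arm: (r + 1) / (n + 2) for arm, (r, n) in counts.items()}
    total = sum(rates.values())
    probs = {arm: max(floor, v / total) for arm, v in rates.items()}
    norm = sum(probs.values())            # renormalise after flooring
    return {arm: p / norm for arm, p in probs.items()}

# hypothetical accrued data: T1 looks much more promising than T2,
# so future patients in this biomarker group favour T1
print(adaptive_probs({"T1": (8, 10), "T2": (2, 10)}))
```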

Ultimately, both basket and umbrella designs allow investigators to test more research questions in the same trial. Basket trials help assess whether a new therapy works in distinct patient subgroups (or related diseases) and to what extent [34], while umbrella trials identify whether biomarker-treatment pairs are valid and which one(s) can best improve outcomes.

Sequential multiple assignment randomised trial (SMART) designs

Therapy of chronic conditions or rapidly fatal diseases often requires several lines of treatment with different drugs or interventions used as the disease progresses. In each line, the treatment may achieve the required clinical objective (e.g., response), or not (e.g., non-response). When treatment fails for a patient at a certain line, it is common medical practice to switch to a different treatment or strategy for the next line. The type or dose of the treatment/intervention may be adjusted repeatedly according to a patient’s ongoing clinical information, including their treatment history and response to previous treatments [35, 36].

An adaptive intervention is a treatment strategy that personalises treatment through established decision rules that recommend when and how the treatment changes, taking into account the history of previous treatments and response to those treatments [37]. A Sequential Multiple Assignment Randomised Trial (SMART) is a multistage trial design that is used to construct effective dynamic treatment regimens (DTR), also known as adaptive interventions (AIs) or adaptive treatment strategies [38]. Figure 2 depicts an example of a SMART design in which only non-responders to first stage intervention are re-randomised in the second stage. This would provide information to inform an AI that chooses which first-line intervention to use, and how to subsequently treat patients who do not respond to the first-line treatment.

Fig. 2
figure2

An example SMART design. Only non-responders to the initial treatment are re-randomised in the second stage. R = randomisation

An AI consists of four key elements: critical decision point(s), intervention component(s), tailoring variable(s), and decision rule(s). The first element, a sequence of critical decision point(s), comprises the intervention to begin with, when and how to measure signs of response/non-response, how to maintain the success of the initial intervention, and what interventions may be used for non-responders. The second element, the intervention components, is the set of intervention/treatment options at each critical decision point. From Fig. 2 we can see that there are two treatment options in the first stage (treatments A and B), and six treatment options in the second stage (two options for responders, and four options for non-responders). The third element is the tailoring variable(s). A tailoring variable is an early indicator of the overall outcome (success or failure of the intervention); the response status at week 24 plays this role in the example shown in Fig. 2. Lastly, the decision rules occurring at each critical decision point link the tailoring variable(s) to the intervention components. Each stage in a SMART corresponds to one of the critical decisions involved in the adaptive intervention. Each participant moves through the multiple stages, and at each stage the participant is randomly (re)assigned to one of several intervention options [35, 39]. Each AI can be summarised in the form (X1, X2, X3), where X1 is the recommended first-stage treatment, X2 the recommended second-stage treatment for responders, and X3 the recommended second-stage treatment for non-responders. There are four different adaptive interventions embedded in the SMART depicted in Fig. 2: (A, A, C), (A, A, D), (B, B, E), and (B, B, F).
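The embedded AIs can be enumerated mechanically from the design's structure. The sketch below encodes the structure of a SMART like that in Fig. 2, where responders continue their first-stage treatment and non-responders are re-randomised, and recovers the four embedded triples.

```python
# Structure of a Fig. 2-style SMART: for each first-stage option,
# the maintenance option(s) for responders and the rescue options
# to which non-responders are re-randomised.
stages = {
    "A": {"responder": ["A"], "non_responder": ["C", "D"]},
    "B": {"responder": ["B"], "non_responder": ["E", "F"]},
}

def embedded_ais(stages):
    """Enumerate every embedded adaptive intervention as a triple
    (first-stage, responder option, non-responder option)."""
    ais = []
    for first, opts in stages.items():
        for resp in opts["responder"]:
            for nonresp in opts["non_responder"]:
                ais.append((first, resp, nonresp))
    return ais

print(embedded_ais(stages))
```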

SMARTs have been used for a wide range of chronic conditions, including some IMIDs. Recent studies that have used them include the CATIE study of treatments for schizophrenia [40], the EXTEND trial of treatments for alcohol dependence [41], and studies of treatments for metastatic renal cell carcinoma [42], depression [43], HIV infection [44, 45], ulcerative colitis [46], autoinflammatory recurrent fever syndromes [47], psoriasis [48,49,50], and rheumatoid arthritis [51].

An alternative design to a SMART study is the use of “multiple one-stage-at-a-time” randomised trials, in which each critical decision point is considered as an independent trial [39]. For instance, the SMART in Fig. 2 corresponds to three different “one-stage-at-a-time” trials: the first would compare the first-stage treatment options, the second would study treatment in non-responders to treatment A, and the third would study treatment in non-responders to treatment B. One advantage of the SMART design over the “multiple one-stage-at-a-time” approach is that it uses information from all stages to find the best AI. To do this, it uses Q-learning, a multistage regression method that can use data from a SMART study to examine whether and how certain variables are suitable to develop an AI or improve an existing one [52, 53].
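In the simplest tabular setting, Q-learning reduces to backward induction over stage-specific mean outcomes. The sketch below, with invented toy data and no regression modelling or covariates, estimates the best second-stage option for non-responders to each first-stage treatment, and then the expected outcome of each first-stage option under optimal continuation.

```python
from collections import defaultdict

def fit_q(data):
    """Tabular Q-learning sketch by backward induction. Each row is a
    dict with first-stage treatment a1, responder status resp,
    second-stage treatment a2 (None for responders, who continue a1),
    and final outcome y (higher is better)."""
    # Stage 2: mean outcome per (a1, a2) pair among non-responders
    tot, cnt = defaultdict(float), defaultdict(int)
    for r in data:
        if not r["resp"]:
            tot[(r["a1"], r["a2"])] += r["y"]
            cnt[(r["a1"], r["a2"])] += 1
    q2 = {k: tot[k] / cnt[k] for k in tot}
    # Optimal second-stage option for non-responders to each a1
    best_a2 = {}
    for (a1, a2), v in q2.items():
        if a1 not in best_a2 or v > q2[(a1, best_a2[a1])]:
            best_a2[a1] = a2
    # Stage 1: expected outcome of starting on a1, then acting optimally
    q1 = {}
    for a1 in {r["a1"] for r in data}:
        rows = [r for r in data if r["a1"] == a1]
        p_resp = sum(r["resp"] for r in rows) / len(rows)
        mean_resp = (sum(r["y"] for r in rows if r["resp"])
                     / sum(r["resp"] for r in rows))
        q1[a1] = p_resp * mean_resp + (1 - p_resp) * q2[(a1, best_a2[a1])]
    return q1, best_a2

# invented toy data: responders to A do well; among non-responders,
# D beats C after A, and E beats F after B
data = (
    [{"a1": "A", "resp": True,  "a2": None, "y": 10}] * 10
    + [{"a1": "A", "resp": False, "a2": "C", "y": 4}] * 5
    + [{"a1": "A", "resp": False, "a2": "D", "y": 6}] * 5
    + [{"a1": "B", "resp": True,  "a2": None, "y": 8}] * 10
    + [{"a1": "B", "resp": False, "a2": "E", "y": 7}] * 5
    + [{"a1": "B", "resp": False, "a2": "F", "y": 3}] * 5
)
q1, best = fit_q(data)
```

Real Q-learning replaces the stage-specific means with regression models including patient covariates, which is where the inferential complications noted below arise.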

SMARTs are not without limitation, however. In particular, some issues arise from modelling data from SMARTs when the estimation of the optimal AI is of interest. These include model building, missing data, statistical inference, and choosing an outcome when only non-responders are re-randomised [36]. The fact that the re-randomisation depends on the evolving patient status, along with the sequential design nature of the SMART, brings more complexity to the handling of missing data compared to classical clinical trials. For instance, in a SMART study where only non-responders are re-randomised at the second stage, a patient who is lost to follow-up during the first stage will have missing information on their intermediate response status, second stage treatment, and outcome. It is not possible to know whether the information in the second stage is truly missing or is missing by design since it depends on an unobserved patient response status. Furthermore, the use of flexible regression approaches to avoid complex functions in the Q-learning approach can also make it difficult to acquire interpretable results and valid statistical inference due to potential high variability [36].

SMARTs therefore offer substantial potential utility for chronic IMIDs, where identifying the most suitable AI is of interest.

Use of high-dimensional data to stratify patients: adaptive signature trial designs

It is common in clinical trials that only a subgroup of treated patients may benefit from an experimental therapy [54,55,56,57]. Identifying these subgroups would allow tailoring of treatment, avoiding costly or toxic treatment of individuals who will not benefit. To identify such subgroups, predictive biomarkers are required. Predictive biomarkers are biomarkers (objective characteristics associated with some aspect of a patient’s function or health), measured at baseline, that are associated with the response to treatment. If a predictive biomarker has been identified, this can be used to predict the likely response to treatment. Some clinical areas, such as oncology, have strong availability of predictive biomarkers. For example, RAS mutation status identified a subgroup of colorectal cancer patients with a significant benefit across all efficacy endpoints of treatment [58].

However, predictive biomarkers are lacking for most IMIDs, meaning predicting response to treatment is more difficult [59,60,61]. For example, in rheumatoid arthritis although genetic variants associated with response to methotrexate have been identified [62,63,64,65], there is a lack of consensus on the predictive utility of these variants.

In the absence of predictive biomarkers, alternative methods that utilise high-dimensional information could be used. With the rapid development of next-generation sequencing, proteomics, and medical imaging technologies, a large amount of high-dimensional data about patients is starting to be collected in clinical trials. This information has the potential to be informative for identifying subgroups of patients who are likely to benefit from a new treatment.

To utilise high-dimensional information in RCTs, a method has been developed known as the adaptive signature design (ASD). The aim of the ASD is to allow a single RCT to both test the overall treatment effect in all patients and to form a predictive biomarker signature that identifies a subgroup of patients who strongly benefit from the treatment. Although the ASD has ‘adaptive’ in its name, it is not an adaptive design in the sense described earlier, as no aspect of the ongoing trial is modified.

The original method [66, 67] utilised (high-dimensional) gene expression data in an oncology setting, but it can be used in any case where heterogeneity in the treatment effect is expected and there is high-dimensional information available. Which of the high-dimensional data should be included in the signature is determined by imposing a threshold on the significance level, odds ratios, and number of biomarkers. Further papers have proposed modifications of the original ASD [68,69,70] to provide improved performance (in terms of correctly identifying a subgroup who benefit from treatment). In these methods, the high-dimensional data is used to form a signature that is computed based on the interaction between these data and the treatment. The adaptive signature is represented by a single score for each patient. The scores can then be utilised to divide the patients into subgroups using a variety of clustering techniques, or as covariates in the tests of association with the outcome. The test for the overall comparison between the arms can be performed by testing for the difference between the arms in the trial population (at the significance level α1) and testing for the difference between the arms in the subgroup (at significance level α2). The overall significance level of the trial is then controlled at the α = α1 + α2 level (Fig. 3).
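The final alpha-splitting step of this procedure can be stated very compactly. In the sketch below the split of an overall α = 0.05 into α1 = 0.04 and α2 = 0.01 is purely illustrative; the Bonferroni-type argument is that rejecting when either test is significant at its allotted level controls the overall type I error at α1 + α2.

```python
def asd_test(p_overall, p_subgroup, alpha1=0.04, alpha2=0.01):
    """Alpha-splitting step of an adaptive-signature-style analysis:
    declare success if the all-patients test is significant at alpha1,
    or the test in the signature-positive subgroup is significant at
    alpha2. Overall type I error is controlled at alpha1 + alpha2."""
    if p_overall <= alpha1:
        return "overall effect"
    if p_subgroup <= alpha2:
        return "effect in signature-positive subgroup"
    return "no significant effect"

print(asd_test(0.03, 0.50))   # significant overall
print(asd_test(0.20, 0.005))  # significant only in the subgroup
```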

Fig. 3
figure3

Schematic representation of the adaptive signature design

In conclusion, ASDs are a novel methodology that can develop and validate predictive signatures in a single trial. They have the potential to increase the efficiency of clinical trials by finding the group of patients benefiting from particular treatments. However, when the clinical benefit for a subgroup is minimal, a large sample size might be required to detect it with sufficient power. Additionally, the performance of the designs deteriorates if there are many covariates that are not associated with patient benefit. To address this issue, an additional pre-filtering of the covariates might be required. This family of designs may also benefit from exploring different methods of interaction of treatment with high dimensional covariates [71, 72], and from considering multiple trial endpoints [73]. These considerations notwithstanding, ASDs offer a potential route to identifying patient subgroups that will benefit from treatment in IMIDs for which predictive biomarkers are currently lacking.

Composite responder endpoints and augmented analysis methods

Clinical trials specify primary and secondary outcomes that measure how patients respond to a treatment or intervention. The primary outcome should be chosen as a measurement that will be more favourable if the treatment being tested is efficacious or effective. As many IMIDs have complex manifestations and multiple symptoms, it can be difficult to specify a single measurement as being the most important. For this reason, it is common that primary outcomes in IMID trials combine multiple relevant measurements into a single composite outcome. A specific type of composite endpoint is a responder endpoint, which divides patients into responders and non-responders based on different measurements, or components. Some of these components can be binary and others may be whether continuous measurements are above a threshold.

The standard method of analysis for composite responder endpoints is to treat them as binary variables (responder or non-responder). The analysis then estimates the proportion of patients who are responders and whether there is a significant difference between arms: this is done with a suitable binary method such as Fisher’s exact test or logistic regression, amongst many others.
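For a small trial, the Fisher's exact test computation on a responder endpoint can be written out directly from the hypergeometric distribution. The following self-contained sketch computes the common two-sided p-value (summing the probabilities of all tables no more likely than the observed one); the example table is hypothetical.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]] (responders/non-responders by arm), summing
    hypergeometric probabilities of all tables at least as extreme
    (i.e., no more probable) as the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # probability of x responders in arm 1, margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

# hypothetical tiny trial: 3/4 responders vs 1/4 responders
print(round(fisher_exact_p(3, 1, 1, 3), 4))
```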

Responder endpoints have the appealing property of summarising very complex information into an easy-to-interpret single quantity. This is also a limitation when applying analysis methods that treat the outcome as binary: much information is discarded, especially from continuous components when dichotomising (see, e.g. [74, 75]) which can lead to a reduction in power [76].

Assuming that the responder endpoint is clinically relevant, there are alternative ways of estimating the proportion of patients who are responders. For endpoints that define response based on a single continuous component, methods were proposed in the 1990s to more precisely estimate the proportion of responders [77, 78]. For composite responder endpoints that are a mixture of continuous and binary components, the augmented binary method has been proposed to provide higher efficiency. This was originally proposed for response criteria endpoints used in phase II oncology trials [79] but has since been extended to endpoints used in IMIDs such as rheumatoid arthritis [80] and systemic lupus erythematosus (SLE) [81]. The method has also been extended to endpoints that are formed from the time until a composite event occurs [82] (e.g., time until relapse, where relapse involves a continuous biomarker being above a certain level), although further work in this area is needed.
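The gain from modelling a continuous component can be seen in a toy comparison of two estimators of the responder proportion for an endpoint defined by a single continuous measurement falling below a threshold. This is a simplified illustration in the spirit of the single-continuous-component methods mentioned above, not any specific published estimator; the normality assumption and the data are invented.

```python
from statistics import NormalDist, mean, stdev

def responder_estimates(y, threshold):
    """Two estimates of the proportion of responders (y < threshold):
    (a) the usual binary estimate, which discards the continuous values;
    (b) a model-based estimate assuming y is normally distributed,
        which uses the full continuous information and is typically
        more precise when the normality assumption holds."""
    binary = sum(v < threshold for v in y) / len(y)
    model = NormalDist(mean(y), stdev(y)).cdf(threshold)
    return binary, model

# hypothetical continuous measurements and response threshold
b, m = responder_estimates([0, 1, 2, 3, 4], threshold=2)
print(b, round(m, 3))
```

Both estimators target the same quantity; the model-based one has smaller variance because it exploits how far each patient is from the threshold rather than only which side of it they fall on.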

The augmented binary method requires no additional data to be collected; it simply fits a more complex statistical model to the data collected on the different components and uses this model to estimate the difference between arms in the proportion of responders (together with a confidence interval and p-value). It has been shown in various papers [80, 81, 83, 84] to provide large gains in efficiency, equivalent to increasing the sample size of the traditional binary analysis by 30% or more. The extent of the efficiency gain depends on how well the continuous component(s) distinguish between responders and non-responders [85].

A previous review [86] found composite responder outcomes in use across several IMIDs. We show some examples of these in Table 2.

Table 2 Examples of composite responder endpoints used in IMID trials

Current use of innovative methods in immune-mediated inflammatory disease trials

Review methods

To investigate the frequency with which innovative methods have been used in IMID trials in recent years, we searched PubMed on June 18, 2020. We restricted our evaluation to clinical trial publications that have appeared since 2018 in any of five high impact factor journals relevant to IMIDs (New Engl J Med, Lancet, Ann Rheum Dis, Arthritis Rheumatol, J Am Acad Dermatol). To provide a comprehensive evaluation, we included articles containing any of 51 IMID disease terms. See the Supplementary Materials for the search term. This search returned 160 articles for review.

Each article was reviewed by JMSW to establish whether it met the inclusion criteria: that the article was a primary report of the results of a clinical trial conducted to evaluate the efficacy of one or more treatments for one or more IMIDs. Retrospective trial analyses were thus excluded, as our focus was on how innovative methods have been used in practice in the design and analysis of IMID trials. For each article deemed eligible for inclusion, data was extracted by JMSW for 21 questions relating to the trial’s design and analysis, and in particular the use of innovative methods (see Supplementary Table 1). Owing to the objective nature of the extraction questions, high reproducibility on evaluation of inclusion and subsequent data extraction was anticipated. Nonetheless, ten articles were randomly chosen for duplicate review by MJG. The authors agreed on inclusion for all ten articles. Agreement on extracted data was 95%. See the Supplementary Materials for further details.

Findings

Ninety-seven articles were deemed to be eligible for inclusion. A summary of the extracted data for these 97 articles is given in Table 3.

Table 3 Summary of extracted data for the 97 included articles. The denominator for computing percentages (given to 1 decimal place) is 97 unless stated otherwise

While more than 20 distinct conditions were evaluated in the eligible trials, the plurality (31%) were in rheumatoid arthritis. Notable numbers were also found in psoriatic arthritis, psoriasis, and SLE. The majority of trials (75%) were funded and sponsored by industry.

Most (65%) eligible trials had two arms. In some rarer conditions, single-arm trials with no prospective control were used. In other cases, more than two arms were included: in most instances this was for industry-funded trials of a new drug, with different doses or regimens included as distinct arms. We did not identify any MAMS trials.

There was some reported use of innovative approaches (19.6%). These consisted predominantly of group-sequential designs (including futility analyses), sample-size re-assessment, and re-randomisation of some participants as in a SMART design. Among trials with re-randomisation, we did not find any example where an analysis was performed to determine the best AI. The median recruitment length was 96 weeks and the median time to primary endpoint assessment was 24 weeks. This indicates that for a majority of trials the ratio of endpoint length to recruitment length would be sufficiently low for an adaptive design to provide efficiency gains [20].
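To make the futility analyses mentioned above concrete, the sketch below computes conditional power for a two-stage trial: the probability of a significant final result given the interim z-statistic, assuming the currently observed effect persists. It uses the canonical decomposition of the final test statistic into independent stage-wise increments. The interim values, information fraction, and stopping threshold are all illustrative; a real trial would pre-specify these in its protocol.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z1, t, z_alpha=1.96):
    """Conditional power given interim z-statistic `z1` observed at
    information fraction `t`, under the 'observed trend' assumption
    (drift theta_hat = z1 / sqrt(t)). Uses the decomposition
    Z_final = sqrt(t) * Z1 + sqrt(1 - t) * Z2 with Z2 independent."""
    theta_hat = z1 / math.sqrt(t)
    arg = (z_alpha - math.sqrt(t) * z1) / math.sqrt(1 - t) \
        - theta_hat * math.sqrt(1 - t)
    return 1.0 - norm_cdf(arg)

# Halfway through a trial (t = 0.5):
weak = conditional_power(z1=0.5, t=0.5)    # weak interim trend
strong = conditional_power(z1=2.0, t=0.5)  # strong interim trend
print(f"CP with z1 = 0.5: {weak:.3f}")   # low: a futility rule might stop here
print(f"CP with z1 = 2.0: {strong:.3f}")
```

A typical futility rule stops the trial when conditional power falls below a pre-specified threshold (often 10-20%); here the weak interim trend falls well below that while the strong trend does not.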

In a majority of trials (60%), patients with other autoimmune diseases were not eligible. In other cases, this was not an explicit exclusion criterion, but such patients would likely have been indirectly excluded through criteria such as being naïve to therapies that are commonly used for other IMIDs.

We found very few examples where collection of high-dimensional data was reported (8.2%). In the eight trials that did report this, the most common approach was to analyse each variable separately. Reported use of routinely collected data in the design of the trial was also low.

The use of responder endpoints (involving dichotomization of continuous measurements) was very high. The majority of trials (68%) had a primary endpoint that was defined in this way; an even higher proportion (84%) had a responder endpoint as a secondary outcome. These endpoints were routinely analysed using standard methods, such as a Cochran-Mantel-Haenszel test or Fisher's exact test.

Use of innovative methods in currently ongoing trials

There is often a long lead time between designing a trial and it being reported. We therefore also conducted a scoping review of use of innovative designs in trials that are currently underway. We searched clinicaltrials.gov on 2 February 2021 for studies that were ‘not yet recruiting’, ‘recruiting’, ‘enrolling by invitation’, or ‘active, not recruiting’ that contained any of 51 IMID disease terms and any of 39 terms related to innovative design. A link to conduct this search is given in the Supplementary Materials. It returned 49 studies that were then reviewed by MJG to evaluate evidence of innovative design use.

There were some examples of innovative designs being used, including multiple group-sequential and seamless phase II/III trials. We also found trials using a Bayesian basket design (NCT04498962), a MAMS design (NCT03092674, NCT03805789), and several uses of adaptive randomization (NCT04596293, NCT02269280, NCT02593123). Because trial registrations provide limited detail compared with trial publications, it was not possible to extract detailed information, and we may well have missed uses of innovative approaches.

Discussion

In this paper we have provided an overview of innovative methods that could provide utility to IMID trials. These methods and their advantages are summarized in Table 4. Through a literature review, we have also shown that few recently reported trials utilize innovative approaches.

Table 4 Summary of innovative design and analysis approaches described in this paper

Although 19.6% of included trials used some approach that we classified as innovative, most of these were relatively straightforward approaches, such as a futility analysis or a second randomization of non-responding patients (without applying techniques for analysing SMARTs). Assessment of current IMID trials listed on clinicaltrials.gov indicates that use of innovative approaches may still be infrequent. There is high potential for more advanced innovative approaches to be used in future IMID trials, but this requires improved awareness, education, and software.

One notable finding was that responder endpoints were very common across multiple distinct IMIDs. Over two-thirds of trials had such an endpoint as the primary outcome, and 84% included one as a secondary outcome. In every case the endpoint was analysed as if it were binary. As we have described, much more efficient analysis methods exist, and it is important that they be made usable in practice. Some freely available software exists [87], but more generic software and methods are needed that can be applied across all such endpoints used in IMID trials.

Presently, it appears that collection of high-dimensional information and use of routinely collected data are rare in IMID trials. A limitation of our review is that we may have missed such uses by examining only primary reports of RCTs. For example, it may be common for high-dimensional information to be collected but reported only in secondary analysis papers. In addition, authors may not consider it a worthwhile use of space in a primary report of an RCT to discuss how routinely collected data informed the trial design.

The majority of trials were sponsored and funded by industry. Although there were uses of innovative approaches in industry-sponsored trials, adoption of the more advanced methods discussed in this paper could be hampered by regulatory issues (actual or perceived). For more advanced designs and analysis approaches to be used in confirmatory trial settings, it will be important to ensure they are supported by regulators.

A final important consideration for the potential applicability of the discussed innovative methods is disease prevalence. Some methods we have discussed are particularly relevant in rare disease settings: (1) as composite endpoints are recommended for rare diseases, the augmented analysis methods are more applicable [88]; (2) basket trials potentially allow borrowing of information, and may thus improve analysis of related rare IMIDs (or allow a rare IMID to be tested in conjunction with a common IMID); (3) adaptive designs may be more relevant in rare diseases due to the need to improve efficiency [89] and can be used in single-arm trials, such as the Simon two-stage design [90] that is widely used in phase II cancer trials [91]. Other approaches may not be so applicable in rare settings due to the need for large sample sizes.
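As a concrete illustration, the operating characteristics of the Simon two-stage design [90] follow directly from binomial probabilities. The sketch below evaluates a commonly tabulated design for a null response rate of 0.1 versus a target of 0.3; the specific design parameters are illustrative.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def simon_ocs(n1, r1, n, r, p):
    """Operating characteristics of a Simon two-stage design:
    stop for futility after stage 1 (n1 patients) if <= r1 responses;
    otherwise continue to n patients in total and declare the treatment
    promising if the total number of responses exceeds r.
    Returns (probability of declaring promising, expected sample size)."""
    # Probability of early termination after stage 1
    pet = sum(binom_pmf(k, n1, p) for k in range(r1 + 1))
    # Probability of passing stage 1 AND exceeding r responses overall
    promising = sum(
        binom_pmf(k1, n1, p)
        * sum(binom_pmf(k2, n - n1, p)
              for k2 in range(max(0, r + 1 - k1), n - n1 + 1))
        for k1 in range(r1 + 1, n1 + 1)
    )
    expected_n = n1 + (1 - pet) * (n - n1)
    return promising, expected_n

# A commonly tabulated design for p0 = 0.1 vs p1 = 0.3: r1/n1 = 1/10, r/n = 5/29
alpha, en0 = simon_ocs(n1=10, r1=1, n=29, r=5, p=0.10)
power, _ = simon_ocs(n1=10, r1=1, n=29, r=5, p=0.30)
print(f"Type I error = {alpha:.3f}, power = {power:.3f}, E[N | p0] = {en0:.1f}")
```

The appeal in rare diseases is visible in the expected sample size under the null: a large fraction of ineffective-treatment trials stop after only 10 patients, so the expected total is well below the maximum of 29.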

In conclusion, IMID trials could benefit substantially from the innovative approaches reviewed in this paper. Further research, better software, and wider dissemination are needed to ensure that all IMID trials that could benefit do so.

Availability of data and materials

All data generated or analysed during this study are included in this published article [and its supplementary information files].

Abbreviations

AI: Adaptive intervention

ASD: Adaptive signature design

DTR: Dynamic treatment regime

IMID: Immune-mediated inflammatory disease

MAMS: Multi-arm multi-stage

NSCLC: Non-small cell lung carcinoma

RCT: Randomised controlled trial

SLE: Systemic lupus erythematosus

SMART: Sequential multiple assignment randomised trial

References

1. Alamanos Y, Drosos AA. Epidemiology of adult rheumatoid arthritis. Autoimmun Rev. 2005;4:130–6.

2. Langley RGB, Krueger GG, Griffiths CEM. Psoriasis: epidemiology, clinical features, and quality of life. Ann Rheum Dis. 2005;64(Suppl 2):ii18–23.

3. Calamia KT, Wilson FC, Icen M, Crowson CS, Gabriel SE, Kremers HM. Epidemiology and clinical characteristics of Behçet's disease in the US: a population-based study. Arthritis Care Res. 2009;61(5):600–4. https://doi.org/10.1002/art.24423.

4. Kuek A, Hazleman BL, Ostor AJK. Immune-mediated inflammatory diseases (IMIDs) and biologic therapy: a medical revolution. Postgrad Med J. 2007;83:251–60.

5. El-Gabalawy H, Guenther LC, Bernstein CN. Epidemiology of immune-mediated inflammatory diseases: incidence, prevalence, natural history, and comorbidities. J Rheumatol. 2010;37:2–10.

6. Winthrop KL, Weinblatt ME, Bathon J, Burmester GR, Mease PJ, Crofford L, et al. Unmet need in rheumatology: reports from the Targeted Therapies meeting 2019. Ann Rheum Dis. 2020;79:88–93.

7. Blaess J, Walther J, Gottenberg JE, Sibilia J, Arnaud L, Felten R. AB0332 Immunosuppressive and immunomodulating agents in rheumatoid arthritis: a systematic review of clinical trials and their current development stage. Ann Rheum Dis. 2020;79(Suppl 1):1464–5.

8. Solomon DH, Bitton A, Katz JN, Radner H, Brown EM, Fraenkel L. Review: treat to target in rheumatoid arthritis: fact, fiction, or hypothesis? Arthritis Rheum. 2014;66:775–82.

9. Kłak A, Paradowska-Gorycka A, Kwiatkowska B, Raciborski F. Personalized medicine in rheumatology. Reumatologia. 2016;54(4):177–86.

10. Sertkaya A, Wong H-H, Jessup A, Beleche T. Key cost drivers of pharmaceutical clinical trials in the United States. Clin Trials. 2016;13(2):117–26. https://doi.org/10.1177/1740774515625964.

11. CALIBER | UCL Institute of Health Informatics - UCL - University College London. Available from: https://www.ucl.ac.uk/health-informatics/caliber. Accessed 16 Apr 2021.

12. Research - Research units A-Z - Immune-Mediated Inflammatory Disease Biobanks - UK. Available from: https://www.gla.ac.uk/research/az/imid/. Accessed 16 Apr 2021.

13. Lévesque LE, Hanley JA, Kezouh A, Suissa S. Problem of immortal time bias in cohort studies: example using statins for preventing progression of diabetes. BMJ. 2010;340(7752):907–11.

14. Dickerman BA, García-Albéniz X, Logan RW, Denaxas S, Hernán MA. Avoidable flaws in observational analyses: an application to statins and cancer. Nat Med. 2019;25(10):1601–6. https://doi.org/10.1038/s41591-019-0597-x.

15. Zhao SS, Lyu H, Solomon DH, et al. Improving rheumatoid arthritis comparative effectiveness research through causal inference principles: systematic review using a target trial emulation framework. Ann Rheum Dis. 2020;79(7):883–90. https://doi.org/10.1136/annrheumdis-2020-217200.

16. Hernán MA, Robins JM. Using big data to emulate a target trial when a randomized trial is not available. Am J Epidemiol. 2016;183(8):758–64. https://doi.org/10.1093/aje/kwv254.

17. Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, et al. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ. 2020;369.

18. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16(1):29. https://doi.org/10.1186/s12916-018-1017-7.

19. Buch MH, Pavitt S, Parmar M, Emery P. Creative trial design in RA: optimizing patient outcomes. Nat Rev Rheumatol. 2013;9:183–94.

20. Wason JMS, Brocklehurst P, Yap C. When to keep it simple - adaptive designs are not always useful. BMC Med. 2019;17(1).

21. Renfro LA, Sargent DJ. Statistical controversies in clinical research: basket trials, umbrella trials, and other master protocols: a review and examples. Ann Oncol. 2017;28(1):34–43. https://doi.org/10.1093/annonc/mdw413.

22. Drilon A, Laetsch TW, Kummar S, Dubois SG, Lassen UN, Demetri GD, et al. Efficacy of larotrectinib in TRK fusion-positive cancers in adults and children. N Engl J Med. 2018;378(8):731–9. https://doi.org/10.1056/NEJMoa1714448.

23. Hyman DM, Puzanov I, Subbiah V, Faris JE, Chau I, Blay JY, et al. Vemurafenib in multiple nonmelanoma cancers with BRAF V600 mutations. N Engl J Med. 2015;373(8):726–36. https://doi.org/10.1056/NEJMoa1502309.

24. Woodcock J, LaVange LM. Master protocols to study multiple therapies, multiple diseases, or both. N Engl J Med. 2017;377:62–70.

25. Rosenzwajg M, Lorenzon R, Cacoub P, Pham HP, Pitoiset F, El Soufi K, et al. Immunological and clinical effects of low-dose interleukin-2 across 11 autoimmune diseases in a single, open clinical trial. Ann Rheum Dis. 2019;78(2):209–17. https://doi.org/10.1136/annrheumdis-2018-214229.

26. Zheng H, Wason JMS. Borrowing of information across patient subgroups in a basket trial based on distributional discrepancy. Biostatistics. 2020.

27. Chu Y, Yuan Y. A Bayesian basket trial design using a calibrated Bayesian hierarchical model. Clin Trials. 2018;15(2):149–58. https://doi.org/10.1177/1740774518755122.

28. Hobbs BP, Landin R. Bayesian basket trial design with exchangeability monitoring. Stat Med. 2018;37(25):3557–72. https://doi.org/10.1002/sim.7893.

29. Psioda MA, Xu J, Jiang QI, Yang Z, Ibrahim JG. Bayesian adaptive basket trial design using model averaging. Biostatistics. 2019:1–16.

30. Park JJH, Siden E, Zoratti MJ, Dron L, Harari O, Singer J, et al. Systematic review of basket trials, umbrella trials, and platform trials: a landscape analysis of master protocols. Trials. 2019;20(1):572. https://doi.org/10.1186/s13063-019-3664-1.

31. Gerber DE, Oxnard GR, Govindan R. ALCHEMIST: bringing genomic discovery and targeted therapies to early-stage lung cancer. Clin Pharmacol Ther. 2015;97(5):447–50. https://doi.org/10.1002/cpt.91.

32. Umićević Mirkov M, Coenen MJH. Pharmacogenetics of disease-modifying antirheumatic drugs in rheumatoid arthritis: towards personalized medicine. Pharmacogenomics. 2013;14:425–44.

33. Liu S, Lee JJ. An overview of the design and conduct of the BATTLE trials. Chin Clin Oncol. 2015;4(3):1–13.

34. Cunanan KM, Gonen M, Shen R, Hyman DM, Riely GJ, Begg CB, et al. Basket trials in oncology: a trade-off between complexity and efficiency. J Clin Oncol. 2017;35:271–3.

35. Zhong X, Cheng B, Qian M, Cheung YK. A gate-keeping test for selecting adaptive interventions under general designs of sequential multiple assignment randomized trials. Contemp Clin Trials. 2019;85:105830. https://doi.org/10.1016/j.cct.2019.105830.

36. Zhao YQ, Laber EB. Estimation of optimal dynamic treatment regimes. Clin Trials. 2014;11(4):400–7. https://doi.org/10.1177/1740774514532570.

37. Lavori PW, Dawson R. Adaptive treatment strategies in chronic disease. Annu Rev Med. 2008;59(1):443–53. https://doi.org/10.1146/annurev.med.59.062606.122232.

38. Nahum-Shani I, Ertefaie A, Lu XL, Lynch KG, McKay JR, Oslin DW, et al. A SMART data analysis method for constructing adaptive treatment strategies for substance use disorders. Addiction. 2017;112(5):901–9. https://doi.org/10.1111/add.13743.

39. Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A "SMART" design for building individualized treatment sequences. Annu Rev Clin Psychol. 2012;8(1):21–48. https://doi.org/10.1146/annurev-clinpsy-032511-143152.

40. Stroup TS, McEvoy JP, Swartz MS, Byerly MJ, Glick ID, Canive JM, et al. The National Institute of Mental Health clinical antipsychotic trials of intervention effectiveness (CATIE) project: schizophrenia trial design and protocol development. Schizophr Bull. 2003;29(1):15–31. https://doi.org/10.1093/oxfordjournals.schbul.a006986.

41. Managing Alcoholism in People Who Do Not Respond to Naltrexone. ClinicalTrials.gov. Available from: https://clinicaltrials.gov/ct2/show/NCT00115037. Accessed 16 Apr 2021.

42. Sequential Two-agent Assessment in Renal Cell Carcinoma Therapy: The START Trial. ClinicalTrials.gov. Available from: https://clinicaltrials.gov/ct2/show/NCT01217931. Accessed 16 Apr 2021.

43. Schulte PJ, Tsiatis AA, Laber EB, Davidian M. Q- and A-learning methods for estimating optimal dynamic treatment regimes. Stat Sci. 2014;29(4):640–61. https://doi.org/10.1214/13-STS450.

44. Moodie EEM, Richardson TS, Stephens DA. Demystifying optimal dynamic treatment regimes. Biometrics. 2007;63(2):447–55. https://doi.org/10.1111/j.1541-0420.2006.00686.x.

45. Cain LE, Robins JM, Lanoy E, Logan R, Costagliola D, Hernán MA. When to start treatment? A systematic approach to the comparison of dynamic regimes using observational data. Int J Biostat. 2010;6(2).

46. Sands BE, Sandborn WJ, Panaccione R, O'Brien CD, Zhang H, Johanns J, et al. Ustekinumab as induction and maintenance therapy for ulcerative colitis. N Engl J Med. 2019;381(13):1201–14. https://doi.org/10.1056/NEJMoa1900750.

47. De Benedetti F, Gattorno M, Anton J, Ben-Chetrit E, Frenkel J, Hoffman HM, et al. Canakinumab for the treatment of autoinflammatory recurrent fever syndromes. N Engl J Med. 2018;378(20):1908–19. https://doi.org/10.1056/NEJMoa1706314.

48. Reich K, Gooderham M, Thaçi D, Crowley JJ, Ryan C, Krueger JG, et al. Risankizumab compared with adalimumab in patients with moderate-to-severe plaque psoriasis (IMMvent): a randomised, double-blind, active-comparator-controlled phase 3 trial. Lancet. 2019;394(10198):576–86. https://doi.org/10.1016/S0140-6736(19)30952-3.

49. Mrowietz U, Bachelez H, Burden AD, Rissler M, Sieder C, Orsenigo R, et al. Secukinumab for moderate-to-severe palmoplantar pustular psoriasis: results of the 2PRECISE study. J Am Acad Dermatol. 2019;80(5):1344–52. https://doi.org/10.1016/j.jaad.2019.01.066.

50. Lebwohl M, Blauvelt A, Paul C, Sofen H, Węgłowska J, Piguet V, et al. Certolizumab pegol for the treatment of chronic plaque psoriasis: results through 48 weeks of a phase 3, multicenter, randomized, double-blind, etanercept- and placebo-controlled study (CIMPACT). J Am Acad Dermatol. 2018;79(2):266–276.e5.

51. Weinblatt ME, Baranauskaite A, Niebrzydowski J, Dokoupilova E, Zielinska A, Jaworski J, et al. Phase III randomized study of SB5, an adalimumab biosimilar, versus reference adalimumab in patients with moderate-to-severe rheumatoid arthritis. Arthritis Rheum. 2018;70(1):40–8. https://doi.org/10.1002/art.40336.

52. Chakraborty B, Moodie EEM. Statistical methods for dynamic treatment regimes. New York: Springer; 2013. (Statistics for Biology and Health).

53. Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, et al. Q-learning: a data analysis method for constructing adaptive interventions. Psychol Methods. 2012;17(4):478–94. https://doi.org/10.1037/a0029373.

54. Rothenberg ML, Carbone DP, Johnson DH. Improving the evaluation of new cancer treatments: challenges and opportunities. Nat Rev Cancer. 2003;3:303–9.

55. Foster JC, Taylor JMG, Ruberg SJ. Subgroup identification from randomized clinical trial data. Stat Med. 2011;30(24):2867–80. https://doi.org/10.1002/sim.4322.

56. Zhao L, Tian L, Cai T, Claggett B, Wei LJ. Effectively selecting a target population for a future comparative study. J Am Stat Assoc. 2013;108(502):527–39. https://doi.org/10.1080/01621459.2013.770705.

57. Janes H, Brown MD, Crager MR, Miller DP, Barlow WE. Adjusting for covariates in evaluating markers for selecting treatment, with application to guiding chemotherapy for treating estrogen-receptor-positive, node-positive breast cancer. Contemp Clin Trials. 2017;63:30–9. https://doi.org/10.1016/j.cct.2017.08.004.

58. Van Cutsem E, Lenz HJ, Köhne CH, Heinemann V, Tejpar S, Melezínek I, et al. Fluorouracil, leucovorin, and irinotecan plus cetuximab treatment and RAS mutations in colorectal cancer. J Clin Oncol. 2015;33(7):692–700. https://doi.org/10.1200/JCO.2014.59.4812.

59. Brown PM, Pratt AG, Isaacs JD. Mechanism of action of methotrexate in rheumatoid arthritis, and the search for biomarkers. Nat Rev Rheumatol. 2016;12:731–42.

60. Robinson WH, Mao R. Biomarkers to guide clinical therapeutics in rheumatology? Curr Opin Rheumatol. 2016;28:168–75.

61. Pouw J, Leijten E, Radstake T, Boes M. Emerging molecular biomarkers for predicting therapy response in psoriatic arthritis: a review of literature. Clin Immunol. 2020;211:108318. https://doi.org/10.1016/j.clim.2019.108318.

62. Aslibekyan S, Brown EE, Reynolds RJ, Redden DT, Morgan S, Baggott JE, et al. Genetic variants associated with methotrexate efficacy and toxicity in early rheumatoid arthritis: results from the treatment of early aggressive rheumatoid arthritis trial. Pharmacogenomics J. 2014;14(1):48–53. https://doi.org/10.1038/tpj.2013.11.

63. Senapati S, Singh S, Das M, Kumar A, Gupta R, Kumar U, et al. Genome-wide analysis of methotrexate pharmacogenomics in rheumatoid arthritis shows multiple novel risk variants and leads for TYMS regulation. Pharmacogenet Genomics. 2014;24(4):211–9. https://doi.org/10.1097/FPC.0000000000000036.

64. Kung TN, Dennis J, Ma Y, Xie G, Bykerk V, Pope J, et al. RFC1 80G>A is a genetic determinant of methotrexate efficacy in rheumatoid arthritis: a human genome epidemiologic review and meta-analysis of observational studies. Arthritis Rheum. 2014;66(5):1111–20. https://doi.org/10.1002/art.38331.

65. Morgan MD, Al-Shaarawy N, Martin S, Robinson JI, Twigg S, Magdy AA, et al. MTHFR functional genetic variation and methotrexate treatment response in rheumatoid arthritis: a meta-analysis. Pharmacogenomics. 2014;15(4):467–75. https://doi.org/10.2217/pgs.13.235.

66. Simon RM, Freidlin B. Adaptive signature design: an adaptive clinical trial design for generating and prospectively testing a gene expression signature for sensitive patients. Clin Cancer Res. 2005;11(21):7872–8.

67. Freidlin B, Jiang W, Simon R. The cross-validated adaptive signature design. Clin Cancer Res. 2010;16(2):691–8. https://doi.org/10.1158/1078-0432.CCR-09-1357.

68. Radmacher MD, McShane LM, Simon R. A paradigm for class prediction using gene expression profiles. J Comput Biol. 2002;9(3):505–11. https://doi.org/10.1089/106652702760138592.

69. Matsui S, Simon R, Qu P, Shaughnessy JD, Barlogie B, Crowley J. Developing and validating continuous genomic signatures in randomized clinical trials for predictive medicine. Clin Cancer Res. 2012;18(21):6065–73. https://doi.org/10.1158/1078-0432.CCR-12-1206.

70. Cherlin S, Wason JMS. Developing and testing high-efficacy patient subgroups within a clinical trial using risk scores. Stat Med. 2020.

71. Callegaro A, Spiessens B, Dizier B, Montoya FU, van Houwelingen HC. Testing interaction between treatment and high-dimensional covariates in randomized clinical trials. Biom J. 2017;59(4):672–84. https://doi.org/10.1002/bimj.201500194.

72. Wang J, Patel A, Wason JMS, Newcombe PJ. Two-stage penalized regression screening to detect biomarker–treatment interactions in randomized clinical trials. Biometrics. 2021:1–10.

73. Cherlin S, Wason JMS. Developing a predictive signature for two trial endpoints using the cross-validated risk scores method. 2020.

74. Senn S. Disappointing dichotomies. Pharm Stat. 2003;2(4):239–40. https://doi.org/10.1002/pst.90.

75. Altman DG, Royston P. The cost of dichotomising continuous variables. BMJ. 2006;332:1080.

76. Wason JMS, Mander AP, Eisen TG. Reducing sample sizes in two-stage phase II cancer trials by using continuous tumour shrinkage end-points. Eur J Cancer. 2011;47(7):983–9. https://doi.org/10.1016/j.ejca.2010.12.007.

77. Suissa S. Binary methods for continuous outcomes: a parametric alternative. J Clin Epidemiol. 1991;44(3):241–8. https://doi.org/10.1016/0895-4356(91)90035-8.

78. Suissa S, Blais L. Binary regression with continuous outcomes. Stat Med. 1995;14(3):247–55. https://doi.org/10.1002/sim.4780140303.

79. Wason JMS, Seaman SR. Using continuous data on tumour measurements to improve inference in phase II cancer studies. Stat Med. 2013;32(26):4639–50. https://doi.org/10.1002/sim.5867.

80. Wason JMS, Jenkins M. Improving the power of clinical trials of rheumatoid arthritis by using data on continuous scales when analysing response rates: an application of the augmented binary method. Rheumatology (Oxford). 2016.

81. McMenamin M, Barrett JK, Berglind A, Wason JMS. Employing latent variable models to improve efficiency in composite endpoint analysis. Stat Methods Med Res. 2020; e-published.

82. Lin CJ, Wason JMS. Efficient analysis of time-to-event endpoints when the event involves a continuous variable crossing a threshold. J Stat Plan Inference. 2020;208:119–29. https://doi.org/10.1016/j.jspi.2020.02.003.

83. Wason JMS, Seaman SR. Using continuous data on tumour measurements to improve inference in phase II cancer studies. Stat Med. 2013.

84. Lin C-J, Wason JMS. Improving phase II oncology trials using best observed RECIST response as an endpoint by modelling continuous tumour measurements. Stat Med. 2017.

85. McMenamin M, Barrett JK, Berglind A, Wason JMS. Sample size estimation using a latent variable model for mixed outcome co-primary, multiple primary and composite endpoints. arXiv:1912.05258. 2019.

86. Wason J, McMenamin M, Dodd S. Analysis of responder-based endpoints: improving power through utilising continuous components. Trials. 2020;21(1):427. https://doi.org/10.1186/s13063-020-04353-8.

87. McMenamin M, Grayling MJ, Berglind A, Wason JM. Increasing power in the analysis of responder endpoints in rheumatology: a software tutorial. medRxiv. 2020:2020.07.28.20163378.

88. McMenamin M, Berglind A, Wason JMS. Improving the analysis of composite endpoints in rare disease trials. Orphanet J Rare Dis. 2018;13(1):81. https://doi.org/10.1186/s13023-018-0819-1.

89. Hilgers R. Design and analysis of clinical trials for small rare disease populations. J Rare Dis Res Treat. 2016;1(3):53–60. https://doi.org/10.29245/2572-9411/2016/3.1054.

90. Simon R. Optimal two-stage designs for phase II clinical trials. Control Clin Trials. 1989;10(1):1–10. https://doi.org/10.1016/0197-2456(89)90015-9.

91. Grayling MJ, Mander AP. Two-stage single-arm trials are rarely reported adequately. arXiv. 2020.

Acknowledgements

Not applicable.

Funding

JMSW is funded by the Medical Research Council (MC_UU_00002/6 and MR/N028171/1). Funding bodies played no role in the design, collection, analysis or interpretation of the data.

Author information

Contributions

JMSW and MJG conceived the idea for the article. JMSW performed the literature review. MJG performed the analysis of the review data. TB drafted the 'SMART' section of the manuscript; SC drafted the 'Adaptive signature trial designs' section; LO and HZ drafted the 'Basket and umbrella designs' section; MJG and JMSW drafted the remaining sections. All authors contributed to rewriting the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to James M. S. Wason.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Grayling, M.J., Bigirumurame, T., Cherlin, S. et al. Innovative trial approaches in immune-mediated inflammatory diseases: current use and future potential. BMC Rheumatol 5, 21 (2021). https://doi.org/10.1186/s41927-021-00192-5

Keywords

  • Adaptive design
  • Basket design
  • Bayesian design
  • Composite endpoint
  • High-dimensional data
  • Routinely collected data
  • SMART trial
  • Umbrella design