Analysis of published results of clinical trials in regenerative medicine

by Alexey Bersenev on May 6, 2017

About two years ago I attempted to analyze published results of regenerative medicine clinical studies. It was a very hard and time-consuming thing to do, so I ditched my efforts. Recently, I was very happy to find a study by Tania Bubela’s group from the University of Alberta, published in Stem Cell Reports. It is the first detailed and most complete study of published results of clinical trials in the field of regenerative medicine.

The study captures all trials registered in international databases through the end of 2012. The authors allowed a 2.5-year time lag from registration to the beginning of their analysis in mid-2015. Besides cell-based regenerative medicine trials, the search criteria also included agents that (1) stimulate stem/progenitor cells in vivo and (2) mobilize stem/progenitor cells for regenerative purposes. The authors captured many parameters, but the most important were the number of publications (reported results), time to publication from trial completion, completeness (quality) of trial reporting, and the results of published trials.
To me, the most important methodological lesson was how to perform a comprehensive search for publications of trial results (a rough sketch of automating part of such a search follows below). It was done by searching three databases – PubMed, Embase, and Google Scholar – for:

  • trial ID (for example, an NCT number);
  • investigator names indicated in the registry;
  • key words from the trial title.

Using this search strategy, the authors identified 357 publications, 74.2% of which were linked to a trial registration ID.
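
For anyone who wants to replicate this kind of search, here is a minimal Python sketch of the PubMed part only (Embase and Google Scholar do not have comparable public search APIs). It queries NCBI’s E-utilities esearch endpoint; the trial ID, investigator name, and keywords in the example are placeholders of mine, not items from the Bubela dataset.

    # Minimal sketch: querying PubMed (via NCBI E-utilities) for the three query
    # types the authors describe. Requires the `requests` package. The example
    # queries are placeholders, not trials from the study.
    import requests

    ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_ids(term, retmax=100):
        """Return a list of PubMed IDs matching a free-text query."""
        params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
        resp = requests.get(ESEARCH_URL, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()["esearchresult"]["idlist"]

    # One query per element of the authors' strategy, for a hypothetical trial:
    queries = [
        "NCT00000000",                                    # trial registry ID (placeholder)
        "Smith J[Author] AND stem cells",                 # investigator name from the registry
        "mesenchymal stem cells AND spinal cord injury",  # key words from the trial title
    ]
    for q in queries:
        print(q, "->", len(pubmed_ids(q)), "PubMed records")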

The study contains a lot of interesting data that you can use to make graphs for your presentations. Here is one graph I made:

[Figure: graph based on data from Bubela et al., 2017]

Now, some interesting quotes and my comments.

About publication rate:

Of 1,052 novel trials in our dataset, 393 were completed, 81 were terminated or suspended, 22 were withdrawn, and the remaining 556 were in progress, including trials with unknown status. Of the trials completed, 179 (45.4%) had published results in 205 associated publications with English-language abstracts…

For clinical trials of novel stem cell interventions, a publication rate of 45.5% for completed trials is consistent with other studies of publication rates. However, it remains problematic because the stem cell field combines high patient expectations, patient advocacy, strong political support, and therapeutic promise with regulatory concerns over safety and limited evidence of efficacy…

Interestingly, “publicability” does not increase as trials progress from Phase 1 to Phase 3/4: the publication rate was 27.7% for Phase 1 trials, 27.2% for Phase 2, and 25% for Phase 3/4.

On trial results:

Safety was reported by 91.2% of publicly funded and 93.0% of industry-funded trials; a higher proportion of industry-funded than publicly funded trials reported positive efficacy (77.2% versus 67.2%); fewer industry-funded than publicly funded trials reported no efficacy (14.0% versus 22.7%); more publications advocating for further or continuing studies reported on industry-funded compared with publicly funded trials (82.4% versus 75.6%).

No surprises here. I was very curious to learn anything about the trial failure rate from phase to phase. According to Table 2 of the study, positive outcomes were reported in 125/167 (74.8%) of publications for Phase 1 and 1/2 trials, 59/93 (63.4%) for Phase 2 and 2/3, and 3/12 (25%) for Phase 3 and 3/4. However, no trial characteristic had a significant overall effect:

No trial characteristic had a significant overall effect on the publication of positive results, although phase III and phase III/IV trials were less likely to report positive results than phase I and phase I/II trials (odds ratio [OR] 0.11; p < 0.004).

It seems to me that “positive outcome” was taken as reported in the publications rather than assessed by Bubela’s group. A 63.4% success rate in Phase 2 looks too high to me. Two years ago, when I tried to assess trial outcomes myself, many reports containing mixed (some endpoints were met, some were not) or incomplete results were presented as “positive” by the authors of the publications. That is why I think these numbers are overestimated. These numbers also do not allow us to calculate a “failure rate” precisely, since everything “incomplete/mixed/inconclusive/failed” could fall under “other than positive.” According to Bubela’s study, trial status had a significant effect on the completeness of reporting (as I understood it, completeness of reporting got worse with the progression of trial phase).
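
As a sanity check, the phase-wise proportions and the quoted odds ratio can be roughly reproduced from the Table 2 counts above. The short sketch below is my own back-of-the-envelope calculation, not the authors’ statistical model (they likely used a regression), but the crude odds ratio comes out close to the reported 0.11.

    # Back-of-the-envelope check of the quoted Table 2 counts: positive publications
    # out of all publications, by trial phase. A crude calculation, not a
    # reproduction of the authors' model.
    counts = {
        "Phase 1 and 1/2": (125, 167),
        "Phase 2 and 2/3": (59, 93),
        "Phase 3 and 3/4": (3, 12),
    }

    for phase, (pos, total) in counts.items():
        print(f"{phase}: {pos}/{total} = {pos / total:.1%} positive")

    def odds(pos, total):
        return pos / (total - pos)

    # Crude odds ratio for late-phase versus early-phase positive reporting
    or_late_vs_early = odds(3, 12) / odds(125, 167)
    print(f"Odds ratio (Phase 3/3-4 vs Phase 1/1-2): {or_late_vs_early:.2f}")  # ~0.11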

On publication bias:

Our result of 67.3% publications reporting positive outcomes is concerning when combined with the early stage of most, and incomplete status of many, novel stem cell clinical trials.

On “unproven stem cell therapies”:

We identified 48 clinical trials with registration numbers on both ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP) from known clinics in North America, Eastern Europe, and Asia that offer unproven stem cell therapies (Table S1). Trials of adipose-derived stem cells or umbilical cord mesenchymal stem cells predominated for a range of conditions in both adult and pediatric participants. Most were recruiting or “enrolling by invitation.” None reported results.

This kind of analysis was performed for the first time. The obvious question here is how the authors identified clinics offering “unproven stem cell therapies” in the databases. The explanation is in the methods:

…we searched our dataset for the names of clinics that provide unproven stem cell therapies identified from the stem cell tourism literature (Li et al., 2014, Master et al., 2014, Master and Resnik, 2011, Levine, 2010, Lau et al., 2008, Turner and Knoepfler, 2016, Goldring et al., 2011, von Tigerstrom, 2008, Sipp and Turner, 2012, Ogbogu et al., 2013)…

It is very interesting to observe how datasets like the Knoepfler/Turner one on “stem cell clinics” have become a reference for other analytical studies.

Overall, this study provides unique and useful information for all of us in the field. I’d highly recommend reading it and utilizing its data in your work. It will be important to continue to track publications and trial results after 2012 and to see how the data evolve over time. Because we cannot calculate the “trial failure rate” precisely from the data in this study, it will be important to perform such an analysis in the future.
