It's always great to get positive results, but this is very much old news. The two drugs from this particular study, ipilimumab and nivolumab, have been around for a decade, and they've been used in combination to treat various other solid tumors. They really are breakthroughs, which is why James Allison won the 2018 Nobel Prize for the research behind ipi and why nivolumab (marketed as Opdivo) is among the top ten selling drugs in the world.
Note that the actual trial referenced by this article, CheckMate-651, failed its primary endpoint [1]. The positive result is only for a subpopulation of patients with high expression of one of the targeted proteins.
Anyway, if you haven't heard of immuno-oncology, it really is an amazing field with lots of great results. But this particular article is about very incremental work.
Someone I know was fortunate enough to get onto one of the clinical trials for ipilimumab - he had malignant melanoma that had spread to a nearby organ.
He's still alive 15 years later, even though the prognosis for melanoma that has spread is dire. That's, of course, a single piece of anecdata, but the power of biologicals in how we treat disease going forward cannot be overstated.
The linked ICR article includes this paragraph, which I just can't understand:
"The results were not statistically significant, but the immunotherapy combination, designed to spark the immune system into action against cancer, led to a positive trend in survival when compared to the ‘Extreme’ standard in a group of patients - those with tumours that had high levels of an immune marker called PD-L1"
How is this "not statistically significant" but at the same time "...survival rates... were the highest ever reported..." ? I don't get it.
One way of looking at "statistically significant" is that it's a statement about confidence in a measurement. Consider a couple of scenarios involving unfair coins:
In scenario 1, you have a coin that's weighted towards heads with a probability of 75%. You want to confirm that it really is an unfair coin, so you flip it 10 times and get 7 heads. In casual conversation, you or I might say that 75% is significantly more than the 50% you'd get with a normal coin. However, the result of your experiment isn't statistically significant: a fair coin could easily have given the same result by chance (7 or more heads out of 10 comes up about 17% of the time).
In scenario 2, you have an unfair coin that comes up heads with probability 50.1%. You flip it one billion times, and 501 million of those are heads. Now we have the opposite situation: the difference between 50% and 50.1% is insignificant in most contexts. But the results of this experiment are unambiguous; over 1 billion flips, it would be extremely unlikely for a fair coin to show that large of a skew. So the results of our experiment are statistically significant and we can conclude that the coin really is unfair.
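The two scenarios can be checked numerically. Here's a minimal sketch (not from the thread) using only Python's standard library: an exact binomial tail for scenario 1, and a normal approximation for scenario 2, where the exact sum is impractical.

```python
from math import comb
from statistics import NormalDist

# Scenario 1: how often does a FAIR coin give 7+ heads in 10 flips?
# Exact one-sided tail of Binomial(10, 0.5).
p1 = sum(comb(10, k) for k in range(7, 11)) / 2 ** 10
print(f"P(>=7 heads in 10 fair flips) = {p1:.3f}")  # ~0.172: not significant

# Scenario 2: 501 million heads in 1 billion flips of a fair coin.
# The binomial is effectively normal here, so a z-score tells the story.
n = 1_000_000_000
heads = 501_000_000
mean, sd = n * 0.5, (n * 0.5 * 0.5) ** 0.5
z = (heads - mean) / sd
print(f"z-score = {z:.0f} standard deviations")  # ~63: wildly significant
```

The first result is about 17%, nowhere near the conventional 5% threshold; the second sits roughly 63 standard deviations from a fair coin's expectation, which is as unambiguous as experiments get.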
If 100% of patients survive with the treatment, vs only 50% without, that's a high survival rate.
However, if the treatment only went to 5 patients, it's possible that you just happened to get 5 straight patients who would have survived anyway. The probability of that is pretty low (~3%) but you'd need more patients to conclusively prove it.
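The ~3% figure is just the chance that 5 independent patients, each with the 50% baseline survival assumed above, all happen to survive; a one-line check:

```python
# Assumption (matching the comment): 50% baseline survival, independent patients.
p_all_five = 0.5 ** 5
print(f"P(5 straight survivors by chance) = {p_all_five:.1%}")  # 3.1%
```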
There are many possible explanations. One is that the trial group broke the world record for survival time (for a trial group in this particular category, "terminally-ill head and neck [cancer] patients"), but the control group also broke the world record for survival time (for a control group in the same category). So even though they lived longer than usual, the difference from the control group is not very big.
Now we need a second explanation: if my possible explanation is correct, why did both groups live longer than usual?
Perhaps a similar study started a short time before this one and siphoned off the worst cases, so this trial got only patients who were slightly better off than usual?
Perhaps their definition of "terminally ill" is less strict than in other groups running a similar treatment, so their patients were slightly better off than usual?
Perhaps someone added a vitamin to the standard baseline treatment two years ago, and now everyone gets a few more months of survival?
Perhaps during the pandemic families had more time to take care of patients, which increased the visits to the hospital, which increased survival time?
I can keep making up explanations, but it's very difficult to know without reading the study and all the similar studies, and doing some very deep research.
The important conclusion is that without a control group, it's very difficult to be sure what the baseline is and whether the new drug (combination) made a difference. Preferably a double-blind randomized control group, not an unrelated bunch of guys in another city.
Are we possibly hitting the fact that no two cancers are the same?
(Warning: the following numbers are made up for the sake of argument - I don't know how it really looked in the trial.)
If the drug does nothing for, say, 95 percent of patients but completely cures the remaining 5 percent (those who have the "right tumor genome"), taken together the effect may be too small to reach the threshold of statistical significance. But for those five percent, it is a vast difference.
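To see how such a subgroup effect gets diluted, here's a back-of-the-envelope power calculation (all numbers invented, echoing the warning above, including an assumed 30% baseline survival): curing 5% of patients outright only moves pooled survival from 30% to 33.5%, and the standard two-proportion sample-size formula says you'd need thousands of patients per arm to detect that reliably.

```python
from statistics import NormalDist

p0 = 0.30                     # assumed baseline survival (made up)
p1 = 0.05 + 0.95 * p0         # 5% cured outright, the rest unchanged -> 0.335

z_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)  # two-sided alpha = 0.05
z_beta = NormalDist().inv_cdf(0.80)           # 80% power

# Standard sample-size formula for comparing two proportions.
n_per_arm = ((z_alpha + z_beta) ** 2
             * (p0 * (1 - p0) + p1 * (1 - p1))
             / (p1 - p0) ** 2)
print(f"pooled survival moves {p0:.1%} -> {p1:.1%}")
print(f"patients needed per arm for 80% power: {n_per_arm:.0f}")
```

With these made-up numbers the answer comes out around 2,800 patients per arm, far larger than most trials, even though the drug is a complete cure for the responsive subgroup.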
At med school, a professor destroyed a classmate who uttered this: "Despite the lack of statistical significance, these results are clinically meaningful."
You are contradicting yourself by saying that. I would say it is pointless and unprofessional.
You could say, "Although the results don't support the hypothesis/approach/treatment, we will keep trying because we badly want this to work."
[1]: https://www.onclive.com/view/nivolumab-ipilimumab-misses-os-...
https://www.icr.ac.uk/news-archive/immunotherapy-combination...
BurningFrog: So I think they're saying this was a better result than the standard of care, but there is more than a 5% probability that it was by luck.