There should be a requirement that the results be published.
There will be studies that have a null result (the intervention tested had no effect), but even that is useful information, in that it can save everyone else from repeating the same mistake. It might be hard to get those sorts of studies into a peer-reviewed publication, but even so, writing them up and putting them on the CIHR website would be useful, if only for the academic discipline of having to write up what you did, why you did it, how you looked for an effect, whether or not it worked, and what that means.
As it stands, people are doing research and then passively or actively burying the results, when either “nothing happened” (as above) or “what actually happened wasn’t what we wanted to happen”. In the latter case, the failure to publish/share just reinforces the echo chamber. For example, some researchers “want” trans interventions to “work”, and they won’t publish anything that shows their beliefs aren’t supported by actual results. Hence, you’ll read that there’s nothing published to say trans interventions are harmful or useless. That lack of published evidence is then used as evidence to support continuing to do that which could well be harmful or useless.
Failure to publish this stuff is problematic, either way.
Simple fix. Don’t hand out money for research without requiring a “report” at the end.
There are absolutely reports generated from all of this work. Some good, some not so good. The vast majority of this work is done by graduate students. Until fairly recently, much of it would have been paper-based and difficult to capture easily (it could likely be scanned, but that would be a monumental and tedious effort).
Most of this is now captured through submission portals at CIHR, NSERC, and SSHRC. All of it is reviewed to some degree: to check progress, for auditing purposes (leveraged dollars, confirming that contributions were made), to count the number of HQP (highly qualified personnel) trained, and so on.
Some projects with industry collaborators making cash and in-kind contributions might have publication restrictions, but definitely not indefinitely (in health, anyway... some defense-related things are more sensitive).
Nice background, although I would have trouble believing that physical paper was used for even one of the 37k studies that appear in the CIHR dataset: they're all post-2011.
The other issue, which might not be evident to people who have not worked in academia or in grant reviews, is that the awarded project often morphs and changes over the years, or produces multiple papers related to the award but not exactly what was set out. Such is the reality of R&D.
Look at the papers published by the research group in the years following the award (those awards pay for grad students, lab supplies, surveys, etc.); see the sketch after this comment for one way to pull those up. These publications, along with the theses of grad students, would be a portion of the outcomes. Interim and final reports submitted to the agencies are only overviews of all of that output.
Interim reports could contain enabling data or disclosures, which makes it prudent to keep them out of public view and is why only the abstracts are made public. Honestly, avoiding too much disclosure in interim reports would also be wise from an IP-protection standpoint.
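On the point about looking up what a research group published after an award: one rough way to do this is to query Crossref's public API for works whose metadata acknowledge the funder. This is only an illustrative sketch, not an official reporting channel; the funder lookup, the date window, and the row limit below are all assumptions.

```python
# Illustrative sketch: list recent works whose Crossref metadata acknowledge CIHR funding.
import requests

# Look up the Crossref funder record for CIHR rather than hardcoding an ID.
funders = requests.get(
    "https://api.crossref.org/funders",
    params={"query": "Canadian Institutes of Health Research"},
    timeout=30,
).json()["message"]["items"]
funder_id = funders[0]["id"]  # assumes the first hit is the right funder

# Pull a page of works published since 2021 that declare this funder.
works = requests.get(
    f"https://api.crossref.org/funders/{funder_id}/works",
    params={"filter": "from-pub-date:2021-01-01", "rows": 20},
    timeout=30,
).json()["message"]["items"]

for work in works:
    title = work.get("title") or ["<untitled>"]
    print(work.get("DOI"), "-", title[0])
```

This only surfaces papers that declared their funding in the metadata, so it understates the true output; it's a starting point for connecting awards to publications, not a substitute for agency reporting.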
Submission portals and database management have changed a lot in just the 10 years since I graduated. But even in 2014, most of the reports we prepared were in various Word docs and PDFs, submitted as attachments.
Not everything is some malicious conspiracy. Things can be convoluted simply because they are complex.
Adding vast archives of unstructured data to integrated CMSs is trivial. I was personally working on related projects a decade ago, and now, with genAI tools, it's beyond easy (a minimal sketch follows). With $16 billion at stake, there's no excuse for obstruction.
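For what it's worth, here is a minimal sketch of the kind of bulk ingestion being described, assuming an archive of report PDFs with an extractable text layer; pypdf and SQLite's FTS5 full-text index are stand-ins for whatever extraction tooling and CMS an agency would actually use.

```python
# Minimal sketch: index an archive of report PDFs into a searchable SQLite store.
import pathlib
import sqlite3

from pypdf import PdfReader  # pip install pypdf

db = sqlite3.connect("reports.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS reports USING fts5(filename, body)")

# "reports/" is a hypothetical directory of final/interim report PDFs.
for pdf_path in pathlib.Path("reports").glob("*.pdf"):
    try:
        reader = PdfReader(pdf_path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
    except Exception as exc:
        # Scanned pages with no text layer would need OCR instead of this path.
        print(f"skipped {pdf_path.name}: {exc}")
        continue
    db.execute("INSERT INTO reports (filename, body) VALUES (?, ?)", (pdf_path.name, text))

db.commit()

# Full-text query across everything ingested, e.g. all reports mentioning a null result.
for (name,) in db.execute("SELECT filename FROM reports WHERE reports MATCH 'null result'"):
    print(name)
```

The hard part, as the reply below notes, is the labor of cleaning, labeling, and structuring what comes out, not the ingestion itself.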
Trivial? Time-consuming and labor-intensive. The biggest cost of creating gen AI was the labor-intensive data labeling. Gen AI also has limits on how much it would help going forward (though I agree it could be useful for accelerating the work).
That $16B number is a total accumulated over years, not an annual figure. I wish we were putting that much into R&D every year.
Too much oversight and nitpicking is as bad as too little when it comes to outcomes from R&D. We complain about our lackluster innovation outputs and productivity; maybe we get in the way too often. I am a big believer in the Fast Grants approach of putting money to use in innovation development.
Outrage, of course. I drive in traffic and haven’t been arrested for road-rage yet. So I think I am qualified to say you study the road with a precision I admire.
This is very troubling. How does André Pelletier's assertion that "There are absolutely reports generated from all of this work" square with David's finding? I'd be happy to see this looked into further (at a minimum, a response from CIHR).
That you picked a transgender study for an example is a bit unfortunate because it is a subject that provokes its own biases.
For the larger picture, there should be aggregate data on the 39,000 studies: Status, report, etc.
> For the larger picture, there should be aggregate data on the 39,000 studies: Status, report, etc
There is no such data available: if CIHR tracks results internally, they're not telling us about it. There might be reports on some individual studies out there somewhere, but they're not connected in any way to public-facing CIHR data.
Did you pick that specific topic because of its contentiousness?
Also, this paragraph is eye-roll-inducing: "As most of the participants appear to have financial and professional interests in the research outcome, I can’t avoid wondering whether there might be at least the appearance of bias."
Researchers in a field absolutely have professional interests in said field...why else would they be dedicating years of study to something?
I wish people would actually work in academic research, instead of looking for conspiracies. https://www.theverge.com/2024/10/25/24279491/everything-is-a-conspiracy-theory-when-you-dont-know-how-anything-works
> Did you pick that specific topic because of its contentiousness?
Would it make any difference if I did?
> Researchers in a field absolutely have professional interests in said field
Sure. But robust research systems should control for that bias. That, for instance, is why proper medical studies declare their endpoints *before* beginning the study, and why changing endpoints midway is considered a black mark on the study. And that is also exactly why a policy requiring full publication of the outcomes from all funded research would have a huge impact on the credibility of the academic industry.
If an expensive study that simply buries its results doesn't set off alarm bells, then there's simply no point in having alarm bells.
And you assume that the results are buried. Any actual proof of such?
Proof? This isn't a criminal court. But aren't you at least a little curious about why a study that was reported complete in 2021, and was led by people who love publishing their work, would suddenly go dark?
Curious? A little. But I also think this particular case is a culture war issue more than a systemic issue with regards to science funding.
It wouldn't make a difference as much as it points to your own biases.
You raise excellent questions. I wonder if some of them are cloaked because of patent or competition issues. However, if that's the case, they shouldn't be using public monies.
I'm sure that many of them are, in fact, kept "under wraps" for proprietary reasons - they're intended to feed new and important industries. I have no problem with government funds being used for that research because of all the potential economic benefits. But the vast majority of funded studies were never expected to be commercialized, and there should be a lot more oversight over those.
There is oversight. It might not be public, but the people operating these programs do review grant applications and progress, and they audit financials.
Allowing for more public disclosure might be useful, and would be much easier with today's technologies.
I hope you're correct. And I agree that at the very least, the audits should be public.
https://www.nserc-crsng.gc.ca/NSERC-CRSNG/Reports-Rapports/audits-verifications/index_eng.asp
Programs are audited at the overall level. I've never worked for the tri-council, but I am sure some of the audits involve probing specific project controls.
Perhaps the Auditor General might be persuaded to add this to the long list...
Well you're certainly welcome to file a "Concern or allegation" through the Auditor General's office - https://www.oag-bvg.gc.ca/internet/English/web-inquiry.html
Thank you! I was unaware of the mechanism by which a citizen might do that.