Much of our research in breast cancer involves data and analytics; data are collected, and analysis drives our work.

Increasingly, we deploy mathematical models for data analysis and work to publish the results, along with conclusions, to help others learn from our research and apply relevant pieces to their work.

Generally, the results are peer-reviewed before publication in an attempt to ensure we have not missed anything in our analytics or conclusions.

We require our research projects to be “open source”: we mandate publication and dissemination of the data and the underlying model on which our analytics and conclusions were based.

Why? Because we want the data and the model to be questioned. We want people poking holes in it to ensure it stands up, as we think it will.

Mistakes happen, and if they go unnoticed, they can make a substantial difference in research findings. Let me share an example: while reviewing a project proposal from a major research university, we found an embedded error in their formula. If we had missed that error, the project would have resulted in the opposite conclusion to the hypothesis they were attempting to prove. The error was subtle: a misplaced bracket in the formula.

Because this is not standard practice, I wanted to answer two common questions we get about our process:

Why? Why release data and model formulae?
Simple: We want to be as accurate as possible because the ultimate end is to drive the therapeutics for treating cancer. Human lives are at stake, and we want to approach this process with a degree of urgency. But we want to be right. So, we have as many eyes as possible cast on our work.

Those who do this successfully want diversity of thought, disciplines, and opinions in developing conclusions, particularly conclusions that require a great degree of investment, such as cancer treatment modalities and drugs. Failure to invite critical review, or worse, operating under the belief that an expert’s hypothesis is infallible, helps no one.

In cancer research, we work to unravel the “causality” versus “correlation” questions. We work to clearly understand what is causal because that will lead us to the correct path for treatments if not cures. Mistaking correlation for causality leads down the wrong path.

Why publish data that disproves your hypothesis?
In research, the mantra is publish or perish. Researchers benefit from proving a given hypothesis because follow-on funding often arrives. We require publication of conclusions either way; sometimes “no” is a helpful answer.

Featured news

Targeting effective treatments for triple-negative breast cancer

The JKTG Foundation recently awarded funding to Laura Heiser, Ph.D., Associate Professor and Vice Chair of Biomedical Engineering at the Oregon Health & Science University (OHSU) School of Medicine, to develop a prototype multiscale model designed to predict therapeutic responses of tumor ecosystems – a new frontier in breast cancer research.
Jayne Koskinas Ted Giovanis
Foundation for Health and Policy

PO Box 130
Highland, Maryland 20777

Media contact: 202.548.0133