“Zeroes and ones will take us there!”
– Jesus Jones (1993)

I’ve always loved the data part of strategy.
But I’ve also learned to respect its flaws.

In clinical research, we use data to understand how and where patients have been recruited before, and to predict what might happen next.
Some swear by the mean, others the median.
[And yes, there’s still the mode—but I only accept Depeche.]

The truth? There is no “right way.”
Every scenario is unique.
Strategy isn’t about finding one right number;
it’s about viewing the data through multiple lenses.

Recently, I was developing a proposal built on a handful of similar trials.*
One dataset showed an unusually high average enrollment rate—
which, on paper, meant shorter timelines and lower costs.
But a few other trials had much longer timelines, and much lower rates.
Something felt… amiss.
[Really, I just wanted to use the term amiss.]

Sure enough, the median enrollment rate was far lower—
a difference between one patient per month and one patient every four months.
The culprit? An outlier investigator enrolling seven times more patients than the rest.
A great outlier, but an outlier nonetheless.
That single site inflated the average
and put me at risk of over-promising on future performance.
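
For anyone who likes to see it in numbers: here’s a toy sketch in Python, using made-up rates rather than the actual proposal data, showing how one hot site drags the mean well above the median.

```python
from statistics import mean, median

# Hypothetical site-level enrollment rates (patients per site per month).
# Five typical sites, plus one outlier enrolling roughly 7x the others.
rates = [0.25, 0.20, 0.30, 0.25, 0.25, 1.75]

print(f"Mean rate:   {mean(rates):.2f} patients/site/month")
print(f"Median rate: {median(rates):.2f} patients/site/month")
# The mean (0.50) comes out double the median (0.25): plan on the mean,
# and the timeline looks twice as fast as most sites can actually deliver.
```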

In the end, after discussing risk tolerance with the study teams,
I used a rate closer to the median—
a longer timeline, but one that reflected the data’s true distribution.

My takeaway? Both rates were valuable.
The average highlighted where high-performers could accelerate enrollment.
The median provided more realistic expectations across the full network of potential sites.
Together, they made for a better-informed, data-driven decision,
one that didn’t require losing any sleep.
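
And if you want to see why the choice matters downstream, here’s the same toy math carried into a timeline, again with invented numbers (a hypothetical enrollment target and site count, not figures from the proposal):

```python
from statistics import mean, median

# Same hypothetical rates as above (patients per site per month).
rates = [0.25, 0.20, 0.30, 0.25, 0.25, 1.75]

target_patients = 120   # hypothetical enrollment target
planned_sites = 40      # hypothetical number of sites in the proposal

for label, rate in [("mean", mean(rates)), ("median", median(rates))]:
    months = target_patients / (planned_sites * rate)
    print(f"Planning on the {label} rate: ~{months:.0f} months to enroll")
# Mean-based plan: ~6 months. Median-based plan: ~12 months.
# Same data, very different promise.
```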

“Similar” always deserves an asterisk—study design, drug class, competition, investigator excitement, and a dozen other factors all shape the outcome. But that’s a post for another day.