Suppose you are in charge of a government initiative with a large budget and lots of visibility. Suppose that in launching the initiative you make a splash around the country. You host several town halls promoting it. You take every chance to talk to the media to extol the virtues of what you believe is a change-making program. You oversell it, because you believe there is so much promise in it. Why not? Better to rally the troops and get people behind the idea. What is the harm in a little hype?
But then, as time goes by, you start noticing some problems. Remember that large budget? Your ministry takes a long time to actually sort out who’s going to be entrusted with it. You want to emphasize competence, but, you know, politics and all. You’ve got to be a little flexible. Then the lucky recipients of that funding, who had no trouble meeting your ambitious visions with bold proposals, can’t keep up with their own plans. Spending on the program’s activities falls far behind schedule. Investments are delayed, and those delays are rationalized. Fanciful projections of outcomes are not met. Arguably, some of the metrics chosen to monitor progress were flawed to begin with. A few years into the initiative’s presumed life cycle, it becomes obvious that it is not delivering. All of this is pointed out to you by an independent review. The facts are laid out, publicly, for everyone to see. Your plan sucked to begin with, you got carried away, and there’s no way in hell this looks like a good use of public funds.
What do you do?
For a lot of people this would be time for a little introspection. Reassess. Cut your losses. Pivot.
But not you. You, my obstinate friend, decide to double down on the program. You not only renew your commitment to it, but you rebrand it to make it sound even bolder. You work hard to carve another decent chunk of change out of the government’s budget to continue the initiative. With the same partners. Following more or less the same format. Still lacking credible metrics.
Moreover, you hire a consulting company to “evaluate” the program, but you don’t want one of those reports that are real downers. Let’s focus on the positives. We need to show everyone how good this can be, because that is the strength of your ideas: They have real potential. Maybe the consultants can identify results that may not have happened yet (it takes time!) but that your partners anticipate will happen. What’s wrong with having a little faith in your partners? They are on the ground. They are your fellow travellers on this journey. They want the best for Canada. So you publish the consultant’s evaluation, which proudly displays how much has been achieved, or people anticipate will be achieved, on numerous fronts. Why bother disaggregating facts from expectations? Perception is reality, after all.
While this may sound far-fetched, this is the story of the Innovation Superclusters Initiative, which launched in 2017.
Do you remember now? How it struggled to get off the ground. How fanciful the program rhetoric and cluster proposals were. How overhyped its projections have been from the start. How poorly run it has been, and how it has lacked tangible, measurable results. How, once beneficiaries were reliant on federal funding, they actively lobbied the government for more even as they underperformed. How the supercluster “economic analysis report” by a big-five consulting company, for reasons decidedly unrelated to good survey methodology, reported responses that combined observed outcomes with anticipated outcomes, leaving the reader unsure what is real and what is imaginary. And how, after apparently fading, it has been renewed with a fresh federal commitment, no longer as Canada’s Innovation Superclusters Initiative but as “Global Innovation Clusters.”
You wouldn’t think you could do this in Canada, my friend. We are a serious country. But happen it did. The question is: how? And, more importantly, why?