Today’s guest post comes from our colleague Heywood Sanders, Professor at the University of Texas San Antonio, and author of Convention Center Follies.


Lots of people make guesses about the future. So do cities. And cities often employ “expert” consultants, who presumably have a wealth of knowledge and expertise to inform their guesses, and provide more accurate and precise forecasts of the future.

But those forecasts don’t always prove accurate and effective. And consultants may be prone to telling city leaders what they’d prefer to hear, sometimes leading to dire consequences for the cities and their residents.

One cautionary tale took place in Austin, Texas. In the mid-1990s, Austin was considering expanding its convention center, so it hired a consultant, Charles H. Johnson, to produce two separate reports forecasting the effects of such a project. Both studies depicted a glowing future if the convention center expansion were built: double the convention events, double the attendance, double the hotel room nights. The city, presumably at least partly on the strength of these figures, went ahead with it.

Credit: Earl McGehee, Flickr

Almost two decades later, the city wants to do it again—but its own analysis shows that Austin has yet to see all the predicted benefits from the first round of expansions.

This year, Austin has once again contracted with Johnson to make a recommendation about building an even bigger convention center. Not surprisingly, the 2015 report is rosy about the expansion. But it also provides actual attendance figures for the first round of expansions, allowing us to compare Johnson’s projections from the 1990s with what has actually happened.

The comparison is not flattering. Where the 1990s reports forecast 98 annual conventions and trade shows, the center managed to land just 40 in 2013. Johnson had also forecast that expansion would more than double attendance, from 150,000 to 329,000. But the expanded Austin center housed only 186,675 convention and trade show attendees in 2013, the most recent year in the new report. Hotel room nights likewise fell far short of projections.

On top of that, it turns out that not only does history repeat itself, but so do dubious projections of the future. The 2015 report suggests that in "Year 8" of the newly proposed convention center expansion, there will be 311,000 hotel room nights—roughly what the 1990s analysis projected for the first round of expansion.

Nor is Austin the only city to find itself in this position. Johnson himself also worked on a report for a proposed Boston convention center, suggesting that it would produce 794,000 hotel room nights by 2012. While the center that eventually was built was somewhat smaller than the one Johnson analyzed (at 516,000 rather than 600,000 square feet), it generated only a fraction of the business: just under 265,000 hotel room nights in 2014. And Dallas Magazine detailed some of the projections and accounting shenanigans surrounding that city’s convention expansions earlier this year.

You might think that someone in Austin—perhaps the Austin Convention Center director, the city manager and staff, or the city’s mayor and council—would bother to check on the track record of convention center projections, and those of Johnson in particular, before commissioning a study. You might ask how Johnson came up with his projection for the currently proposed expansion. And you might wonder how an “expert” consultant gets to be considered “expert.”

There is clearly a danger of "selection bias" at work here. The municipalities that commission economic impact studies and forecasts are looking for a justification to build these facilities. Typically the studies are sponsored by convention and visitor bureaus, or other special-purpose entities with a strong vested interest, and those sponsors choose the consultants who conduct them. Confronted with a choice between consultants who invariably produce high numbers and "go forward" recommendations, and other consultants who are more pessimistic and cautious, those commissioning the studies will likely choose the more optimistic firms. Over time, this weeds out the pessimists, and only optimists are left. This theory has some support in academic research.

When cities commission feasibility studies, and especially when the results of those studies will guide the use of millions of dollars of public money, there ought to be some reason to believe that those reports will be accurate. Part of that is looking at the track record of similar studies by the same authors and using the same methodology. Cities and voters should be able to evaluate the people being hired both for their reliability—how close their projections are to observed outcomes—and their bias, or whether their projections consistently over- or under-shoot actual results.
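The reliability-and-bias check described above is simple enough to compute. Here is a minimal sketch, using the projected-versus-actual figures cited in this post (Austin events and attendance in 2013, Boston room nights in 2014); the metric definitions are our own illustration, not any official evaluation methodology.

```python
# Hypothetical scoring of a consultant's track record, using figures
# cited in this post. "Reliability" here is the mean absolute error as
# a share of the actual outcome; "bias" is the mean signed error, where
# a positive value means projections systematically overshoot.

pairs = [
    # (label, projected, actual)
    ("Austin events, 2013", 98, 40),
    ("Austin attendance, 2013", 329_000, 186_675),
    ("Boston room nights, 2014", 794_000, 265_000),
]

def score(pairs):
    # Relative error of each projection against the observed outcome.
    errors = [(projected - actual) / actual for _, projected, actual in pairs]
    reliability = sum(abs(e) for e in errors) / len(errors)
    bias = sum(errors) / len(errors)
    return reliability, bias

reliability, bias = score(pairs)
print(f"mean absolute error: {reliability:.0%}, mean bias: {bias:+.0%}")
```

On these three data points, every projection overshoots, so the bias is large and positive—exactly the pattern a city could check for before signing the next contract.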

Without some assurance of reasonable accuracy on these fronts, it’s hard to know why cities should continue to base major economic development investment decisions on these often faulty studies.