Last week, we noticed a small item on Streetsblog: “Where Walk Score falls short.” Because we’re keenly interested in walkability, and routinely use Walk Score to benchmark walkable places, we clicked the link.
It took us to a blog entry from Mariela Alfonzo asking “Does walk score walk the walk?” Dr. Alfonzo has been doing walkability research for a number of years; she collaborated with Chris Leinberger in applying something called the Irvine/Minnesota Inventory to measure walkability in Washington DC, and has since launched a consulting firm, “State of Place,” that sells walkability measures.
Her critique of Walk Score is based on an analysis of three instances in Washington DC suburbs where places have high walk scores but poor walkability, as measured by her firm’s proprietary “State of Place” scoring system. She argues that it’s “irresponsible and potentially discriminating” to use Walk Score “to make planning, private investment or public funding or policy decisions.”
That’s a pretty strong claim. And, frankly, one that we profoundly disagree with.
That’s not to say that we think Walk Score is perfect. It isn’t. At City Observatory, we’ve never hesitated to critique its limitations; we flagged a problem with its city rankings last year, for example.
To their credit, Matt Lerner and the team at Walk Score have always been utterly open and transparent about the limits of their data and algorithms, and have made changes to address those concerns (substituting street-smart distance measures for straight line calculations, for example). As a result of these steady improvements and despite its limitations, Walk Score has done more to advance interest in and awareness of walkability than any–and perhaps all–of the academic research on the subject. And that’s one of the great things about Redfin, the new owners of Walk Score: despite the fact that they’re a private, profit-making company, they continue to provide Walk Score for free, and make its workings completely transparent to everyone. And they’ve gladly worked with all comers to make it better and to advance the field. They’re real models of how to move markets and do good.
As we’ve noted at City Observatory, for too long our transportation discussions have been subtly but powerfully slanted by the dominance of car-oriented system metrics: average daily traffic, level of service, hours of delay. What’s long been missing from urban planning and transportation investment decisions are clear metrics that characterize the role of walkability in contributing to livability and other public policy goals. Walk Score helps level the quantitative playing field.
Walk Score is simple, transparent, ubiquitous and free. It rates every address in the United States on a scale of 0-100, lets users drill down to a neighborhood-level map that shows what’s driving the walk score for a particular location, and doesn’t cost a dime to use. Walk Score serves up over 20 million scores a day, and more than 30,000 web sites use its services. Our own research, and that of others, shows that Walk Score has a measurable impact on home values. Walk Score works, it gets used, it has been validated independently, and it has changed real estate markets. Everyone who cares about walkability owes its creators, including Alan Durning at Sightline Institute, who brainstormed the idea, and Matt Lerner, Mike Mathieu and the team at Walk Score, who turned it into a reality, a huge debt of gratitude for their work.
Through her firm, Mariela Alfonzo is pitching a proprietary methodology for assessing walkability. For a fee, they’ll come to your community and gather data on-site, cataloging 280 different aspects of the streetscape, from the numbers and types of businesses, to crosswalk markings and materials, building setbacks, the presence of loose dogs, unpleasant smells and graffiti. (You can read the entire list on pages 75 to 84 of this PDF.)
It’s fine to suggest that there might be other variables that affect walkability. And yes, sidewalks, landscaping, building facades, road widths and traffic levels all matter. None of this is to argue that Walk Score is perfect, or can’t be improved, or that other measures can’t augment its assessment of walkability in particular locations. For example, the kind of work that Jan Gehl has pioneered shows how we can bring meticulous data collection and a design sensibility to the challenge of improving walkability and public spaces. There’s still a lot more to learn in this field.
It’s great that the folks at State of Place have developed their own methodology for assessing walkability. It undoubtedly will be useful in many situations. These metrics, however, are highly complex, extremely labor intensive to gather, and consequently very expensive. And they simply haven’t been gathered in enough places to have the kind of track record that would let an objective third party assess their utility.
Is State of Place Really Better?
Let’s ask a really fundamental question: Does all the added effort of collecting more than 280 data points for each street segment add significantly to our understanding of which places are walkable and which aren’t? For a real-world test of that question, I examined a recent report, one of just a handful using the State of Place methodology to score a neighborhood. The data come from a study that State of Place conducted, with help from Portland State University planning students, in the suburb of Tigard, Oregon. The study focused on the Tigard Triangle, a neighborhood bounded by three major highways. The summary finding of the SOP report was that the Tigard Triangle had an average State of Place score of 33, compared to a State of Place score of 66 for the city’s old downtown neighborhood.
So how does that compare with what you could learn from Walk Score? While Walk Score’s main feature is the ability to look up walkability for a particular address, one of the great things about Walk Score is that they also use locally designated neighborhood boundaries to map and summarize walkability. So for Tigard, you can easily compare the average walkability of the downtown (Neighborhood Area 5) with that of the Tigard Triangle (Neighborhood Area 9). And Walk Score produces almost identical results to those reported by State of Place: downtown’s walk score is 62; the triangle’s walk score is 38.
Neighborhood        State of Place Score    Walk Score
Downtown Tigard     66                      62
Tigard Triangle     33                      38
In a way, that’s a heartening result: It implies that the definitions of walkability used by the two methodologies are, at some level, broadly congruent. But if you’re being asked to pay for State of Place, you might want to ask what it is providing that Walk Score doesn’t, and whether it’s worth it.
Different Horses for Different Courses
There’s plenty of room for many different, complementary and multi-scaled measures of walkability. Detailed, site-level assessments may make sense for some purposes. Broad system-level measures that encompass entire neighborhoods, cities and metropolitan areas, and which are comparable, and available, for the whole nation, also play a role.
Rather than a tendentious critique of Walk Score, this field would be better served by acknowledging that Walk Score is a terrific resource that provides a foundation for understanding and promoting walkability as a public policy objective, and that it creates a market for the more nuanced, and frankly much more expensive, kinds of tools that State of Place wants communities to buy.
There’s a growing appreciation that walkability is one of the most important and valuable aspects of great urban places. For a long time, we’ve had to make the case for walkability relying mostly on the lyrical writings of Jane Jacobs and others, who described how a vibrant streetscape makes places livelier, more interesting, safer and more economically productive. While that message resonated with a faithful few, it didn’t carry much weight in the numbers-and-engineering-dominated world of policy. What’s been missing is a robust, universal and quantitative set of measures that puts walking on an equal footing (sorry about the pun) with all of the quantitative measures used to promote auto mobility. If you’re really a passionate supporter of walkability, you ought to celebrate that accomplishment and build on it, rather than taking needless and poorly grounded swipes at its limitations.
It’s poor form in any business to get ahead by bad-mouthing your competitors. It’s even worse in this situation, when over-the-top claims about Walk Score being “irresponsible” actually undercut a shared common interest in promoting greater awareness of and knowledge about walkability. There’s plenty of work to be done in this field, and room for many different and complementary sets of tools: let’s work together to build up the field, rather than needlessly tearing down the one tool that’s actually worked.
Note: This post was revised October 29 to correct the spelling of Mariela Alfonzo’s last name; we’ve also referred to her as Dr. Alfonzo, at her request. She has submitted a letter disagreeing with this commentary, which we make available to City Observatory readers here: Dr. Alfonzo’s Reply.